I didn't downvote, and the comment is now at +12 votes and +8 agreement (not sure where it was before), but my guess is it would be more upvoted if it were worded more equivocally (e.g., "I think the evidence suggests climate change poses...") and had links to the materials you reference (e.g., "[link] predicted that the melting of the Greenland ice cap would occur..."). There also may be object-level disagreements (e.g., some think climate change is an existential risk for humans in the long run or in the tail risks, such as where geoengineering might be necessary).
The EA Forum has idiosyncratic voting habits very different from most of the internet, for better or worse. You have to get to know its quirks, and I think most people should try to focus less on votes and more on the standards for good internet discussion. The comments I find most useful are rarely the most upvoted.
I see a lot of people's worldviews updating this week based on the collapse of FTX. One view I think people in EA may be neglecting to update towards is pessimism about the expected value of the long-term future. Doing good is hard. As Tolstoy wrote, "All happy families are alike; each unhappy family is unhappy in its own way." There is also Yudkowsky: "Value is fragile"; Burns: "The best-laid plans of mice and men often go awry"; von Moltke: "No plan survives contact with the enemy"; etc. The point is, there are many ways for unforeseen problems to arise and suffering to occur, usually many more than the ways for unforeseen successes to arise. I think this is an underlying reason why charity cost-effectiveness estimates from GiveWell and ACE went down over the years: the optimistic case was clear, but reasons for doubt took time to appreciate.
I think this update should be particularly strong if you think EA, or more generally the presence of capable value-optimizers (e.g., post-AGI stakeholders who will work hard to seed the universe with utopia), is one of the main reasons for optimism.
I think the strongest rebuttal to this claim is that the context of doing good in the long-term future may be very different from today's context, such that self-interest, myopia, cutting corners, etc. would either be solved (e.g., an AGI would notice and remove such biases) or merely lead to a reduction in the creation of positive value rather than an increase in negative value, as occurred with the collapse of FTX (e.g., a utopia-seeding expedition may collapse, but this is unlikely to involve substantial harm to current people in the way FTX harmed cryptocurrency investors). I don't think this rebuttal reduces the strength of the evidence much, because long-term trajectories may depend a lot on initial values, because such problems could easily persist in superintelligent systems, and because there will be many routes to s-risks (e.g., the failure of a utopia-seeding expedition may lead to dystopia-seeding rather than to no spread at all).
Of course, if you were already disillusioned with EA or if this sort of moral catastrophe was already in line with your expectations, you may also not need to update in this direction.
"EA's are, despite our commitments to ethical behaviour, perhaps no more trustworthy with power than anyone else."
I wonder if "perhaps no more trustworthy with power than anyone else" goes a little too far. I think the EA community made mistakes that facilitated FTX misbehavior, but that is only one small group of people. Many EAs have substantial power in the world and have continued to be largely trustworthy (and thus less newsworthy!), and I think we have evidence like our stronger-than-average explicit commitments to use power for good and the critical reflection happening in the community right now suggests we are probably doing better than average—even though, as you rightly point out, we're far from perfect.
Well, my understanding now is that it is very structurally different (not just reputationally or culturally different) from publicly traded stock: the tiny trading volume, the guaranteed price floor, probably other things. If it were similar, I think I would probably have much less of that concern. This does imply standard net worth calculations for Sam Bankman-Fried were poor estimates, and I put a decent chance on Forbes/Bloomberg/etc. making public changes to their methodology because of this (maybe 7% chance? very low base rate).
I've updated a little toward this being less concerning. Thanks.
That makes sense.
I strongly agree with this. In particular, the critiques of EA in relation to these events seem much less focused on the recent fraud than EAs are in their defenses. I think we are choosing the easiest thing to condemn and distance ourselves from, in a very concerning way. Deliberately or not, our focus on outrage at the recent fraud distracts onlookers and community members from the more serious underlying concerns, which weigh more heavily on our behavior given their likelihood.
The two most pressing to me are the possibilities (i) that EAs knew about serious concerns with FTX based on major events in ~2017-2018, as recently described by Kerry Vaughan and others, as well as more recent concerns, and (ii) that EAs acted as if we had tens of billions committed for our projects even though many of us knew that money was held by FTX and FTX-affiliated entities, in particular FTX Token (FTT), a very fragile, illiquid asset that could arguably never be sold at anywhere close to current market value, which arguably makes statements of tens of billions based on market value unjustified and misleading.
[Edit: Just to be clear, I'm not referring to leverage or fraud with point (ii); I know this is controversial! Milan now raises these same two concerns in a more amenable way here: https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx?commentId=3ZNGqJEpQSrDuRpSu]
This is great data to have! Thanks for collecting and sharing it. I think the Sioux Falls (Metaculus underestimate of the 48% ban support) and Swiss (Metaculus overestimate of the 37% ban support) factory farming ban proposals are particularly interesting opportunities to connect this survey data to policy results. I'll share a few scattered, preliminary thoughts to spark discussion, and I hope to see more work on this topic in the future.
Well done again on this very interesting work! [minor edits made to this comment for clarity and to fix typos]
Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the "too much money" discourse and subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting on their significant room for more funding. (There has been some pushback, and my impression that the "too much money" discourse has been more prevalent may not be representative.)
I've often heard the marginal ETG amount (the point at which a typical EA employee should be indifferent between EA employment and donating $X per year) put at well above $1,000,000, and I see many working on megaproject ideas designed to absorb as much funding as possible. I think many would say that these choices make sense in a community with >$30 billion in funding, but not in one with <$5 billion in funding, just as ballparks to put numbers on things. I think many of us are in fortunate positions to pivot quickly and safely, but for many, especially those from underprivileged backgrounds, this collapse in funding would be completely disillusioning. For some, it already has been. I hope we'll be more cautious, skeptical, and humble in the future.
[Edit 2022-11-10: This comment started with "I'm grateful for and impressed by all the FTX/Alameda team has done, and", which I intended as an extension of compassion in a tense situation and an acknowledgment that the people at FTX and Alameda have done great things for the less fortunate (e.g., their grants to date, choosing to earn to give in the first place), regardless of the current situation and any possible fraud or other serious misbehavior. I still think this is important, true, and often neglected in crisis, but it distracts from the point of this comment, so I've cut it from the top and noted that here. Everyone involved and affected has my deepest sympathy.]
Thanks for going into the methodological details here.
I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I'd love future work to go into more analytical detail (though as I say in the post, I expect more knowledge per effort from empirical research).
I also think we view "reasons for negative weight" differently. To me, the existence of analogues to intrusion does not make intrusion a non-reason. It just means we should also weigh those analogues. Perhaps they are equally likely and equal in absolute value if they obtain, in which case they would cancel, but usually there is some asymmetry. Similarly, duplication and nesting are factors that are more negative than positive to me, such as because we may discount and neglect the interests of these minds because they are more different and more separated from the mainstream (e.g., the nested minds are probably not out in society campaigning for their own interests because they would need to do so through the nest mind—I think you allude to this, but I wouldn't dismiss it merely because we'll learn how experiences work, such as because we have very good neuroscientific and behavioral evidence of animal consciousness in 2022 but still exploit animals).
Your points on interaction effects and nonlinear variation are well-taken and good things to account for in future analyses. In a back-of-the-envelope estimate, I think we should just assign values numerically and remember to feel free to widely vary those numbers, but of course there are hard-to-account-for biases in such assignment, and I think the work of GJP, QURI, etc. can lead to better estimation methods.
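As a rough illustration of what I mean by assigning numbers and then varying them widely, here is a minimal Monte Carlo sketch in Python; every distribution and parameter value below is a hypothetical placeholder I made up for illustration, not an estimate from this discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical back-of-the-envelope factors, each given a deliberately wide
# distribution rather than a single point estimate.
probability_of_success = rng.beta(2, 8, size=n)                 # uncertain probability
value_if_success = rng.lognormal(mean=2.0, sigma=1.5, size=n)   # heavy-tailed upside
cost = rng.lognormal(mean=1.0, sigma=0.5, size=n)               # uncertain cost

value_per_cost = probability_of_success * value_if_success / cost

print("median value per unit cost:", np.median(value_per_cost))
print("5th-95th percentile:", np.percentile(value_per_cost, [5, 95]))
```

Propagating wide intervals like this, rather than multiplying point estimates, is one way to keep the "feel free to widely vary those numbers" caveat from getting lost in the final figure.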
Thanks for this summary. While there are many disanalogies between historical examples and current events, I think it's easy for us to neglect the historical evidence and try to reinvent wheels.