Jacy

676 karma · 98 comments

Thanks for this summary. While there are many disanalogies between historical examples and current events, I think it's easy for us to neglect the historical evidence and try to reinvent the wheel.

Jacy

I didn't downvote, and the comment is now at +12 votes and +8 agreement (not sure where it was before), but my guess is it would be more upvoted if it were worded more tentatively (e.g., "I think the evidence suggests climate change poses...") and had links to the materials you reference (e.g., "[link] predicted that the melting of the Greenland ice cap would occur..."). There may also be object-level disagreements (e.g., some think climate change is an existential risk for humans in the long run or in the tail risks, such as scenarios where geoengineering might be necessary).

The EA Forum has idiosyncratic voting habits very different from most of the internet, for better or worse. You have to get to know its quirks, and I think most people should try to focus less on votes and more on the standards for good internet discussion. The comments I find most useful are rarely the most upvoted.

Jacy

The collapse of FTX may be a reason for you to update towards pessimism about the long-term future.

I see a lot of people's worldviews updating this week based on the collapse of FTX. One view I think people in EA may be neglecting to update towards is pessimism about the expected value of the long-term future. Doing good is hard. As Tolstoy wrote, "All happy families are alike; each unhappy family is unhappy in its own way." There is also Yudkowsky: "Value is fragile"; Burns: "The best-laid plans of mice and men often go awry"; von Moltke: "No plan survives contact with the enemy"; etc. The point is that there are many ways for unforeseen problems to arise and suffering to occur, usually many more than there are ways for unforeseen successes to arise. I think this is an underlying reason why charity cost-effectiveness estimates from GiveWell and ACE went down over the years: the optimistic case was clear, but the reasons to doubt it took time to appreciate.
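
A toy simulation of that asymmetry, with made-up parameters (nothing here is from GiveWell or ACE data): if each project faces unforeseen factors that are more often harmful than helpful, realized value systematically falls short of early estimates.

```python
# Toy model (made-up parameters): projects face unforeseen shocks that are
# negative 75% of the time, so realized value tends to fall below estimates.
import random

random.seed(0)

def realized_value(initial_estimate, n_factors=10, p_negative=0.75):
    """Initial estimate plus unforeseen shocks, most of which are negative."""
    value = initial_estimate
    for _ in range(n_factors):
        shock = random.expovariate(1.0)  # magnitude of each surprise
        value += -shock if random.random() < p_negative else shock
    return value

outcomes = [realized_value(100) for _ in range(10_000)]
print(f"initial estimate: 100, mean realized value: {sum(outcomes) / len(outcomes):.1f}")
```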

I think this update should be particularly strong if you think EA, or more generally the presence of capable value-optimizers (e.g., post-AGI stakeholders who will work hard to seed the universe with utopia), is one of the main reasons for optimism.

I think the strongest rebuttal to this claim is that the context of doing good in the long-term future may be very different from today's context, such that self-interest, myopia, cutting corners, etc. would either be solved (e.g., an AGI would notice and remove such biases) or would merely reduce the creation of positive value rather than increase negative value as the collapse of FTX did (e.g., a utopia-seeding expedition may collapse, but that is unlikely to involve substantial harm to current people in the way cryptocurrency investors were harmed). I don't think this rebuttal weakens the evidence much: long-term trajectories may depend a lot on initial values, such problems could easily persist in superintelligent systems, and there will be many routes to s-risks (e.g., the failure of a utopia-seeding expedition may lead to dystopia-seeding rather than simply failing to spread at all).

Of course, if you were already disillusioned with EA or if this sort of moral catastrophe was already in line with your expectations, you may also not need to update in this direction.

Jacy

"EA's are, despite our commitments to ethical behaviour, perhaps no more trustworthy with power than anyone else."

I wonder if "perhaps no more trustworthy with power than anyone else" goes a little too far. I think the EA community made mistakes that facilitated FTX's misbehavior, but that is only one small group of people. Many EAs have substantial power in the world and have continued to be largely trustworthy (and thus less newsworthy!), and I think evidence like our stronger-than-average explicit commitments to use power for good and the critical reflection happening in the community right now suggests we are probably doing better than average, even though, as you rightly point out, we're far from perfect.

Well, my understanding now is that it is structurally very different (not just reputationally or culturally different) from publicly traded stock: the tiny trading volume, the guaranteed price floor, and probably other things. If it were similar, I would probably have much less of that concern. This does imply that standard net worth calculations for Sam Bankman-Fried were poor estimates, and I put a decent chance on Forbes/Bloomberg/etc. publicly changing their methodology because of this (maybe a 7% chance? the base rate is very low).
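
As a minimal sketch (with hypothetical numbers, not actual FTT figures) of why "holdings times market price" can badly overstate what an illiquid token could actually fetch: with tiny trading volume, a large sale pushes the price down as it proceeds. The linear 10%-per-tranche price-impact model is purely my illustrative assumption.

```python
# Hypothetical numbers and a crude price-impact model, purely for
# illustration of market value vs. realizable value for an illiquid token.

def realizable_value(holdings, spot_price, daily_volume, impact_per_tranche=0.10):
    """Sell one day's volume at a time; each tranche knocks 10% off the price."""
    proceeds, remaining, price = 0.0, holdings, spot_price
    while remaining > 0:
        sold = min(remaining, daily_volume)
        proceeds += sold * price
        remaining -= sold
        price *= 1 - impact_per_tranche  # price decays as supply hits the market
    return proceeds

holdings, spot = 100_000_000, 25.0  # hypothetical stake and spot price
print(f"market value:     ${holdings * spot:,.0f}")
print(f"realizable value: ${realizable_value(holdings, spot, daily_volume=2_000_000):,.0f}")
```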

I've updated a little toward this being less concerning. Thanks.

That makes sense.

  • To clarify, I wasn't referring to leverage (which I think most would say counts as fraud because of FTX's claims to the contrary) in the comment above, just the fragility and illiquidity of the token itself.
  • My understanding is that some EA leadership knew much of the committed wealth was in FTT (at least, I knew, and I know some others who knew), and I worry that a few knew enough about cryptocurrency to know how fragile and illiquid that situation was (I did not, but I should have looked into it more) but allowed that to go unmentioned or undershared.
  • The point is just that these are all serious concerns that I think have been obscured by the public EA statements of outrage, and if there is such an independent investigation into knowledge of fraud, I think these concerns should be investigated too.
Jacy

I strongly agree with this. In particular, critiques of EA in relation to these events seem much less focused on the recent fraud than EAs' own defenses are. I think we are choosing the easiest thing to condemn and distance ourselves from, in a very concerning way. Deliberately or not, our focus on outrage at the recent fraud distracts onlookers and community members from more serious underlying concerns that, given their likelihood, should weigh more heavily on our behavior.

The two most pressing to me are the possibilities (i) that EAs knew about serious concerns with FTX based on major events in ~2017–2018, as recently described by Kerry Vaughan and others, as well as more recent concerns, and (ii) that EAs acted as if we had tens of billions committed for our projects even though many of us knew that money was held by FTX and FTX-affiliated entities, in particular in FTX Token (FTT), a very fragile, illiquid asset that could arguably never be sold at anywhere close to its market value, which arguably makes statements of tens of billions based on market value unjustified and misleading.

[Edit: Just to be clear, I'm not referring to leverage or fraud with point (ii); I know this is controversial! Milan now raises these same two concerns in a more amenable way here: https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx?commentId=3ZNGqJEpQSrDuRpSu]

Jacy

This is great data to have! Thanks for collecting and sharing it. I think the Sioux Falls (Metaculus underestimate of the 48% ban support) and Swiss (Metaculus overestimate of the 37% ban support) factory farming ban proposals are particularly interesting opportunities to connect this survey data to policy results. I'll share a few scattered, preliminary thoughts to spark discussion, and I hope to see more work on this topic in the future.

  • These 2022 results seem to be in line with the very similar surveys conducted by Rethink Priorities in 2019, which I found very useful, but I don't know if those results have been shared publicly. Will you be sharing that data too? I know it's been eagerly anticipated, and Sentience Institute has held off on similar work while waiting for it. I'm not sure if that 2019 data is now seen as just a pilot for this 2022 data collection?
  • In addition to 2017 and 2020, Sentience Institute asked these questions in the 2019 and 2021 Animals, Food, and Technology (AFT) surveys with similar results.
  • In 2017, we preregistered credible intervals and informally solicited estimates from many others. The data was surprisingly ban-supporting relative to priors, which may be a more important takeaway than any post-hoc explanation. I didn't preregister any CIs for the 2019 or 2022 RP results. I think these drops in ban support are around what I'd expect, but it's very hard to say in hindsight, especially with other variation (e.g., different outcome scales, presumably different samples).
  • The Sentience Institute AFT survey also has questions with pro/con information, e.g., "Some people think that we should ban all animal farming and transition to plant-based and cultured foods, to reduce harm to humans and animals. Others think that we should keep using animals for food, to provide the conventional meat consumers are used to eating. Where would you place yourself on this scale?" (wording based on the GSS). That wording seems to elicit much stronger ban support than this new wording (though take this with a large grain of salt due to other variation in the surveys), which seems to make sense as it is much more ban-supporting than the ban-opposing "it is wrong to kill animals" and "right to eat meat if they choose" wordings. Concretely, on a 1–6 support scale, we found a mean of 4.12 (95% CI: 4.04–4.21) for "ban all animal farming" with our nationally representative sample in 2021 (see the sketch after this list for this kind of summary statistic). I think it's fair to say that's much higher despite also having pro/con information, and I think it's an important qualification for interpreting the 2022 RP results that readers of this post may otherwise miss.
  • Social scientists have long asked "Is there really any such thing as public opinion?" (Lewis 1939), and I think the majority answer has been some version of "public opinion does not exist" (e.g., Blumer 1948, Bourdieu 1972). There are many interesting wordings to consider: simple vs. complex, ban-supporting vs. ban-opposing, socially desirable and acquiescing vs. socially undesirable and anti-acquiescing, politically left- vs. right-favored, financially incentivized, politically engaged, etc. All question wordings matter, and none are objectively correct or objectively biased. I think we may disagree on this point because you say some question wordings "are biased towards answering 'Yes'...", though you may mean some subjective standard of bias, such as distance from likely counterfactual ballot measure results. Some wordings more naturally jibe with what people have in mind when they see survey results (I prioritize simple wordings in part for this reason), but ideally we share the exact survey wording alongside percentages or scores whenever possible to ensure that clarity.
  • If you have time: what was the sample (MTurk, Prolific, Civis Analytics, Ipsos Omnibus, KnowledgePanel, etc.), what were the demographics, and was it weighted for representativeness, and if so, how?
  • What exactly do you mean by "strong" in "strong basis for more radical action"? One operationalization I like is: All things considered, I think these survey and ballot results should update the marginal farmed animal advocate towards more radical approaches relative to their prior. I'd love to know if you agree.
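
As a quick sketch of the kind of summary statistic quoted in the bullets above (a mean with a 95% CI on a 1–6 support scale): the responses below are made up for illustration, and the normal approximation is my simplification.

```python
# Mean support on a 1-6 scale with a normal-approximation 95% CI.
# The responses here are hypothetical, not actual survey data.
import math

responses = [4, 5, 3, 6, 4, 4, 2, 5, 4, 6, 3, 5, 4, 4, 5]  # made-up 1-6 ratings

n = len(responses)
mean = sum(responses) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in responses) / (n - 1))
margin = 1.96 * sd / math.sqrt(n)  # 95% CI half-width

print(f"mean: {mean:.2f} (95% CI: {mean - margin:.2f}-{mean + margin:.2f})")
```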

Well done again on this very interesting work! [minor edits made to this comment for clarity and typo fixes]

Jacy

Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the "too much money" discourse and the subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting on their significant room for more funding. (There has been some pushback, and my impression that the "too much money" discourse has been more prevalent may not be representative.)

I've often heard the marginal ETG amount (the $X per year at which a normal EA employee should be indifferent between EA employment and earning to give) placed well above $1,000,000, and I see many working on megaproject ideas designed to absorb as much funding as possible. I think many would say these choices make sense in a community with >$30 billion in funding but not in one with <$5 billion, just as ballparks to put numbers on things. Many of us are in fortunate positions to pivot quickly and safely, but for many others, especially those from underprivileged backgrounds, this collapse in funding could be completely disenchanting. For some, it already has been. I hope we'll be more cautious, skeptical, and humble in the future.
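
One toy way to formalize why that indifference point should scale with total community funding; the logarithmic-returns assumption and the constant below are mine, purely for illustration, not a claim from the discussion above.

```python
# A toy formalization (my assumption, not a claim from the comment above):
# with logarithmic returns to total community funding F, the marginal value
# of a dollar scales as 1/F, so the donation $X at which earning to give
# matches direct work scales roughly linearly with F.

def indifference_donation(total_funding, direct_work_value=4e-5):
    """Donation per year whose marginal value (roughly X / total_funding
    under log returns) equals one unit of direct-work value; the constant
    is picked only so that $30B of funding implies a ~$1.2M threshold."""
    return direct_work_value * total_funding

for funding in (30e9, 5e9):
    x = indifference_donation(funding)
    print(f"community funding ${funding / 1e9:.0f}B -> indifference donation ~${x:,.0f}/year")
```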

[Edit 2022-11-10: This comment started with "I'm grateful for and impressed by all the FTX/Alameda team has done, and", which I intended as an extension of compassion in a tense situation and an acknowledgment that the people at FTX and Alameda have done great things for the less fortunate (e.g., their grants to date, choosing to earn to give in the first place), regardless of the current situation and any possible fraud or other serious misbehavior. I still think this is important, true, and often neglected in crisis, but it distracts from the point of this comment, so I've cut it from the top and noted that here. Everyone involved and affected has my deepest sympathy.]

Thanks for going into the methodological details here.

I think we view "double-counting" differently, or I may not have been sufficiently clear about how I handle it. If we take a particular war as a piece of evidence that fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future but seems 75% explained by "Historical Harms" and 25% by "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I'd love future work to go into more analytical detail (though, as I say in the post, I expect more knowledge per effort from empirical research).
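
A minimal sketch of that allocation, using the same numbers as the example above: one piece of evidence's overall weight is split across categories in proportion to how much each category explains it, so nothing is counted twice.

```python
# Split one evidence item's overall weight across categories by explanatory
# share; the numbers match the war example in the comment above.

def split_evidence(total_weight, explanation_shares):
    """Allocate a single evidence weight across categories in proportion
    to how much each category explains the evidence."""
    assert abs(sum(explanation_shares.values()) - 1.0) < 1e-9
    return {cat: total_weight * share for cat, share in explanation_shares.items()}

war = split_evidence(-8, {"Historical Harms": 0.75, "Disvalue Through Intent": 0.25})
print(war)  # {'Historical Harms': -6.0, 'Disvalue Through Intent': -2.0}
```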

I also think we view "reasons for negative weight" differently. To me, the existence of analogues to intrusion does not make intrusion a non-reason; it just means we should also weigh those analogues. Perhaps they are equally likely and equal in absolute value if they obtain, in which case they would cancel, but usually there is some asymmetry. Similarly, duplication and nesting are factors that are more negative than positive to me, for instance because we may discount and neglect the interests of these minds given how different and separated from the mainstream they are (e.g., nested minds are probably not out in society campaigning for their own interests because they would need to do so through the nest mind). I think you allude to this, but I wouldn't dismiss it merely because we will eventually learn how experiences work: we have very good neuroscientific and behavioral evidence of animal consciousness in 2022 and still exploit animals.

Your points on interaction effects and nonlinear variation are well taken and good things to account for in future analyses. In a back-of-the-envelope estimate, I think we should just assign values numerically and remember that we are free to vary those numbers widely; of course, there are hard-to-account-for biases in such assignments, and I think the work of GJP, QURI, etc. can lead to better estimation methods.
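
One crude sketch of "assign values numerically and vary them widely": draw each category's weight from a wide interval and report the distribution of the total rather than a point estimate. The category names and ranges below are placeholders I made up, not the post's actual numbers.

```python
# Crude Monte Carlo over a back-of-the-envelope total: sample each category's
# weight from a wide interval. Categories and ranges are placeholders.
import random

random.seed(0)

weight_ranges = {
    "Historical Harms": (-9, -3),
    "Disvalue Through Intent": (-4, -1),
    "Analogues to Intrusion": (-3, 2),
}

totals = sorted(
    sum(random.uniform(lo, hi) for lo, hi in weight_ranges.values())
    for _ in range(10_000)
)
print(f"median total: {totals[5000]:.1f}, "
      f"90% interval: ({totals[500]:.1f}, {totals[9500]:.1f})")
```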
