A few years ago, I read The Life You Can Save by Peter Singer. I felt deeply inspired. The idea that charities could be compared using evidence and reason, the thought that I could save many lives without sacrificing my own happiness: I found these ideas meaningful, and I hoped they would give my life a sense of purpose (even if other factors were likely also at play).
I became an Intro Fellow and read more. I went to conferences and retreats. I now lead my university group.
But I’m frustrated.
I’m now asked to answer for the actions of a man who defrauded millions of people, and for the purchase of castles and $2000+ coffee tables.
I’m now associated with predatory rationalists.
I’m now told to spend my life reducing existential risk by .00001 percent to protect 10^18 future humans, and forced to watch money get redirected from the Global South to AI researchers.[1]
This is not what I signed up for.
I used to be proud to call myself an EA. Now, when I say it, I also feel shame and embarrassment.
I will take the Giving What We Can pledge, and I will stay friends with the many kind EAs I’ve met.
But I no longer feel represented by this community. And I think a lot of others feel the same way.
Edit log (2/6/23, 12:28pm): Edited the second item of the list, see RobBensinger's comment.
[1] This is not to say that longtermism is completely wrong—it’s not. I do, however, think "fanatical" or "strong" longtermism has gone too far.
Is influencing the far future really tractable? How is x-risk reduction not a Pascal's mugging?
I agree that future generations are probably too neglected right now. But I just don't find myself entirely convinced by the current EA answers to these questions. (See also.)
I appreciate the feedback and I think it's helpful to think about what reference point we're using. I stand by what I'm saying, though, for a few reasons:
1) No cause has any prior claim to the funds, but they're zero-sum, and I think the counterfactual probably is more GH&D funding. Maybe there are funders who are willing to donate only to longtermist causes, but I think the model of a pool of money being split between GH&D/animal welfare and longtermism/x-risk is somewhat fair: e.g., OpenPhil splits its money between these two buckets, and a lot of EAs defer to the "party line." So "watching money get redirected from the Global South to AI researchers" is a true description of much of what's happening. (More indirectly, I also think EA's weirdness and futurism turn off many people who might otherwise donate to GiveWell. This excellent post provides more detail. I think it's worth thinking about whether packaging global health with futurism and movement-building expenses justified by post hoc Pascalian "BOTECs" really does more good than harm.)
2) Even if you don't buy this, I believe making GH&D the baseline is, to some extent, the point of EA (Duncan Sabien says this is true of the drowning child thought experiment too). It says "don't pay an extra $5,000/year for rent to get a marginally nicer apartment, because the opportunity cost could be saving a life." At least, this is how Peter Singer frames it in The Life You Can Save, the book that originally got me into EA.
Also, this is basically what GiveWell does by using GiveDirectly as a lower bound that their top charities have to beat. They recognize that if the alternative is giving to GD, giving to Malaria Consortium or New Incentives does in practice "redirect money from the wallets of the world's poorest villagers." I agree with their framing that this is an appropriate bar to expect their top charities to clear.