A few years ago, I read The Life You Can Save by Peter Singer. I felt deeply inspired. The idea that charities could be compared using evidence and reason, the thought that I could save many lives without sacrificing my own happiness: I found these ideas meaningful, and I hoped they would give my life a sense of purpose (even if other factors were likely also at play).
I became an Intro Fellow and read more. I went to conferences and retreats. I now lead my university group.
But I’m frustrated.
- I’m now asked to answer for the actions of a man who defrauded millions of people, and for the purchase of castles and $2000+ coffee tables.
- I’m now associated with predatory rationalists.
- I’m now told to spend my life reducing existential risk by 0.00001 percent to protect 10^18 future humans, and forced to watch money get redirected from the Global South to AI researchers.[1]
This is not what I signed up for.
I used to be proud to call myself an EA. Now, when I say it, I also feel shame and embarrassment.
I will take the Giving What We Can pledge, and I will stay friends with the many kind EAs I’ve met.
But I no longer feel represented by this community. And I think a lot of others feel the same way.
Edit log (2/6/23, 12:28pm): Edited the second item of the list, see RobBensinger's comment.
[1] This is not to say that longtermism is completely wrong—it’s not. I do, however, think "fanatical" or "strong" longtermism has gone too far.
Is influencing the far future really tractable? How is x-risk reduction not a Pascal's mugging?
I agree that future generations are probably too neglected right now. But I just don't find myself entirely convinced by the current EA answers to these questions. (See also.)
Thank you for that link, I find it genuinely heartening. I definitely don't want to ever discount the incredible work that EA does do in GHD, and the many, many lives that it has saved and continues to save in that area.
I can still see where the OP is coming from, though. When I first started following EA and donating many years ago, it was primarily a GHD movement focused on giving to developing countries in an effective manner, based on the results of rigorous, peer-reviewed trial evidence. I was happy and proud to present it to anyone I knew.
But now, I see a movement whose core is vastly more concerned with AI risk than GHD. As an AI risk skeptic, I believe this is a mistake based on incorrect beliefs and reasoning, and by its nature it lacks the rigorous evidence I expect from the GHD work. (You're free to disagree with this, of course, but it's a valid opinion and one a lot of people hold.) If I endorse and advocate for EA as a whole, a large fraction of the money brought in by that endorsement will end up going to causes I consider highly ineffective, whereas if I advocate for specific GHD causes, 100% of it will go to things I consider effective. So the temptation is to leave EA and just advocate directly for GHD orgs.
My current approach is to stick around, take the AI arguments seriously, and attempt to write in-depth critiques of what I find incorrect about them. But it's a lot of effort and very hard work to write, and it's very easy to get discouraged and think it's pointless. So I understand why a lot of people are not bothering.