Claude 3.7's coding ability forced me to reanalyze whether there will be a SWE job for me after college. This pushed me to re-explore AI safety and its arguments, and I have been re-radicalized toward the safety movement.
What I can’t understand, though, is how contradictory so much of Effective Altruism (EA) feels. It hurts my head, and I want to explore my thoughts in this post.
EA seems far too friendly toward AGI labs and feels completely uncalibrated to the actual existential risk (from an EA perspective) and the probability of catastrophe from AGI (p(doom)). Why aren’t we publicly shaming AI researchers every day? Are we too unwilling to be negative in our pursuit of reducing the chance of doom? Why are we friendly with Anthropic? Anthropic actively accelerates the frontier, currently holds the best coding model, and explicitly aims to build AGI—yet somehow, EAs rally behind them? I’m sure almost everyone agrees that Anthropic could contribute to existential risk, so why do they get a pass? Do we think their AGI is less likely to kill everyone than that of other companies? If so, is this just another utilitarian calculus that we accept even if some worlds lead to EA engineers causing doom themselves? What is going on...
I suspect that many in the AI safety community avoid adopting the "AI doomer" label. I also think that many AI safety advocates quietly hope to one day work at Anthropic or other labs and will not publicly denounce a future employer.
Another possibility is that Open Philanthropy (OP) plays a role. Their former CEO now works at Anthropic, and they have personal ties to its co-founder. Given that most of the AI safety community is funded by OP, could there be financial incentives pushing the field more toward research than toward anti-AI-lab advocacy? This is just a suspicion, and I don't have high confidence in it, but I'm looking for opinions.
Spending time in the EA community does not calibrate me to the urgency of AI doomerism or the actions that should follow from it. Watching For Humanity's AI Risk Special documentary made me feel far more emotionally in tune with p(doom) and AGI timelines than engaging with EA spaces ever has. EA feels business-as-usual when it absolutely should not. More than 700 people attended EAG, most of whom accept x-risk arguments, yet AI protests in San Francisco still draw fewer than 50 people. I bet most of those protesters aren't even EAs.
What are we doing?
I’m looking for discussion. Please let me know what you think.
I don't think you're alone at all. EY and other prominent rationalists (like LW webmaster Habryka) have said for quite a while that they believe EA has been net-negative for human survival. EleutherAI's Connor Leahy recently released the strongly EA-critical Compendium, which has been praised by many leading longtermists, particularly FLI's Max Tegmark. And Anthropic's recent antics, like calling for recursive self-improvement to beat China, are definitely souring a lot of previously unconvinced people in those spaces on OP. From personal conversations, I can tell you PauseAI in particular is increasingly hostile to EA leadership.
Yeah, IIRC EY does consider himself to have been net-negative overall so far, hence the whole "death with dignity" spiral. But I don't think one can claim his role has been more negative than OPP/GV deciding to bankroll OpenAI and Anthropic (at least setting aside the indirect consequences of him having influenced the development of EA in the first place).