In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement:
> Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term". Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda Askell puts it, "I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything". (Her ex-husband, William MacAskill, is an originator of the movement.)
This led multiple people on Twitter to call out how bizarre this is.
In my eyes, there is a large and obvious connection between Anthropic and the EA community. In addition to the ties mentioned above:
- Dario, Anthropic’s CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.
- Amanda Askell was the 67th signatory of the GWWC pledge.
- Many early and senior employees identify as effective altruists and/or previously worked for EA organisations.
- Anthropic has a "Long-Term Benefit Trust" which, in theory, can exercise significant control over the company. The current members are:
  - Zach Robinson - CEO of the Centre for Effective Altruism.
  - Neil Buddy Shah - CEO of the Clinton Health Access Initiative, former Managing Director at GiveWell, and speaker at multiple EA Global conferences.
  - Kanika Bahl - CEO of Evidence Action, a long-term grantee of GiveWell.
- Three of EA’s historically largest funders (Dustin Moskovitz, Sam Bankman-Fried and Jaan Tallinn) were early investors in Anthropic.
- Anthropic has hired a "model welfare lead" and seems to be the company most concerned about AI sentience, an issue that's discussed little outside of EA circles.
- On the Future of Life podcast, Daniela said, "I think since we [Dario and her] were very, very small, we've always had this special bond around really wanting to make the world better or wanting to help people" and "he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008."
- The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here).
- Their first company value states, "We strive to make decisions that maximize positive outcomes for humanity in the long run."
It's perfectly fine if Daniela and Dario choose not to personally identify with EA (despite their many associations), and I'm not suggesting that Anthropic needs to brand itself as an EA organisation. But I think it’s dishonest to suggest there aren’t strong ties between Anthropic and the EA community. When asked, they could simply say something like, "Yes, many people at Anthropic are motivated by EA principles."
It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of negative associations the EA brand has in some circles. It's not clear to me that this is even in their immediate self-interest. I think it’s a bad look to be so evasive about things that can be easily verified (as evidenced by the Twitter response).
This also personally makes me trust them less to act honestly in the future when the stakes are higher. Many people regard Anthropic as the most responsible frontier AI company. And it seems like something they genuinely care about—they invest a ton in AI safety, security and governance. Honest and straightforward communication seems important to maintain this trust.
When I speak of a strong inoculant, I mean something that is very effective in preventing the harm in question -- such as the measles vaccine. Unless there were a measles case at my son's daycare, or a family member were extremely vulnerable to measles, the protection provided by the strong inoculant is enough that I can carry on with life without thinking about measles.
In contrast, the influenza vaccine is a weak inoculant -- I definitely get vaccinated, because I'll get infected and hospitalized less often with it. But I'm not surprised when I get the flu. If I were at great risk of serious complications from the flu, I'd use vaccination as only one layer of my mitigation strategy (and without placing undue reliance on it). And of course there are strengths in between those two.
I'd call myself moderately cynical. I think history teaches us that the corrupting influence of power is strong and that managing this risk has been a struggle. I don't think I need to take the position that no strong inoculant exists. It is enough to assert that -- based on centuries of human experience across cultures -- our starting point should be to treat inoculants as weak until proven otherwise by sufficient experience. And when one of the star pupils goes so badly off the rails, along with several others in his orbit, that adds to the quantum of evidence I think is necessary to overcome the general rule.
I'd add that one of the traditional ways to mitigate this risk is to observe the candidate over a long period of time in conjunction with lesser levels of power. Although it doesn't always work well in practice, you do get some ability to measure the specific candidate's susceptibility in lower-stakes situations. It may not be popular to say, but we simply haven't had the same opportunity to observe people in their 20s and 30s in intermediate-power situations that we often have had for the 50+ crowd. Certainly people can and do fake being relatively unaffected by money and power for many years, but that's harder to pull off than faking it for a shorter period of time.
Maybe. But on first principles, one might have also thought that belief in an all-powerful, all-knowing deity who will hammer you if you fall out of line would be a fairly strong inoculant. But experience teaches us that this is not so!
Also, if I had to design a practical philosophy that was maximally resistant to corruption, I'd probably ground it on virtue ethics or deontology rather than give so much weight to utilitarian considerations. The risk of the newly-powerful person deceiving themselves may be greater for a utilitarian.
--
As you imply, the follow-up question is where we go from here. I think there are three possible approaches to dealing with a weak or moderate-strength inoculant:
My point is that doing these steps well requires a reasonably accurate view of inoculant strength. And I got the sense that the community is more confident in EA-as-inoculant than the combination of general human experience and the limited available evidence on EA-as-inoculant warrants.