In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement:
> Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term". Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda Askell puts it, "I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything". (Her ex-husband, William MacAskill, is an originator of the movement.)
This led multiple people on Twitter to call out how bizarre this is.
In my eyes, there is a large and obvious connection between Anthropic and the EA community. In addition to the ties mentioned above:
- Dario, Anthropic’s CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.
- Amanda Askell was the 67th signatory of the GWWC pledge.
- Many early and senior employees identify as effective altruists and/or previously worked for EA organisations.
- Anthropic has a "Long-Term Benefit Trust" which, in theory, can exercise significant control over the company. The current members are:
  - Zach Robinson - CEO of the Centre for Effective Altruism.
  - Neil Buddy Shah - CEO of the Clinton Health Access Initiative, former Managing Director at GiveWell, and a speaker at multiple EA Global conferences.
  - Kanika Bahl - CEO of Evidence Action, a long-term grantee of GiveWell.
- Three of EA's largest funders historically (Dustin Moskovitz, Sam Bankman-Fried and Jaan Tallinn) were early investors in Anthropic.
- Anthropic has hired a "model welfare lead" and seems to be the company most concerned about AI sentience, an issue that's discussed little outside of EA circles.
- On the Future of Life podcast, Daniela said, "I think since we [Dario and her] were very, very small, we've always had this special bond around really wanting to make the world better or wanting to help people" and "he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008."
- The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here).
- Their first company value states, "We strive to make decisions that maximize positive outcomes for humanity in the long run."
It's perfectly fine for Daniela and Dario to choose not to personally identify with EA (despite their many associations with it), and I'm not suggesting that Anthropic needs to brand itself as an EA organisation. But I think it's dishonest to suggest there aren't strong ties between Anthropic and the EA community. When asked, they could simply say something like, "yes, many people at Anthropic are motivated by EA principles."
It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of the negative associations the EA brand carries in some circles. It's not clear to me that this is even in their immediate self-interest. I think it's a bad look to be so evasive about things that can be easily verified (as evidenced by the Twitter response).
This also personally makes me trust them less to act honestly in the future when the stakes are higher. Many people regard Anthropic as the most responsible frontier AI company. And it seems like something they genuinely care about—they invest a ton in AI safety, security and governance. Honest and straightforward communication seems important to maintain this trust.
(I know I'm late again replying to this thread.)
Hm, good point. This gives me pause, but I'm not sure which direction to update in. Like, maybe I should update towards "corporate speak is just what these large orgs do, and it's more like a fashion thing than a signal of their (lack of) integrity on things that matter most." Or maybe I should update in the direction you suggest, namely "if an org grows too much, it's unlikely to stay aligned with its founding character principles."
I would have certainly thought so. If anything can be an inoculant against those temptations, surely a strong adherence to a cause greater than oneself, packaged in lots of warnings against biases and other ways humans can go wrong (as is the common message in EA and rationalist circles), seems like the best hope for it? If you don't think it can be a strong inoculant, that makes you pretty cynical, no? (I think cynicism is often right, so this isn't automatically a rejection of your position. I just want to flag that yours is a claim with quite strong implications on its own.)
If you were just talking about SBF, I'd say your point is weak because he probably wasn't low on dark triad traits to start out with. But your point that other EAs around him were also involved (the direct co-conspirators at Alameda and FTX) is a strong one.
Still, in my mind this would probably have gone very differently with the same group of people minus SBF, and with a leader who had a stronger commitment and psychological disposition towards honesty. (I should flag that parts of Caroline Ellison's blog also gave me vibes of "seems to like having power too much" -- but at least it's more common for young people to later change/grow.) That's why I don't consider it a huge update towards "power corrupts". To me, it's a reinforcement of "it matters to have good leadership."
My worldview(?) is that "power corrupts" doesn't apply equally to every leader, and that we'd be admitting defeat straight away if we stopped trying to do ambitious things. There doesn't seem to be a great way to do targeted ambitious things without some individual acquiring a lot of power in the process.(?) We urgently need to do a better job of preventing the pattern where those who end up with a lot of power are almost always those with somewhat shady character. The fact that we're so bad at this suggests that such people are advantaged at some aspects of ambitious leadership, which makes the whole thing a lot harder. But that doesn't mean it's impossible.
I concede that there's a sense in which this worldview of mine is not grounded in empiricism -- I haven't even looked into the matter from that perspective. Instead, it's more like a commitment to a wager: "If this doesn't work, what else are we supposed to do?"
I'm not interested in concluding that the best we can do is criticise the powers that be from the sidelines.
Of course, if leaders exhibit signs of low integrity, as in this example of Anthropic's communications, it's important not to let that slide. The thing I want to push back against is an attitude of "person x or org y has acquired so much power, surely that means they're now corrupted," which leads to no longer giving them the benefit of the doubt or trying to see the complexities of their situation when they do something that looks surprising/disappointing/suboptimal. With great power comes great responsibility, including a responsibility not to mess up your potential for doing even more good later on. Naturally, this comes with lots of tradeoffs, and it's not always easy to infer from publicly visible actions and statements whether an org is still culturally on track. (That said, I concede that you can often tell quite a lot about someone's character/an org's culture based on how/whether they communicate nuances, which is sadly why I've had some repeated negative updates about Anthropic lately.)