In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement:
Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term". Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda Askell puts it, "I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything". (Her ex-husband, William MacAskill, is an originator of the movement.)
This led multiple people on Twitter to call out how bizarre this is:
In my eyes, there is a large and obvious connection between Anthropic and the EA community. In addition to the ties mentioned above:
- Dario, Anthropic’s CEO, was the 43rd signatory of the Giving What We Can pledge and wrote a guest post for the GiveWell blog. He also lived in a group house with Holden Karnofsky and Paul Christiano at a time when Paul and Dario were technical advisors to Open Philanthropy.
- Amanda Askell was the 67th signatory of the GWWC pledge.
- Many early and senior employees identify as effective altruists and/or previously worked for EA organisations.
- Anthropic has a "Long-Term Benefit Trust" which, in theory, can exercise significant control over the company. The current members are:
- Zach Robinson - CEO of the Centre for Effective Altruism.
- Neil Buddy Shah - CEO of the Clinton Health Access Initiative, former Managing Director at GiveWell and speaker at multiple EA Global conferences.
- Kanika Bahl - CEO of Evidence Action, a long-term grantee of GiveWell.
- Three of EA’s largest funders historically (Dustin Moskovitz, Sam Bankman-Fried and Jaan Tallinn) were early investors in Anthropic.
- Anthropic has hired a "model welfare lead" and seems to be the company most concerned about AI sentience, an issue that's discussed little outside of EA circles.
- On the Future of Life podcast, Daniela said, "I think since we [Dario and her] were very, very small, we've always had this special bond around really wanting to make the world better or wanting to help people" and "he [Dario] was actually a very early GiveWell fan I think in 2007 or 2008."
- The Anthropic co-founders have apparently made a pledge to donate 80% of their Anthropic equity (mentioned in passing during a conversation between them here and discussed more here)
- Their first company value states, "We strive to make decisions that maximize positive outcomes for humanity in the long run."
It's perfectly fine if Daniela and Dario choose not to personally identify with EA (despite having lots of associations) and I'm not suggesting that Anthropic needs to brand itself as an EA organisation. But I think it’s dishonest to suggest there aren’t strong ties between Anthropic and the EA community. When asked, they could simply say something like, "yes, many people at Anthropic are motivated by EA principles."
It appears that Anthropic has made a communications decision to distance itself from the EA community, likely because of negative associations the EA brand has in some circles. It's not clear to me that this is even in their immediate self-interest. I think it’s a bad look to be so evasive about things that can be easily verified (as evidenced by the Twitter response).
This also personally makes me trust them less to act honestly in the future when the stakes are higher. Many people regard Anthropic as the most responsible frontier AI company. And it seems like something they genuinely care about—they invest a ton in AI safety, security and governance. Honest and straightforward communication seems important to maintain this trust.
I'm sympathetic to wanting to keep your identity small, particularly if you think the person asking about your identity is a journalist writing a hit piece, but if everyone takes funding, staff, etc. from the EA commons and doesn't share that they got value from that commons, the commons will predictably be under-supported in the future.
I hope Anthropic leadership can find a way to share what they do and don't get out of EA (e.g. in comments here).
I understand why people shy away from or hide their identities when speaking with journalists, but I think this is a mistake, largely for reasons covered in this post. I also think a large part of the deterioration of EA's name brand is not just FTX but the risk-averse reaction to FTX by individuals (again, for understandable reasons).
When PG refers to keeping your identity small, he means don't defend it or its characteristics for the sake of it. There's nothing wrong with being a C/C++ programmer while recognizing that it's not the best choice for rapid development or memory safety. In this case, you can own being an EA/your affiliation with EA without needing to justify everything about the community.
We had a bit of a tragedy of the commons problem because a lot of people are risk-averse and don't want to be associated with EA in case something bad happens to them, but this causes the brand to lose a lot of good people you'd be happy to be associated with.
I'm a proud EA.
Note that much of the strongest opposition to Anthropic is also associated with EA, so it's not obvious that the EA community has been an uncomplicated good for the company, though I think it likely has been fairly helpful on net (especially if one measures EA's contribution to Anthropic's mission of making transformative AI go well for the world rather than its contribution to the company's bottom line). I do think it would be better if Anthropic comms were less evasive about the degree of their entanglement with EA.
(I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn't personally have said them, but I think "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)
My guess is that the people quoted in this article would be sad if e.g. 80k started telling people not to work at Anthropic. But maybe I'm wrong - would be good to know if so!
(And also yes, "people having unreasonably high expectations for epistemics in published work" is definitely a cost of dealing with EAs!)
Oh, definitely agreed - I think effects like "EA counterfactually causes a person to work at Anthropic" are straightforwardly good for Anthropic. Almost all of the sources of bad-for-Anthropic effects from EA I expect come from people who have never worked there.
(Though again, I think even the all-things-considered effect of EA has been substantially positive for the company, and I agree that it would probably be virtue-ethically better for Anthropic to express more of the value they've gotten from that commons.)
Edit: the comment above has been edited; the below was a reply to a previous version and makes less sense now. Leaving it for posterity.
You know much more than I do, but I'm surprised by this take. My sense is that Anthropic is giving a lot back:
My understanding is that all early investors in Anthropic made a ton of money; it's plausible that Moskovitz made as much money by investing in Anthropic as by founding Asana. (Of course this is all paper money for now, but I think they could sell it for billions.)
As mentioned in this post, co-founders also pledged to donate 80% of their equity, which seems to imply they'll give much more funding than they got. (Of course in EV, it could still go to zero)
I don't see why hiring people is more "taking" than "giving", especially if the hires get to work on things that they believe are better for the world than any other role they could work on
My sense is that (even ignoring funding mentioned above) they are giving a ton back in terms of research on alignment, interpretability, model welfare, and general AI Safety work
To be clear, I don't know if Anthropic is net-positive for the world, but it seems to me that its trades with EA institutions have been largely mutually beneficial. You could make an argument that Anthropic could be "giving back" even more to EA, but I'm skeptical that it would be the most cost-effective use of their resources (including time and brand value)
Great points, I don't want to imply that they contribute nothing back, I will think about how to reword my comment.
I do think 1) community goods are undersupplied relative to some optimum, 2) this is in part because people aren't aware how useful those goods are to orgs like Anthropic, and 3) that in turn is partially downstream of messaging like what OP is critiquing.
I'm a bit confused about people suggesting this is defensible.
"I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term".
There are three statements here:
1. "I'm not the expert on effective altruism" - It's hard to see this as anything other than a lie. She's married to Holden Karnofsky and knows ALL about effective altruism. She would probably destroy me on a "Do you understand EA?" quiz. I wonder how @Holden Karnofsky feels about this?
2. "I don't identify with that terminology." - Yes, true, at least now! Maybe she's still got some residual warmth for us deep in her heart?
3. "My impression is that it's a bit of an outdated term." - Her husband set up two of the biggest EA (or heavily EA-based) institutions that are still going strong today. On what planet is it an "outdated" term? Perhaps on the planet where your main goal is growing and defending your company?
In addition to the clear associations from the OP, their 2017 wedding page, seemingly written by Daniela, says: "We are both excited about effective altruism: using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis. For gifts we’re asking for donations to charities recommended by GiveWell, an organization Holden co-founded."
If you want to distance yourself from EA, do it and be honest. If you'd rather not comment, don't comment. But don't obfuscate and lie, pretending you don't know about EA while downplaying the movement.
I'm all for giving people the benefit of the doubt, but there doesn't seem to be reasonable doubt here.
I don't love raising this as it's largely speculation on my part, but there might still be an undercurrent of copium within the EA community among people who backed, or still back, Anthropic as the "best" of the AI acceleration bunch (which they quite possibly are) and want to hold that close after failing with OpenAI...
I think you shouldn't assume that people are "experts" on something just because they're married to someone who is an expert, even when (like Daniela) they're smart and successful.
Everything you say is correct I think, but I think in more normal circles, pointing out the inconsistency between someone's wedding page and their corporate PR bullshit would seem a bit weird and obsessive and mean. I don't find it so, but I think ordinary people would get a bad vibe from it.
That's interesting; I think I might move in different circles. Most people I know would not really understand the concept of a PR world where you present different things from your personal life.
Perhaps you move in more corporate or higher-flying circles where this kind of disconnect is normal, and where it's fine to have a public/private communication disconnect that is considered rude to challenge? Interesting!
No, I don't move in corporate circles.
I agree that these statements are not defensible. I'm sad to see it. There's maybe some hope that the person making these statements was just caught off guard and it's not a common pattern at Anthropic to obfuscate things with that sort of misdirection. (Edit: Or maybe the journalist was fishing for quotes and made it seem like they were being more evasive than they actually were.)
I don't get why they can't just admit that Anthropic's history is pretty intertwined with EA history. They could still distance themselves from "EA as the general public perceives it" or even "EA-as-it-is-now."
For instance, they could flag that EA maybe has a bit of a problem with "purism" -- like, some vocal EAs in this comment section and elsewhere seem to think it is super obvious that Anthropic has been selling out and becoming too much of a typical for-profit corporation. I didn't myself think that this was necessarily the case, because I see a lot of valid tradeoffs that Anthropic leadership is having to navigate, which the armchair-quarterback EAs seem to be failing to take into account. However, the communications highlighted in the OP made me update that Anthropic leadership probably does lack the integrity needed to do complicated power-seeking stuff that has the potential to corrupt. (If someone can handle the temptations of power, they should at the very least be able to handle the comparatively easy task of not willingly distorting the truth as they know it.)
Yes. It's sad to see, but Anthropic is going the same way as OpenAI, despite being founded by a group that split from OpenAI over safety concerns. Power (and money) corrupts. How long until another group splits from Anthropic and the process repeats? Or actually, one can hope that such a group splitting from Anthropic might actually have integrity and instead work on trying to stop the race.
Not sure if I misunderstand something but the wedding page seems from 2017? (It reads "October 21, 2017" at the top.)
Apologies, corrected.
There's a lesson here for everyone in/around EA, which is why I sent the pictured tweet: it is very counterproductive to downplay what or who you know for strategic or especially "optics" reasons. The best optics are honesty, earnestness, and candor. If you have to explain and justify why your statements that are perceived as evasive and dishonest are in fact okay, you probably did a lot worse than you could have on these fronts.
Also, on the object level, for the love of God, no one cares about EA except EAs and some obviously bad faith critics trying to tar you with guilt-by-association. Don't accept their premise and play into their narrative by being evasive like this. *This validates the criticisms and makes you look worse in everyone's eyes than just saying you're EA or you think it's great or whatever.*
But what if I'm really not EA anymore? Honesty requires that you at least acknowledge that you *were.* Bonus points for explaining what changed. If your personal definition of EA changed over that time, that's worth pondering and disclosing as well.
I agree with your broad points, but this seems false to me. I think that lots of people have negative associations with EA, especially given SBF, and in the AI and tech space where e.g. it's widely (and imo falsely) believed that the OpenAI coup was for EA reasons.
I think we feel this more than is the case. I think a lot of people know about it but don't have much of an opinion on it, similar to how I feel about NASCAR or something.
I recently caught up with a friend who worked at OpenAI until very recently and he thought it was good that I was part of EA and what I did since college.
I overstated this, but disagree. Overall very few people have ever heard of EA. In tech, maybe you get up to ~20% recognition, but even there, the amount of headspace people give it is very small and you should act as though this is the case. I agree it's negative directionally, but evasive comments like these are actually a big part of how we got to this point.
I'm specifically claiming Silicon Valley AI, where I think it's a fair bit higher?
"widely (and imo falsely) believed that the openai coup was for EA reasons"
False why?
Because Sam was engaging in a bunch of behaviour highly inappropriate for a CEO, like lying to the board, which is sufficient to justify the board firing him without need for more complex explanations. And this matches private gossip I've heard, and the board's public statements.
Further, Adam d'Angelo is not, to my knowledge, an EA/AI safety person, but also voted to remove Sam and was a necessary vote, which is strong evidence there were more legit reasons.
The "highly inappropriate behavior" is question was nearly entirely about violating safety protocols, and by the time Murati and Sutskever defected to Altman's side the conflict was clearly considered by both sides to be a referendum on EA and AI safety, to the point of the board seeking to nominate rationalist Emmett Shear as Altman's replacement.
Ilya too!
I think the people in the article you quote are being honest about not identifying with the EA social community, and the EA community on X is being weird about this.
I think the confusion might stem from interpreting EA as "self-identifying with a specific social community" (which they claim they don't, at least not anymore) vs EA as "wanting to do good and caring about others" (which they claim they do, and always did)
Going point by point:
This was more than 10 years ago. EA was a very different concept / community at the time, and this is consistent with Daniela Amodei saying that she considers it an "outdated term"
This was also more than 10 years ago, and giving to charity is not unique to EA. Many early pledgers don't consider themselves EA (e.g. signatory #46 claims it got too stupid for him years ago)
Amanda Askell explicitly says "I definitely have met people here who are effective altruists" in the article you quote, so I don't think this contradicts it in any way
https://x.com/AmandaAskell/status/1905995851547148659
That's false: https://en.wikipedia.org/wiki/Artificial_consciousness
Wanting to make the world better, wanting to help people, and giving significantly to charity are not exclusive to the EA community.
I think that's exactly what they are doing in the quotes in the article: "I don't identify with that terminology" and "it's not a theme of the organization or anything"
I don't think they suggest that, depending on your definition of "strong". Just above the screenshotted quote, the article mentions that many early investors were at the time linked to EA.
I don't think X responses are a good metric of honesty, and those seem to be mostly from people in the EA community.
In general, I think it's bad for the EA community that everyone who interacts with it has to worry about being liable for life for anything the EA community might do in the future.
I don't see why it can't let people decide if they want to consider themselves part of it or not.
As an example, imagine if I were Catholic, founded a company to do good, raised funding from some Catholic investors, and some of the people I hired were Catholic. If 10 years later I weren't Catholic anymore, it wouldn't be dishonest for me to say "I don't identify with the term, and this is not a Catholic company, although some of our employees are Catholic". And giving to charity or wanting to do good wouldn't be gotchas that I'm secretly still Catholic and hiding the truth for PR reasons. And this is not even about being a part of a specific social community.
The point I was trying to make is, separate from whether these statements are literally false, they give a misleading impression to the reader. If I didn't know anything about Anthropic and I read the words “I definitely have met people here who are effective altruists, but it's not a theme of the organization or anything”, I might think Anthropic is like Google, where you may occasionally meet people in the cafeteria who happen to be effective altruists but EA really has nothing to do with the organisation. I would not get the impression that many of the employees are EAs who work at Anthropic, or work on AI safety, for EA reasons, or that the three members of the trust they've given veto power over the company have been heavily involved in EA.
I also think being weird and evasive about this isn't a good communication strategy (for reasons @Mjreard discusses above).
As a side point, I'm confused when you say:
That was said by the author of the article who was trying to make the point that there is a link between Anthropic and EA. So I don't see this as evidence of Anthropic being forthcoming.
I think in the context of the article, their quotes (44 words in total) make more sense:
In that context, the quotes clarify that Anthropic is not an "EA company", and give a more accurate understanding of the relationship to the reader.
A more in-depth analysis of the historical affiliations, separations, agreements, and disagreements of Anthropic's funders, founders, and employees with various parts of EA over the past 15 years would take far more than two paragraphs.
You wouldn't think that in the context of the article, though.
I don't know what percentage of Anthropic employees consider themselves part of the EA community. Also, I don't agree that it's clear that Evidence Action's CEO is part of the effective altruism community because Evidence Action received money from GiveWell.
https://www.linkedin.com/in/kanika-bahl-091a936/details/experience/ She had been working in global health since before effective altruism was a thing, and many/most people funded by Open Philanthropy don't consider themselves part of the community, in the same way that charities funded by Catholic donors are not necessarily Catholic. It does seem that Open Philanthropy was their main source of funding for many years though, which makes the link stronger than I originally thought.
You seem to have ignored a central part of what was said by Daniela Amodei; "I'm not the expert on effective altruism," which seems hard to defend.
This works both ways. EA should be distancing itself from Anthropic, given recent pronouncements by Dario about racing China and initiating recursive self-improvement. Not to mention their pushing of the capabilities frontier.
As always, and as I've said in other cases, I don't think it makes sense to ask a disparate movement to make pronouncements like this.
No, but the main orgs in EA can still act in this regard. E.g. Anthropic shouldn't be welcome at EAG events. They shouldn't have their jobs listed on 80k. They shouldn't be collaborated with on research projects etc that allow them to "safety wash" their brand. In fact, they should be actively opposed and protested (as PauseAI have done).
Giving this an "insightful" because I appreciate the documentation of what is indeed a surprisingly close relationship with EA. But also a disagree because it seems reasonable to be skittish around the subject ("AI Safety" broadly defined is the relevant focus, adding more would just set-off an unnecessary news media firestorm).
Plus, I'm not convinced that Anthropic has actually engaged in outright deception or obfuscation. This seems like a single slightly odd sentence by Daniela, nothing else.