EA has been an unusually high-trust community, which has pros and cons. That trust underpins a lot of what makes EA great: people are unusually likely to help each other if they think it will be good overall, direct reports can discuss with their managers whether they should leave for another job, and grantees can be unusually frank with grantmakers.
As discussion grows on the forum about Anthropic employees potentially donating large amounts of equity to EA charities, I want to discuss a few risks I worry about:
- A collective action problem of everyone pitching Anthropic staff directly.
- Engaging in dishonest or disingenuous behaviour to better fundraise.
Avoiding both these issues matters for two reasons. First, Anthropic staff deserve to be treated with honesty and respect. Second, it's good for impact, not least because trust and transparency help sustain donations over many years.
Collective Action Problem 1. Pitching directly
Individual orgs and fundraisers each have rational incentives to pitch Anthropic employees directly. From any single org's perspective, reaching out once seems fine. But when there are hundreds of orgs potentially worthy of donations, and pitching a donor directly gets you an edge, we have a collective action problem. If each org tries to reach individuals donating eight figures, then every org needs to employ several fundraisers to contact dozens of individuals. This is a horrendously inefficient way for the movement as a whole to fundraise, and leaves everyone worse off than cooperating would.
Relatedly, most Anthropic employees aren't active grantmakers, and don't want to be. From my conversations with them, most prefer finding a fund manager they trust, focused on a thesis they agree with, rather than evaluating individual charities. There's already significant complexity in choosing between cause areas and then choosing which funds to trust within each area. Pitching them directly on specific charities adds another decision layer they mostly don't want (and some have found actively stressful).[1] On top of that, evaluating individual charities’ pitches is both a tough skill and a huge time sink for people working crazy hours at their day job.
Collective Action Problem 2. Dishonesty
There's another problem I worry about: dishonesty. It wouldn’t be the first time someone wasn’t entirely honest with a donor in order to get money. I’ve worked with major donors for a while, and it’s fairly common for people to:
- Pitch you on a grant that internally has one goal (e.g., reduce chicken suffering) but is framed in a way that sounds good based on your values (e.g., climate change).
- Ask to meet for one reason (e.g., "can I get your advice on an upcoming project?") when they really want something else (e.g., "oh, by the way, are you interested in funding our work?").
- Treat you differently in a social context. James Özden, writing about working at a foundation, said: "When some people I knew before my current role want to socialise, there can be a lingering suspicion that they have ulterior motives, even if this isn't the case at all. The changes in social dynamics also appear in more subtle ways. People seem to listen to you more attentively, agree with your ideas more and even laugh at your jokes."
I think this is broadly bad for a few reasons:
- Dishonesty erodes donor trust, which hurts everyone. Donors are more wary in future interactions.
- A high-trust environment is collaborative: donors can defer more to subject-matter experts. A low-trust environment is adversarial: donors must spend more energy vetting claims and figuring out whom to trust.
- This is going to be a multi-stage game, where people donate over multiple years. Dishonesty one year means a donor is more likely to give less or nothing at all the next year.
This is all very abstract, so I'd invite you to imagine being a prospective donor, wondering which of the people you're getting along with at this house party will pitch you next week. You're working 60-80 hour weeks, and you feel guilty about all the pitches you have to turn down. Contrast this with the world in which your community, fund advisors, and friends are honest and trying their best to help you donate in a way that is transparent and respectful.
My recommendations for high-integrity fundraising:
- For those of you who don't work at a fund or regrantor, avoid adding to the number of people pitching Anthropic donors. Err towards applying for funding from an appropriate fund.[2]
- Approach interactions with honesty and integrity.
- Be direct about your asks. If you want to pitch someone, despite Collective Action Problem #1 above, make clear that this is the ask, and make it very easy for them to say no.
- Be honest when pitching. Clearly and honestly communicate the goals and limitations rather than what you think the person across from you wants to hear. This is one of the norms that I think makes EA great and exceptional.
- Let the relationship be real. Some of the most effective donor cultivation in this space comes from people who genuinely enjoy talking to donors about the movement, with no specific ask, trusting that good relationships produce good outcomes over time. The extractive version, where every interaction is engineered toward a funding outcome, eventually becomes obvious and corrosive.
This is the highest integrity community I know of. Let’s keep it up.
Thanks to Zoë Sigle, James Özden, Robin Larson, and Olivia Larsen, for feedback on an early draft of this post. While this work overlaps with my work at Senterra Funders, all views are my own. Claude was helpful, but I ended up typing everything myself.
[1] Noting my conflict of interest here: I work at an organisation that recommends funds to Anthropic donors as our primary call to action. I still think this is the right call.
[2] For context, there are many regranting organisations across all the main EA cause areas, such as GiveWell and Coefficient Giving, pitching prospective Anthropic donors. The org where I work, Senterra Funders, is largely pitching the animal advocacy movement via a few large regranting organisations.

FWIW, I think this approach limits downside risk, as you outline, but really caps upside potential for EA. The existing funds are highly correlated with each other in values and perspectives on certain issues (e.g., I think it's quite bad that there isn't an animal welfare-focused fund that is skeptical of the value of marginal dollars spent on farmed animal welfare, which seems a reasonable position, and all of the AI funds seem to coordinate quite heavily). There just aren't that many funds, they aren't that diverse in viewpoint, and donations primarily going via funds concentrates power in EA in the hands of a few dozen people.
I think that I'd feel less certain about this approach if there were tons and tons of good funds with independent theses, but there aren't. Maybe that means it is time for lots of people to start new funds. But by default, I expect everything going through funds to mean way less grift, yes, but also way less experimentation, less risk taking, and less divergence from consensus views.
FTX Future Fund, for all its issues / impacts on community dynamics, spurred a massive change in what people thought was on the table. The regrantor program in particular seemed like a genuinely massive improvement in democratizing EA, which, in my view, makes EA better and lets people do high risk/reward bets. There are downsides to that, including grift. But the push for funds, especially in animal welfare, seems like a fairly large mistake to me, and I think the end result will be a bunch of money wasted on marginal farmed animal interventions that Coefficient could have otherwise funded, or were obviously not worth funding.
I agree with the problems you outline with not going the fund route, but I don't think the solutions being applied, especially on the fund side, are the right ones for doing the most good.
Very pro groups approaching fundraising with honesty and integrity though!
[Commenting in a personal capacity]
I really appreciate this post, and I agree that the community should be mindful of both risks.
I think the instinct to share giving opportunities with Anthropic employees or other value-aligned prospective donors usually comes from a good place. And I would be excited to see more people apply to roles supporting donors who reach out for giving advice at the worldview level.
But to add to Elliot’s point about the value of coordination, here are a few other considerations:
Some people think FTX not collapsing would've been on net worse for EA than FTX collapsing, because not collapsing would've entrenched such a grifter problem. You can find people who saw early signs of people getting into EA just because of the free flow of money.
I'm pretty prepared to be worried about this: if we get another couple of foundations out of Anthropic alums, it could be FTX all over again (without the gambling, which makes it better, but with the AI-race accelerant, which makes it worse).