People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example possibly IPOing sometime next year). A lot of people working in AI are familiar with EA, and are intending to direct donations our way (if they haven't started already). People are starting to discuss what this might mean for their own personal donations and for the ecosystem, and this is encouraging to see.
It also has me thinking about 2022. Immediately before the FTX collapse, we were just starting to reckon, as a community, with the pretty significant vibe shift in EA that came from having a lot more money to throw around.
CitizenTen, in "The Vultures Are Circling" (April 2022), puts it this way:
The message is out. There’s easy money to be had. And the vultures are coming. On many internet circles, there’s been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I’m not even an EA, but I can pretend, as getting a 10k grant is a good instrumental goal towards [insert-poor-life-goals-here]” Or, “Did you hear that a 16 year old got x amount of money? That’s ridiculous! I thought EA’s were supposed to be effective!” Or, “All you have to do is mouth the words community building and you get thrown bags of money.”
Basically, the sharp increase in rewards has led the number of people who are optimizing for the wrong thing to go up. Hello Goodhart. Instead of the intrinsically motivated EA, we’re beginning to get the resume padders, the career optimizers, and the type of person that cheats on the entry test for preschool in the hopes of getting their child into a better college. I’ve already heard of discord servers springing up centered around gaming the admission process for grants. And it’s not without reason. The Atlas Fellowship is offering a 50k, no strings attached scholarship. If you want people to throw out any hesitation around cheating the system, having a carrot that’s larger than most adult’s yearly income will do that.
Other highly upvoted posts from that era:
- I feel anxious that there is all this money around. Let's talk about it - Nathan Young, March 2022
- Free-spending EA might be a big problem for optics and epistemics - George Rosenfield, April 2022
- EA and the current funding situation - Will MacAskill, May 2022
- The biggest risk of free-spending EA is not optics or motivated cognition, but grift - Ben Kuhn, May 2022
- Bad Omens in Current Community Building - Theo Hawking, May 2022
- The EA movement’s values are drifting. You’re allowed to stay put. - Marisa, May 2022
I wish FTX hadn't committed fraud and collapsed for many reasons, but one feels especially salient currently: we never finished processing how abundant funding impacts a high-trust altruistic community. The conversation had barely started.
I would say that I'm worried about these dynamics emerging again, but there's something a little more complicated here. Ozy actually calls out a similar strand of dysfunction in (parts of) EA in early 2024:
Effective altruist culture ought to be about spending resources in the most efficient way possible to do good. Sure, sometimes the most efficient way to spend resources to do good doesn’t look frugal. I’ve long advocated for effective altruist charities paying their workers well more than average for nonprofits. And a wise investor might make 99 bets that don’t pay off to get one that pays big. But effective altruist culture should have a laser focus on getting the most we can out of every single dollar, because dollars are denominated in lives.
...
It’s cool and high-status to travel the world. It’s cool and high-status to go on adventures. It’s cool and high-status to spend time with famous and influential people. And, God help us, it’s cool and high-status to save the world.

I think something like this is the root of a lot of discomfort with showy effective altruist spending. It’s not that yachting is expensive. It’s that if your idea of what effective altruists should be doing is yachting, a reasonable person might worry that you’ve lost the plot.
So these dynamics are not "emerging again". They haven't left. And I'm worried that they might get turbocharged when money comes knocking again.

Thanks for restarting this conversation!
Relatedly, it's also time to start focusing on the increased conflicts of interest and epistemic challenges that an influx of AI industry insider cash could bring. As Nathan implies in his comment, proximity to massive amounts of money can have significant adverse effects in addition to positive ones. And I worry that if and when a relevant IPO or cashout is announced, the aroma of expected funds will not improve our ability to navigate these challenges well.
Most people are very hesitant to bite the hand that feeds them. Orgs may be hesitant to do things that could adversely affect their ability to access future donations from current or expected donors. We might expect that AI-insider donors will disproportionately choose to fund charities that align fairly well with -- or at least are consonant with -- their personal interests and viewpoints.
(I am aware that significant conflicts of interest with the AI industry have existed in the past and continue to exist. But there's not much I can do about that, and the conflict for the hypothesized new funding sources seems potentially even more acute. I imagine that some of these donors will retain significant financial interests in frontier AI labs even if they cash out part of their equity, as opposed to old-school donors who have a lesser portion of their wealth in AI. Also, Dustin and Cari donated their Anthropic stake, which addresses their personal conflict of interest on that front (although it may create a conflict for wherever that donation went)).
For purposes of the rest of this comment, a significantly AI-involved source is someone who has a continuing role at a frontier AI lab, or who has a significant portion of their wealth still tied up in AI-related equity. The term does not include those who have exited their AI-related positions.
What Sorts of Adverse Effects Could Happen?
There are various ways in which the new donors' personal financial interests could bias the community's actions and beliefs. I use the word bias here because those personal interests should not have an effect on what the community believes and says.
Take stop/pause advocacy for an obvious example. Without expressing a view about the merits of such advocacy, significantly AI-involved sources have an obvious conflict of interest that creates a bias against that sort of work. To be fair, it is their choice on how to spend their money.
But -- one could imagine the community changing its behavior and/or beliefs in ways that are problematic. Maybe people don't write posts and comments in support of stop/pause advocacy because they don't want to irritate the new funders. Maybe grantmakers don't recommend stop/pause advocacy grants for their other clients because their AI-involved clients could view their money as indirectly supporting such advocacy via funging.
There's also a risk of losing public credibility -- it would not be hard to cast orgs that took AI-involved source funds as something like a lobbying arm of Anthropic equity holders.
What Types of Things Could Be Done to Mitigate This?
This is tougher, but some low-hanging fruit might include:
Anyway, it is this sort of thing that concerns me more than (e.g.) some university student scamming a free trip to some location by simulating interest in EA.
A basic issue with a lot of deliberate philanthropy is the tension between:
The kneejerk solution I'd propose is "proof of novel work". If you want funding to do X, you should show that you've done something to address X that others haven't done. That could be a detailed, insightful write-up (which indicates serious thinking / fact-finding); that could be some work you did on the side, which isn't necessarily conceptually novel but is useful work on X that others were not doing; etc.
I assume that this is an obvious / not new idea, so I'm curious where it doesn't work. Also curious what else has been tried. (E.g. many organizations do "don't apply, we only give to {our friends, people we find through our own searches, people who are already getting funding, ...}".)
Sure, seems plausible.
I guess I kind of like @William_MacAskill's piece, or as much of it as I remember.
My recollection is roughly this:
This seems good, though I guess it feels like a missing piece is:
Also, looking back @trammell's takes have aged very well:
Had Phil been listened to, then perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
So my non-EA friends point out that EAs have incentives to suck up to any group that is about to become rich. This seems like something I haven't seen a solid path through:
Having known, and had conflict with, a number of wealthy people, I find it hard to retain one's sense of integrity in the face of life-changing funds. I've talked to SBF, and even after the crash I felt a gravity around him: I didn't want to insult him, lest he one day return to the heights of his influence. Sometimes that made me too cautious; sometimes, avoiding caution, I was reckless.
I guess in some sense the problem is that finding ways through uncomfortable situations requires sitting in discomfort, and I don't find EA to have a lot of internal battery for that kind of thing. Have we really resolved most of the various crises in a way that created harmony between those who disagreed? I'm not sure we have. So it's hard to be optimistic here.
My understanding of what happened is different:
And some of the FTXFF monies went to entities with no clear connection to the EA community, especially bioscience firms. Several of the bigger recipients on the list Tobias linked fall into that category.
I think the other missing piece is "what will this money do to the community fabric, what are the trade-offs we can take to make the community fabric more resilient and robust, and are those trade-offs worth it?"
When it comes to funding effective charities, I agree that having more money is straightforwardly good. It's the second-order effects on the community (the current people in it and what might make them leave, the kinds of people who are more likely to become new entrants) that I'm more concerned with.
I anticipate that the rationalists would have to face a similar problem but to a lesser degree, since the idea that well-kept gardens die by pacifism is more in the water there, and they are more ambivalent about scaling the community. But EA should scale, because its ideas are good, and this leaves it in a much more tricky situation.
I'll just note that when the original conversation started, I addressed this in a few parts.
To summarize, I think that yes, EA should be enormous, but it should not be a global community, and it needs to grapple with how the current community works, and figure out how to avoid ideological conformity.
Unless you explicitly warn your donors that you’re going to sit on their money and do nothing with it, you might anger them by employing this strategy, such that they won’t donate to you again. (I don’t know if SBF would have noticed or cared because he couldn’t even sit through a meeting or an interview without playing a video game, but what applies to SBF doesn’t apply to most large donors.)
Also, if there is a most important time in history, and if we can ever know we’re in the most important time in history while we’re in it, it might be 100 years or 1,000 years from now, and obviously holding onto money that long is a silly strategy. (Especially if you think we’re going to start having 10% economic growth within 50 years due to AI, but even if you don’t.)
As a donor, I want to donate to charities that can "beat the market" in terms of their impact, i.e., the impact they create by spending the money now is big enough that it is bigger than the effects of investing the money and spending it in 5 years. I would be furious if I found out the charities I donate to were employing the invest-and-wait strategy. I can invest my own money or give it to someone who will spend it.
I don't think trying to invest for a long time is obviously a silly strategy. But I agree that people or groups of people should decide for themselves whether they want to try to do that with their money, and a charity fundraising this year would be betraying their donors' trust if their plan was actually to invest it for a long time.
My intuition about patient philanthropy is this: if I have $1 million that I can spend philanthropically now or I can invest it for 100 years at a 7% CAGR and grow it to $868 million in 2126, I think spending the $1 million in 2026 will have a bigger, better impact than the $868 million in 2126.
Gross world product per capita (PPP) is around $24,000 now. It’s forecasted to grow at 2% a year. At 2% a year for 100 years, it will be $174,000 in 2126. So, the world on average will be much wealthier than the wealthiest nations today. The U.S. GDP per capita (PPP) is $90,000, Norway’s is $107,000 — I’m ignoring tax havens with distorted stats.
Why should the poor people of today give to the rich people of the future? How is that cost-effective?
The difference between the GiveWell estimate of the cost to save a life and the estimated statistical cost of saving a life in the U.S. is $3,500 vs. $9 million, so a ~2,500x difference. $1 million now could save 285 lives. $868 million in 2126 could save 96 lives — if we think poorer countries will have catch-up growth that brings them up to $90,000+ in GDP per capita (PPP).
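For what it's worth, here is a minimal sketch of that arithmetic in Python. Everything in it is just the assumptions stated above (the 7% CAGR, 2% growth in GWP per capita, and the $3,500 vs. $9 million cost-per-life figures), so it is only as reliable as those inputs:

```python
# Back-of-the-envelope check of the figures above. All inputs are the
# assumptions stated in this comment, not established values.

give_now = 1_000_000        # dollars available in 2026
cagr = 0.07                 # assumed investment return
years = 100                 # 2026 -> 2126

future_pot = give_now * (1 + cagr) ** years
print(f"Invested pot in 2126:    ${future_pot:,.0f}")            # ~ $868 million

gwp_per_capita_2126 = 24_000 * 1.02 ** years
print(f"GWP per capita in 2126:  ${gwp_per_capita_2126:,.0f}")   # ~ $174,000

cost_per_life_now = 3_500        # GiveWell-style estimate today
cost_per_life_2126 = 9_000_000   # rich-country statistical cost, used as a proxy

lives_now = give_now / cost_per_life_now
lives_2126 = future_pot / cost_per_life_2126
print(f"Lives saved giving now:   {lives_now:,.1f}")             # ~ 285.7
print(f"Lives saved giving 2126:  {lives_2126:,.1f}")            # ~ 96.4
print(f"Now vs. later:            {lives_now / lives_2126:.1f}x")  # ~ 3x
```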
The poorest countries may not have catch-up growth, and may not even grow commensurately with the world on average, but in that case, it makes it even more important to spend the $1 million on the poorest countries now to try to make sure that growth happens. Stimulating economic growth in sub-Saharan African countries where growth has been stagnant may be one of the most important global moral priorities. Thinking about 100 years in the future only makes it feel more urgent, if anything.
Plus, the risk that a foundation trying to invest money for 100 years doesn’t make it to 2126 seems high.
If you factor in the possibility of transformative technologies like much more advanced AI and robotics, biotech, and so on, and/or the possibility of much faster per capita economic growth over the next 100 years, the case for spending now rather than waiting a century gets even stronger.
I think the case for waiting is stronger, not weaker, if you think the chance that poor countries won't have exhibited catch-up growth by 2126 is non-negligible. If they haven't exhibited catch-up growth by 2126, I expect $868 million then is much more likely to trigger it than $1 million today.
But the opportunity cost of not spending the $1 million today — the lost intervening 100 years of economic growth — is surely much more than $867 million? That is, surely it's at least 1,000x better to stimulate faster economic growth in the poorest countries today than it is to do it 100 years from now.
That depends on how long it would have stayed poor without the intervention!
Didn't you stipulate it would be at least 100 years in the scenario we're imagining? Surely it's worth spending at least 1,000x more resources to end global poverty 100 years sooner? (Otherwise, why not wait 1,000 years or 10,000 years to donate your first dollar to global poverty, if all that matters is the CAGR of your investments?)
The returns certainly aren't all that matter.
I don't follow your questions. We're comparing spending now to induce some chance of growth starting now with spending later to induce some chance of growth starting later, right? To make the scenario precise, say that by default the country stagnates at 1 util per year and catches up on its own after 200 years (after which it gets 2 utils per year); spending the money now gives a 1% chance of triggering catch-up growth today, while investing and spending the larger sum in 100 years gives a 4% chance of triggering it then.
In this case, the expected utility produced by spending now is 1% × (2 − 1) × 200 = 2 utils.
The expected utility produced by spending in 100y is 4% × (2 − 1) × 100 = 4 utils.
The gap can be arbitrarily large if we imagine that the default is stagnation for a period longer than 200y (or arbitrarily negative if we imagine it was close to 100y), and this is true regardless of how much money the beneficiaries wind up with (due to the growth): it is the growth itself that produces the gap between the 2 utils and the 1 util.
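Here is a small sketch of that toy comparison, with the scenario parameters (the 1% and 4% chances, the 1-vs-2 utils per year, and the 200-year default catch-up date) taken from the formulas above; they are illustrative numbers, not estimates:

```python
# Toy expected-utility comparison from the scenario above. The parameters
# are illustrative assumptions, not empirical figures.

def expected_gain(p_success: float, years_of_benefit: int,
                  u_grown: float = 2.0, u_stagnant: float = 1.0) -> float:
    """Expected extra utility from an intervention that triggers catch-up
    growth with probability p_success, raising annual utility from
    u_stagnant to u_grown for years_of_benefit years."""
    return p_success * (u_grown - u_stagnant) * years_of_benefit

# Spend now: 1% chance of starting growth today; the benefit lasts until
# the default catch-up date 200 years out.
spend_now = expected_gain(0.01, 200)    # 2.0 utils

# Invest for 100 years, then spend the larger pot: 4% chance of starting
# growth at year 100; the benefit lasts the remaining 100 years.
spend_later = expected_gain(0.04, 100)  # 4.0 utils

print(spend_now, spend_later)           # 2.0 4.0 -> waiting wins in this toy case
```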
Do you really, actually, in practice, recommend that everyone in the world delays all spending on global poverty/global health for 100+ years? As in, the Against Malaria Foundation should stop procuring anti-malarial bednets and just invest all its funds in the Vanguard FTSE Global All Cap Index Fund instead? Partners in Health should wind down its hospitals and become a repository for index funds? If not, why not?
With the closest thing we have to real numbers (that I've been able to figure out, so far, anyway), my back-of-the-envelope calculation above found that it was ~3x as cost-effective to donate money now than to invest and wait 100 years. Do you find that rough math at all convincing?
I don't know how to quantify the economic growth question with anything approaching real numbers. It would probably be a back-of-the-envelope calculation with a lot more steps and a lot more uncertainty than even the non-rigorous calculation I did above. There are many complicated considerations that can't be mathematically modelled.
For example: if wealthy people in wealthy countries have ~1,000x more resources in 100 years, it seems like the marginal cost-effectiveness of any one patient philanthropic foundation on global poverty would decline commensurately, since, all else being equal, you'd think overall giving to global poverty would increase ~1,000x. And as giving increased, you'd think the low-hanging fruit would get picked, economic growth would be stimulated, and global poverty would become incrementally more and more solved, such that the remaining opportunities to give would be much less cost-effective than the ones you started with 100 years ago.
If you think there's at least an, I don't know, 5% chance of transformative AI within the next 100 years, that also changes things. Because transformative AI would cause rapid economic growth all over the planet, and then the marginal cost-effectiveness of your philanthropic funds in 2126 will really have decreased. But of course the invention of transformative AI is impossible to forecast.
You can imagine similar things for other speculative futuristic technologies. If it becomes vastly cheaper to prevent and treat all infectious diseases due to new technologies or biotechnologies, or, say, someone figures out how to wipe out all mosquitoes using a gene drive or something, and countries with high rates of mosquito-borne illness decide to wipe them out, then the cost-effectiveness of any money you were investing long-term to spend on infectious diseases later will drop dramatically.
To simplify it: if you have $1 million earmarked for malaria invested until 2126, and then in 2076 someone finds a super cheap way to quickly eradicate malaria worldwide, then your $1 million is now worthless. By spending it in 2026, you could have saved 285 lives, but now you can save zero lives.
The cost-effectiveness of the spending by whoever does the super cheap way to quickly eradicate malaria is through the roof, but the cost-effectiveness of everyone else's dollars earmarked for malaria drops like a stone. So, if you're not the lucky philanthropist who funds that specific thing, you've made a terrible cost-effectiveness trade-off.
No: I think that people should delay spending on global poverty/health on the current margin, not that optimal total global poverty/health spending today would be 0.
But that's a big question, and I thought we were just trying to make progress on it by focusing on one narrow angle here: namely whether or not it is in some sense "at least 1,000x better to stimulate faster economic growth in the poorest countries today than it is to do it 100 years from now". I think that, conditional on a country not having caught up in 100 years, there's a decent chance it will still not have caught up in 200 years; and that in this case, when one thinks it through, initiating catch-up in 100 years is at least half as good as doing so today, more or less.
I thought of a way to sketch this out.
Let’s say I have $10 billion to donate.
Option A. I donate all $10 billion now through GiveDirectly. It is disbursed to poor people who invest it in the Vanguard FTSE Global All Cap Index Fund and earn a 7% CAGR. In 2126, the poor people’s portfolios will have collectively grown to $8.68 trillion.
Option B. I invest all $10 billion in the Vanguard FTSE Global All Cap Index Fund for 100 years. In 2126, I have $8.68 trillion. I then disburse all the money to poor people through GiveDirectly.
Option B clearly provides no advantage to the poor people over Option A. On the other hand, it sure seems like Option A provides an advantage to the poor people over Option B.
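A minimal sketch of the arithmetic behind the two options, using the same assumed 7% CAGR and 100-year horizon as above: the nominal sum in 2126 is identical either way, and the difference is only who holds the money in the meantime.

```python
# Options A and B from the comment above, under the same assumed 7% CAGR.
# The pot compounds identically in both cases; only the holder during
# 2026-2126 differs.

donation = 10_000_000_000            # $10 billion given in 2026
growth_factor = 1.07 ** 100          # ~868x over the century

option_a_2126 = donation * growth_factor  # recipients get it now, then invest
option_b_2126 = donation * growth_factor  # philanthropist invests, gives in 2126

print(f"Option A, recipients' wealth in 2126: ${option_a_2126:,.0f}")
print(f"Option B, amount disbursed in 2126:   ${option_b_2126:,.0f}")
# Both print ~ $8.68 trillion; the accounting differs, the arithmetic doesn't.
```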
If a philanthropist has $10 billion, I think they should prefer to arrange for Option A to happen rather than opt for Option B. But there may be other options that offer even more advantages to the poor people than Option A. So, they should seek out those options and choose an even better one, if they can.
To the extent Option B looks like it has higher impact, that’s just an artefact of how we might decide to do the accounting, rather than a true reflection of the causality involved or what’s morally best — or what the recipients of the aid would rationally prefer.
I don't think Option A is available in practice: I think the recipients will tend to save too little of the money. That's the primary argument by which I have argued for Option B over giving now (see e.g. here).
But with all respect, it seems to me that you got a bit confused a few comments back about how to frame the question of when it's best to spend on an effort to spur catch-up growth, and when that was made clear, instead of acknowledging it, you've kept trying to turn the subject to the question of when to give more generally. Maybe that's not how you see it, but given that that's how it seems to me, I hope it's understandable if I say I find it frustrating and would rather not continue to engage.
Would you mind addressing the argument that patient philanthropy is empirically ~3x less cost-effective than donating now?
I think it depends on the time horizon. If catch-up growth is not near-guaranteed in 100 years, I think waiting 100 years is probably better than spending now. If it is near-guaranteed, I think that the case for waiting 100 years is ambiguous, but there is some longer period of time which would be better.
Full-length post here. Feel free to comment if you want or not comment if you don’t want.
I didn’t understand your argument about economic growth above. I was hoping you’d give an argument based on empirical data or forecasts rather than a purely theoretical argument (e.g. utils don’t really exist, the percentage chances assigned to spurring economic growth at different funding levels are completely arbitrary, the scenario is overall contrived). So, I wasn’t convinced by that. But I acknowledge there is high uncertainty with regard to future growth, and whether patient philanthropy makes sense in practice partly depends on assumptions about growth.
Thanks, I agree that when to spend remains an important and non-obvious question! I'm glad to see people engaging with it again, and I think a separate post is the place for that. I'll check it out in the next few days.
Since we have no real numbers for that narrow angle and it involves important factors we can't mathematically model, I don't know if we can settle that narrow question.
But what about the other narrow question: that if you assume the poorest countries will grow to a per capita GDP that’s ~50% of the per capita GWP in 100 years, which we assume will continue to grow by 2% annually over that timespan, the cost-effectiveness of saving a life by donating to GiveWell's top charities today is ~3x higher than investing for 100 years and giving in 2126? Does that sound convincing at all to you?
The most arbitrary/most uncertain part of this calculation is how the per capita GDP of the poorest countries will compare to the global average over the very long-term.
By the way, how did you determine that the current margin is either just enough or too much giving on global poverty to be optimal? Why isn't the margin at which delaying is the right move a 10x higher or 10x lower level of aggregate spending? Or 100x higher/lower? How does one determine that? Is there a quantitative, empirical argument based on real data?
Interesting, say more about how you see EA struggling or failing to sit in discomfort?