There's a huge amount of energy spent on how to get the most QALYs/$. And a good amount of energy spent on how to increase total $. And you might think that across those efforts, we are succeeding in maximizing total QALYs.
I think a third avenue is under-investigated: marginally improving the effectiveness of ineffective capital. That is to say, improving outcomes, only somewhat, for the pool of money that is not at all EA-aligned.
This cash is not being spent optimally, and likely never will be. But the sheer volume could make up for the lack of efficacy.
Say you have the option to work for the foundation of one of two donors:
- Donor A only has an annual giving budget of $100,000, but will do with that money whatever you suggest. If you say “bed nets” he says “how many”.
- Donor B has a much larger budget of $100,000,000, but has much stronger views, and in fact, only wants to donate domestically to the US.
Donor B feels overlooked to me, despite the fact that even within the US, even without access to any of the truly most effective charities, there are still lots of opportunities to do marginally better.
In practice, I note a conspicuous lack of EAs working for Donor B-like characters. There does not seem to be any kind of concerted effort to influence ineffective foundations.[1]
Most money is not EA money
I’ve often heard the seeing-eye dog argument for the overwhelming importance of EA:
One seeing-eye dog charity claims it costs ‘$42,000 or more’ to train the dog and provide instruction to the user. A cataract charity claims to be able to perform the procedure for $25.
Less anecdotally, an 80k report highlights the power law in the effectiveness of global health interventions.
The power law of impact is a really strong argument for prioritizing QALYs/$, even at the cost of overall dollars. If the best interventions are literally 100x or even 1,000x as impactful as the median ones, that is going to be tough to make up for.
On the other hand, there is a really steep power law on the dollars side too! Some people have way more money than others. And most money in the world is not EA money.
Take the Azim Premji Foundation. It has a massive endowment of $21 billion, yet there are no mentions of it on the EA Forum.
APF was founded with an explicit focus on education in India, so their grant making is pretty restricted, and it might feel like getting them to have anything to do with EA would be a huge long shot. But there is a lot of room between "status quo" and "fully optimized", and this space feels neglected (to EAs), tractable and highly scalable.
Modified from Giving What We Can
As a simplified model, let's take these numbers literally and assume that the APF currently operates at a mere 1x. Their geographic and cause restrictions might mean they'll never reach 100x, but 10x seems entirely plausible. If the status quo is that $20b of APF giving is worth $0.2b of GiveWell giving, there's an opportunity to 10x that impact and generate a net $1.8b in GiveWell-equivalent impact.
Those units aren't entirely intuitive and the numbers are largely made up, but they get across the point that moving huge sums of money from Charity C to Charity B matters a huge amount, even if you never get to Charity A.
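The multiplier arithmetic above can be sketched directly. This is a toy model in the post's own made-up units, pegging GiveWell-style giving at a stylized 100x ceiling; none of these are real cost-effectiveness estimates:

```python
# Toy model of the APF example: all multipliers and dollar figures are
# the post's illustrative numbers, not real cost-effectiveness estimates.

GIVEWELL_MULTIPLIER = 100.0  # stylized "best intervention" ceiling

def givewell_equivalent(budget, multiplier):
    """Convert a giving budget at some effectiveness multiplier
    into GiveWell-equivalent dollars."""
    return budget * multiplier / GIVEWELL_MULTIPLIER

def gain_from_improvement(budget, before, after):
    """Extra GiveWell-equivalent impact from lifting a budget's
    effectiveness from `before`x to `after`x."""
    return givewell_equivalent(budget, after) - givewell_equivalent(budget, before)

# $20b at 1x is worth $0.2b of GiveWell giving; lifting it to 10x
# adds $1.8b of GiveWell-equivalent impact.
apf_gain = gain_from_improvement(20e9, before=1, after=10)
```

The point of writing it out is that the gain scales linearly with the budget: a small multiplier improvement on a huge pool can dwarf a large improvement on a small one.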
Going further, there could be room to negotiate or expand the charity's entire charter. One could plausibly argue that Vitamin A supplementation should count as an education intervention, since it can prevent blindness. And in fact, APF has already begun to branch out into health, specifically the nutritional status of young children. Nudging them in this direction even a year earlier, or doing the work they now want to do in a marginally more effective way (even without, say, redirecting the funds to Niger), could be hugely impactful.
How much money is there?
GiveWell already directs hundreds of millions of dollars per year, and in some sense that's a lot, but it pales in comparison to the total pool of capital that feels, in principle, "available".
As of 2025, there were roughly 3,000 billionaires with a combined net worth of about $16T. That is a lot of money! They might not give it all to charity, but they will have to do something with it, and we should work hard to make sure that something is as good as it reasonably can be.
There are already lots of very motivated EAs trying to direct Dustin Moskovitz’s relatively modest $12b. There seem to be way fewer trying hard to direct even 1% of the budgets of other billionaires. There is only one mention of Amancio Ortega on the EA Forum, even though he is the 9th richest person in the world with a net worth of $124b. And there is barely any mention of Bernard Arnault, Larry Ellison or Steve Ballmer.
These four alone represent over $600b in wealth that could at least be spent on marginally better causes. And in fact, they’ve already collectively spent billions on philanthropic causes. Marginally improving even a small portion of these donations could be huge.
Effective Everything?
EA-relevant organizations do around $1b/year in giving, but the world collectively does around $1t. There’s a tremendous neglected opportunity to improve the effectiveness of the world’s charities, even without making it all the way to truly optimal causes.
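Using the same made-up units as the APF example (GiveWell-style giving pegged at a stylized 100x ceiling), a quick back-of-envelope comparison shows why the volume side matters. The $1b and $1t figures come from the post; everything else here is an illustrative assumption:

```python
# Illustrative only: the $1b and $1t annual flows come from the post;
# the 100x ceiling is its stylized power-law figure, not real data.

EA_GIVING = 1e9       # dollars/year, assumed already at the 100x ceiling
WORLD_GIVING = 1e12   # dollars/year, assumed to sit at roughly 1x

def givewell_equivalent(budget, multiplier, ceiling=100.0):
    return budget * multiplier / ceiling

# Nudging the non-EA pool from 1x to just 2x...
extra = (givewell_equivalent(WORLD_GIVING, 2)
         - givewell_equivalent(WORLD_GIVING, 1))

# ...adds GiveWell-equivalent impact equal to ten times the entire
# EA-aligned flow, without ever reaching the most effective charities.
ratio = extra / givewell_equivalent(EA_GIVING, 100)
```

Under these assumptions, even a 1x-to-2x shift in the non-EA pool is worth roughly $10b/year in GiveWell-equivalent terms, an order of magnitude more than the entire EA-aligned flow.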
It feels, in some sense, against EA principles, but there should be lists of "the most effective charities, given that you can only give to country X" or "given that you're only interested in cause X". Beyond analysis, there is a huge amount of room for direct engagement, and for trying to work at some of the world's largest non-EA foundations.
DAFgiving360™ does not sound like an EA nonprofit. It works with Charles Schwab & Co., which is not a very EA organization. And yet it has handled $44 billion in grant recommendations, will do a lot more in the future, and is hiring a senior manager for charitable consulting. This kind of job does not currently end up on the 80,000 Hours Job Board, but I believe it really ought to. And we ought to think harder about "marginal reform of legacy non-EA institutions" as an important skill set.
- ^
An obvious explanation for the lack of visibility would be that these people don’t want to identify as EAs, because it would alienate "normie" donors. This is possible, but I’m still suspicious that I’ve literally never heard of anyone in EA taking this path to impact.
I agree that it could be useful but I don't think it's as neglected as you think.
Anecdotally, I know quite a few people in your second category: people in less 'EA'-branded areas/orgs (although a lot of them will have more impact). There are several orgs looking into advising donors who haven't heard of, or aren't interested in, EA (Giving Green, Longview, Generation Pledge, Ellis Impact, Founders Pledge, etc).
I think some may not be seen in EA spaces as much because of PR concerns but I think the main reason is that they are focused on their target audiences or mainly just interact with others in the effective giving ecosystem.
Also it's not quite the forum, but I did link to a blog listing Azim Premji on this global health landscape post (not that you would ever be expected to know that).
Robin Hanson's post Marginal Charity seems relevant, even though it's a distinct idea.
I agree with the overall statement of this post. Regarding a "GiveWell of X" type of organization, I believe it would have to function quite differently, ideally working only on demand instead of doing broadly aimed research, for the following two reasons:
I really hope this is the sort of thing that can be enabled by the (relatively recent) shift to principles-first EA as a community outreach organising mode.
Previously, EA was a union of three(ish) cause areas sharing the same infrastructure, and getting someone "into EA" was basically about persuading them on one or more of the cause areas. I think this is a mistake.
I really think we have the opportunity to make EA more of an educational program on how to evaluate impact and optimise for effectiveness given the resources you have, as well as the provision of such resources insofar as they are copyable without marginal cost. And that means that this kind of stuff is included in the movement.
Broadly agree that applying EA principles to other cause areas would be great, especially for areas that are already intuitively popular and have a lot of money behind them (eg climate change, education). One could imagine a consultancy or research org that specializes as a "GiveWell of X".
Influencing where other billionaires give also seems great. My understanding is that this is Longview's remit, but I don't think they've succeeded at moving giant amounts yet, and there's a lot of room for others to try similar things. It might be harder to advise billionaires who already have a foundation (eg Bill Gates), since their foundations see it as their role to decide where money should go; but doing work to catch the eye of newly minted billionaires might be a viable strategy, similar to how GiveWell found a Dustin.
Agree with your assessment of the DAF360 job. I also generally agree with your overall points.
I've said (in like 5 comments now) that a very high impact opportunity is to influence grantmakers of large private foundations.
I strongly agree! Improving the cost-effectiveness (and cost-efficiency) of non-EA resources seems underexplored in EA discussions. I'd argue this applies to talent, not just funding.
In mainstream fields like global development and climate change, there are many talented, impact-driven professionals who don't know EA or wouldn't join the EA community (perhaps disagreeing with cause-neutrality or the utilitarian foundations). Yet many of these professionals would be quite willing to put in a lot of effort and energy into high-impact projects if exposed to important agendas and projects they're well-positioned to tackle. There could be significant value in shaping agendas and channeling these professionals toward more impactful (not necessarily "most impactful" by EA standards) work within their existing domains.
I should note this point is less relevant/salient for AI Safety field-building, where there already seem to be more pathways for non-EA people and broader engagement beyond the EA-aligned community.
On an additional note: Rethink Priorities' A Model Estimating the Value of Research Influencing Funders report had a relevant point:
"Moving some funders from an overall lower cost effectiveness to a still relatively low or middling level of cost effectiveness can be highly competitive with, and, in some cases, more effective than working with highly cost-effective funders."
I'm glad you shared this. I wrote something 18 months ago that was trying to get at a similar point, but then got distracted by delivering on contracts and applying for jobs, so never followed up on it.
Having spent most of my career working with and consulting for funders like 'Donor B', I couldn't agree more.
It also prompts me to reflect on a conversation I had at EAG this year, in which a prominent EA shared that they were considering applying to be CEO of a well-resourced non-EA animal welfare organisation. I'm not sure if they ended up applying, or how that went, but I suspect that this kind of thing might also help on the path to (marginally more) 'effective everything'.