This is a couple of weeks old, but I don't think it's been shared here yet. For context: GiveWell recently took GiveDirectly (which runs cash-transfer programs) off its "top charities" list. GiveDirectly wrote a very gracious and interesting response. One section particularly jumped out:

> GiveWell is a charity evaluator with the stated goal of “finding and recommending a small number of outstanding giving opportunities to help donors save or improve lives the most with their gifts.” As one of the largest private funders in global health and development, they’ve made a huge difference in the lives of millions and been a positive influence in the sector. Their priority is to maximize the impact of the funds they direct from donors (~$500M in 2021), so they choose cost-effectiveness cutoffs based on the amount they expect to move.
>
> But there’s a lot more money at stake. Official development assistance is $178B a year, and U.S. charitable giving alone is another $484B a year. Recently, Mackenzie Scott gave away $4B in 9 months, and Elon Musk debated how to use $6B with the head of the World Food Programme. We want an approach to giving well that has answers for budgets at these scales.
>
> GiveWell would say they’re focused on prudently allocating the money they expect to direct, and if they received even more funds, they would figure out how to allocate those well too. We expect that they would. But GiveWell is not just any other donor — it is the premier, trusted voice on how to give. Structuring GiveWell’s recommendations only around the funds they expect to direct, at best, says nothing about the vast majority of funds that could help people living in extreme poverty. At worst, it suggests these funds can’t do much good.

This strikes me as a good point. It makes complete sense to direct the funds we have, right now, to charities that are 10x as cost-effective as cash transfers. But my (very uncertain!) understanding is that those programs will run out of room for funding at some point. So it does seem weird for EA organisations not to say anywhere that, if we had billions more in resources, we could also massively scale up cash transfers. That seems particularly worth saying because, by highlighting it, we might be able to encourage those billions to come our way. To take GiveDirectly's example, we could say to Mackenzie Scott: if you give us $4B tomorrow to spend on global health, we can give all of it away really effectively through a mix of malaria nets, malaria chemoprevention, vitamin A supplements, conditional cash transfers and unconditional cash transfers.
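To make that allocation logic concrete, here's a toy sketch: fill the most cost-effective programs first, and let whatever remains overflow into cash transfers once each program's room for more funding runs out. All the multipliers and funding caps below are made up for illustration; they are not GiveWell estimates.

```python
def allocate(budget, programs):
    """Greedily fill programs in descending order of cost-effectiveness.

    `programs` is a list of (name, multiplier_vs_cash, room_for_funding);
    room_for_funding=None means effectively unlimited absorptive capacity.
    """
    allocation = {}
    for name, multiplier, room in sorted(programs, key=lambda p: -p[1]):
        grant = budget if room is None else min(budget, room)
        if grant > 0:
            allocation[name] = grant
        budget -= grant
        if budget <= 0:
            break
    return allocation

# Hypothetical example: a $4B gift, with made-up room-for-funding caps.
programs = [
    ("malaria nets",                 10.0, 1.0e9),
    ("malaria chemoprevention",       9.0, 0.8e9),
    ("vitamin A supplementation",     8.0, 0.4e9),
    ("conditional cash transfers",    2.0, 0.5e9),
    ("unconditional cash transfers",  1.0, None),  # vast absorptive capacity
]

for name, amount in allocate(4.0e9, programs).items():
    print(f"{name}: ${amount / 1e9:.2f}B")
```

On these made-up numbers, the first four programs absorb $2.7B and the remaining $1.3B flows into unconditional cash transfers, which is roughly the story GiveDirectly is telling about budgets at the Scott/Musk scale.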

But, like I say, I'm uncertain here, and would love other people's thoughts. One idea: perhaps GiveWell should have a "If we had X amount of money we'd do this" page, with milestone targets?

Comments (11)



But large donations to the other charities might still have greater value than to GiveDirectly. After those charities meet their room-for-more-funding ("RFMF") threshold, they can expand their programs in following years. The return might not be 10x GiveDirectly's, but it won't immediately fall below it.

Also, there is a range of other charities GiveWell suggests might exceed the return of GiveDirectly, just not by 10x (e.g. the fistula fund).

I suppose the operative question is: what is the total funding capacity of all programs that are demonstrably more cost-effective than GiveDirectly? Would it be in the tens of billions? Hundreds of billions? Trillions?

Direct cash transfers to the global poor could likely absorb trillions of dollars annually and still be very cost-effective. But if there's no risk of exhausting the lower-hanging fruit any time soon, I suppose it's not a concern.
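For a rough sense of scale, here's a back-of-envelope sketch; the population figure and transfer size are assumptions for illustration, not sourced estimates:

```python
# Back-of-envelope: annual absorptive capacity of direct cash transfers.
# Both inputs are assumptions for illustration.
recipients = 700e6         # people below the extreme-poverty line (assumed)
transfer_per_year = 1_000  # USD per person per year (assumed)

capacity = recipients * transfer_per_year
print(f"~${capacity / 1e12:.1f} trillion per year")  # ~$0.7 trillion per year
```

Whether this lands in the hundreds of billions or the trillions depends mainly on the transfer size and which poverty line you count people under.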

Yes, but we are far from exhausting solutions better than giving money directly, and I doubt we ever will. And then there is the scaling problem: as long as GiveDirectly is small, problems such as inequity, corruption, inflation and freeloading are all negligible. If we scaled GiveDirectly up to a significant size, how would these problems grow?

My nonprofit, the Consumer Power Initiative, has a plan to leverage consumer sentiment to direct a significant portion of the global economy to effective charities. This might create the scenario where we've plucked all the fruit hanging lower than GiveDirectly.

I actually suspect that GiveDirectly would become more efficient and judicious with its resources as it scaled, while the extent of need would allow it to absorb hundreds of billions of dollars annually without large utility losses.

To learn more about my project, here's my most recent newsletter.

https://drive.google.com/file/d/1jXeT6SHoLoaXfkoT_7YCSgpMGHTiDwFU/view?usp=drivesdk

Ah yeah, that’s a good point. I guess what I’d love to see is what Brad mentions — a sense of how much money GW thinks it can distribute before getting to GD levels of return.

There are definitely worlds where effective charities gain much more money, in which case outlets with extremely deep capacity for funding, like GiveDirectly, will be critical.

For instance, if the Guided Consumption project of the Consumer Power Initiative and BOAS takes off, there may be tens or hundreds of billions of dollars that need to be directed towards effective global health and development and animal welfare charities. If you're interested in learning more about that:

https://forum.effectivealtruism.org/posts/WMiGwDoqEyswaE6hN/making-trillions-for-effective-charities-through-the

Charitable and other private efforts are essential to alleviate immediate suffering.

However, I support GiveDirectly because they are also gathering data about the positive effects of giving people cash rather than goods and services chosen by donors. That data can then be used as convincing evidence for governments everywhere to institute a Universal Basic Income, replacing the current inefficient and demeaning welfare models.

This would eliminate the need for a charity industry that, in effect, leaves in place a socioeconomic system designed to extract surplus wealth and direct it towards the top decile of a population.

I don't really know how giving works for a very wealthy person, but it seems unlikely to me that they or someone on their staff would just look at the GiveWell site and be done. It seems a lot more likely that they would have a conversation, with GiveWell staff or others, which would create an opportunity for more nuanced advice. So I really doubt this matters much for that scenario.

"If we had X amount of money we'd do this" page, with milestone targets?

That's a neat idea!

Yeah, good point: giving is definitely more involved at the billionaire level. But I do still think the message of “we would like as much as you can give, we can do so much with your money!” is a good thing to have circulating — billionaires are just as online as anyone else and those messages might resonate!

Thanks for posting these two together!

Reading the GiveDirectly response highlighted important epistemic differences as a key differentiator for donors who may want to give directly: namely, whether you agree with GiveWell's moral weights and whether you value empowering individual choice.

While many donors may be seeking to maximize "impact" from an outcomes perspective and agree with GiveWell's approach to moral weights, I've found that individual empowerment resonates equally if not more strongly with some groups. Even if GiveWell stops directing funds to GiveDirectly, it seems important to highlight these differences in values and approach elsewhere in donor-facing materials in EA, such as this GWWC page.

I think the 'resonating with individual empowerment' point is important. While GiveDirectly may not be as effective as the top charities recommended by GiveWell, in my experience it has a low intellectual bar to entry for getting non-EAs to donate. I've had trouble convincing certain people to donate to charities like GW's Top 4 (and existential risk initiatives are an even harder sell), but GiveDirectly seems to resonate quite easily — it's still more effective than most of the charities out there, especially in the poverty alleviation space.
