
Author: Isabel Arjmand

This post is the first in a multi-part series, covering how GiveWell works and what we fund. We'll add links to the later posts here as they're published. Through these posts, we hope to give a better understanding of our research and decision-making.

Why cost-effectiveness matters

The core question we try to answer in our research is: How much good can you do by giving money to a certain program?

Consider how much good your donation could do if you give to a program that costs $50,000 to save a life versus one that costs $5,000 to save a life (which is roughly what we estimate for our top charities). Giving to the latter would have 10 times more impact. While in an ideal world both programs would receive funding, we focus on identifying the most cost-effective programs so that the limited amount of funding available can make the greatest difference.

The basics

We've written in detail here about our approach to cost-effectiveness analysis and its limitations. Our bottom-line estimates are always uncertain, and we don't expect them to be literally true. At the same time, they help us compare programs to each other so that we can direct funding where we believe it will have the greatest impact.

At a very high level, assessing cost-effectiveness generally involves looking at:

  • The cost per person reached. For example, how much does it cost to treat one child with vitamin A supplementation for one year?
  • The outcomes of the program. Determining these outcomes often involves examining two factors:
      • The overall burden of a problem. For instance, how many kids who will be reached with vitamin A supplementation would otherwise have died?[1]
      • The effect the program has. For example, how much does vitamin A supplementation reduce mortality rates relative to that baseline, and are there other benefits to providing vitamin A?
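As a toy sketch of how these factors combine into a cost-per-outcome figure (every number below is invented for illustration and is not a GiveWell estimate):

```python
# Toy cost-effectiveness sketch. All inputs are hypothetical
# illustrations, not GiveWell's actual parameters.

cost_per_child_per_year = 2.0     # cost to reach one child with the program (USD)
baseline_mortality = 0.004        # share of reached children who would otherwise die
relative_risk_reduction = 0.12    # proportional mortality reduction from the program

# Burden x effect gives deaths averted per child reached;
# dividing cost by that yields cost per death averted.
deaths_averted_per_child = baseline_mortality * relative_risk_reduction
cost_per_death_averted = cost_per_child_per_year / deaths_averted_per_child

print(f"Cost per death averted: ${cost_per_death_averted:,.0f}")
```

The real analyses involve dozens of parameters and adjustments, but the basic shape is this: a cost side and an outcomes side, divided.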

We use unconditional cash transfers as a benchmark for comparing opportunities, such that a program is estimated to be "12x cash" if we believe it's 12 times more impactful per dollar than giving money directly to people living in poverty. In other words, if we estimate that a program is 12x cash, we think donating $100 to that program does as much good as donating $1,200 to a program that delivers unconditional cash transfers.
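A minimal sketch of the benchmark arithmetic, with hypothetical per-dollar values:

```python
# Express a program's impact as a multiple of unconditional cash
# transfers. Both per-dollar values below are made up for illustration.

value_per_dollar_program = 0.36   # modeled units of good per dollar, hypothetical
value_per_dollar_cash = 0.03      # same units, for unconditional cash transfers

multiple_of_cash = value_per_dollar_program / value_per_dollar_cash
print(f"{multiple_of_cash:.0f}x cash")

# A $100 donation to a 12x program does as much good as $1,200 in cash:
donation = 100
cash_equivalent = donation * multiple_of_cash
```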

More detail

We aim to come to an all-things-considered view that involves a variety of complex factors and judgment calls. The evidence we rely on varies in strength but is always imperfect, and our conclusions represent best guesses rather than truth. (More on judgment calls and uncertainty in a later post.)

For our top charities, our cost-effectiveness analyses generally involve dozens of parameters and assumptions. In constructing these analyses, we ask questions like the following:

  • How well do the study results generalize to a different time and setting? (That is, what is the evidence's external validity?) For example, what are current levels of insecticide resistance relative to study conditions, and how do they affect the efficacy of antimalarial nets?[2]
  • Are people already receiving the relevant program even in the absence of our funding? For example, how many kids served by a recent grant to Helen Keller International would receive vitamin A supplementation outside of its program?[3]
  • If we fund an intervention, such as insecticide-treated nets, how would that affect what other funders spend on it? Would they have funded this program if we didn't? If we're influencing them to spend more on this program, what would they otherwise have spent their funding on?[4]
  • How would GiveWell funding affect the actions of governments in the countries where grants are funded? For example, will Evidence Action's work scaling up syphilis testing during pregnancy be successfully transitioned to the government? Would the government take on that program in the absence of Evidence Action's work, and if so, on what timescale?[5]
  • What spillover effects might an intervention have? For example, as New Incentives increases immunization rates, what impact does that have on disease transmission throughout a population?[6]
  • How does the impact of averting a death compare to the impact of doubling someone's income?[7]

The amount of time we spend on a research question depends both on how much progress we believe we can make and how important the answer is to the bottom line. Many considerations are built into the main model, and we make supplementary adjustments for other factors to yield a final estimate.
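Mechanically, comparisons like deaths averted versus income doubled rely on moral weights that put different outcomes in one unit. A minimal sketch using the roughly 100:1 ratio described in footnote 7; the outcome counts are invented for illustration:

```python
# Combine different outcomes into one unit of value via moral weights.
# The ~100:1 death-vs-doubled-consumption ratio is from footnote 7;
# the outcome counts below are hypothetical.

value_per_year_doubled_consumption = 1.0   # arbitrary base unit
value_per_death_averted = 100.0            # ~100x the base unit

deaths_averted = 10
years_of_doubled_consumption = 250

total_value = (deaths_averted * value_per_death_averted
               + years_of_doubled_consumption * value_per_year_doubled_consumption)
print(total_value)
```

Dividing a total like this by a program's cost gives the per-dollar value that feeds the multiples-of-cash comparison.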

We aren't inherently risk-averse in our assessments—we fund the global health and development programs we think are best. That said, we do look for reasonably compelling evidence in order to fund something. Not everything we fund needs to be backed by a randomized controlled trial, but our goal is to help our donors give with confidence. And for our top charities, where we want to give donors an even higher level of confidence in the impact of their donations, we have additional criteria. More on those here.

Qualitative considerations

Qualitative factors that aren't part of our formal quantitative model can influence our final recommendations, especially when we're deciding between several programs with similar apparent cost-effectiveness. We bake as much as we reasonably can into our quantitative analyses, but still ask ourselves questions like these: Does this organization have a strong track record? Are there risks to program implementation that we haven't fully captured? Does this bottom-line estimate match our true beliefs, which might draw on relevant information and intuitions that are hard to quantify? We might recommend grants that we estimate to be slightly below our cost-effectiveness threshold if we think they're especially strong qualitatively, and vice versa.

A small portion of our grantmaking relies on qualitative reasoning rather than substantial quantitative modeling. For two examples, see this grant for research on cash spillovers and this grant to the Agency Fund. In those cases, the grant investigator believed that the expected impact of the grant justified its cost, but may not have conducted a formal analysis.

Further reading

For a deeper dive into GiveWell's cost-effectiveness work, see the following materials:

  • The cost-effectiveness analyses for our top charities
  • Our high-level page on cost-effectiveness
  • Examples and discussion on specific grant pages, such as this grant to Nutrition International for vitamin A supplementation in Chad
  • This walkthrough of the cost to save a life, using an antimalarial net distribution in Guinea by the Against Malaria Foundation as an example

Putting it all together

Our cost-effectiveness models aim to incorporate as many of the important considerations as we reasonably can, and much of our research effort goes toward building these models and refining the key parameters.

Cost-effectiveness matters because we aim to direct funding to the highest-impact opportunities we can identify. With our current cost-effectiveness threshold, this means recommending programs we believe are at least 10 times as impactful per dollar as unconditional cash transfers to people living in extreme poverty.

In future posts, we'll share more about the details of our research and the types of opportunities we fund. In the meantime, please comment or email us at info@givewell.org with any questions!


Footnotes

1. This framing doesn't apply to every program. For a program like unconditional cash transfers, in which selected families living in poverty receive cash without a requirement to meet additional conditions, it might make sense to focus primarily on the cost per household and the effect on each household. But for many of the programs we look at, like health programs that aim to cover an entire population in which not everyone is affected by a particular problem, the burden of the problem the program addresses is an important factor.

2. We last updated our full report on insecticide resistance in 2020, and have continued to research it. This spreadsheet has our current bottom-line estimates of insecticide resistance by country and type of net.

3. In the case of our April 2023 grant to Helen Keller International for vitamin A supplementation in Madagascar, we noted "a recent survey estimated that 44% of children have been receiving VAS in the areas Helen Keller plans to support. We may over- or under-estimate cost-effectiveness if either (a) Helen Keller's support primarily leads to reaching children who would [not] have received VAS without their support, or (b) the additional children reached are systematically at lower [higher] risk of illness and death than those who are reached by the routine health system."

4. In many of our cost-effectiveness analyses, we consider what other funders are contributing and how our funding might change their actions, as discussed in this 2018 blog post. (Note that the specific numbers in the 2018 blog post are now outdated.)

5. For our grant to Evidence Action for syphilis screening and treatment in pregnancy in Zambia and Cameroon, we created forecasts, including projecting a 55% chance that "Evidence Action will have fully transitioned this work to the government in Zambia in 5 years (2027), spending less than $150,000 per year in-country on this program." We also projected a 50% chance that the Zambian government would have scaled up the program in one year in the absence of Evidence Action's support (see this section of our cost-effectiveness analysis).

6. We estimate that New Incentives' program is 25% more cost-effective than our main model would otherwise suggest because of the benefits of herd immunity (see here). We don't actually model the disease transmission effects and think it would be challenging to do so, but we include the 25% increase in cost-effectiveness as a very speculative best guess of the magnitude of the effect.

7. We compare different outcomes to one another via our moral weights. We currently value an averted death as roughly 100 times more valuable than a year of doubled consumption. These tradeoffs aren't limited to deaths averted vs. income increased; we also consider the value of disability averted (as in the case of clubfoot treatment) and other benefits.





Comments

Thanks for sharing!

I noticed that many of your (marginal) cost-effectiveness estimates are higher than 10 times that of cash transfers. For example, the mean estimate for the Against Malaria Foundation is 14.8. However, in theory, each intervention should be funded until the marginal cost-effectiveness reaches the bar. I guess the difference is explained by considerations not modelled in the cost-effectiveness analysis, but I am surprised the difference can be so large.

Hi Vasco,

Thanks for your comment! To clarify, our funding bar being 10x cash doesn't mean that every grant we make will be to things that are 10x cash – it means that we'll generally fund all of the programs we find that are above 10x, and not the ones that we estimate to be below 10x (with caveats that sometimes we will/won't make grants that fall on either side of that line for other reasons not captured in the CEA, e.g. learning value). You can read more on how we make funding decisions here.

Many of the grants we make are above 10x, including a fair amount in the 10-20x range (like this recent CHAI grant – we estimate delivering the program is ~17x cash, not counting the evaluation grants). Using Against Malaria Foundation (AMF) as an example, we fund net distribution campaigns in specific geographic regions that meet our 10x bar (see this grant made to AMF in January 2022 that supports net distribution campaigns in three Nigerian states). Theoretically, if we evaluated six states for net distribution campaigns and only four states met our criteria to be above 10x, we would only fund those four and not the other two, and the average cost-effectiveness across those four states would be higher than 10x.

Thanks for clarifying!

It makes sense that your grants have a cost-effectiveness higher than your bar of 10 times that of cash. However, I wonder whether the cost-effectiveness of the last dollar of each of your grants is around 10 times that of cash. For example, you recommended 8.2 M$ to AMF in January 2022, and estimated the cost-effectiveness to be 12 times that of cash. However, if I donate 1 k$ to All Grants Fund, the cost-effectiveness of my donation will be 10 times that of cash, right? Due to diminishing returns, the cost-effectiveness of my donation would be lower than that of your grants, but I am not sure whether you select the size of your grants such that their last dollar has a cost-effectiveness of 10 times that of cash.

We don't select/structure our grants such that we necessarily think the "last dollar" or marginal dollar to that grant is 10x cash. For example: if there was a discrete $5M funding opportunity to support a program in a specific area, we might model the cost-effectiveness of that opportunity as say, 15x overall, but there wouldn't be any particular reason to think the 'last dollar' was more like 10x. Generally, when it comes to funding discrete opportunities (e.g. vaccination promotion in a certain state in Nigeria), we don't tend to think about the value of the first versus last dollar for that discrete opportunity, because we're often making a binary decision about whether to support the program in that area at all. Hope this clarifies! 

Thanks for the helpful clarification! I would be curious to know whether you have explicitly modelled diminishing returns in the context of assessing non-discrete opportunities.

We haven't explicitly modeled diminishing returns in this way. Most of the opportunities we consider are for specific pre-defined gaps, so they're more discrete than something you can scale in that continuous way.

If right after AMF received the grant you recommended, I donated 1 k$ to AMF, would the cost-effectiveness of my donation be 10 times that of cash?

I'd guess so, given how large AMF's room for more funding is (GW as of July '22, AMF CEO on 300 M$ shortfall 2024-26), but I'm also curious to know what GW has to say.

That said, one consideration pushing the cost-effectiveness bar up and one down:

  • up: cf. 2nd bullet in section 4.2, GW's AMF model reports effectiveness per philanthropic dollar, but I don't care about that; I care about my own philanthropic dollar. This overcounts the cost denominator by 47-133% depending on region 
  • down: if you donated directly to AMF, they would likely fund net distribution according to their internally-prioritized considerations, whereas if you donate via GW they'll (quoting GW's response to you above) "fund net distribution campaigns in specific geographic regions that meet [their] 10x bar", which isn't an option available to you specifically. This is part of my own reasoning for not donating directly to charities; I'm also mostly persuaded by GWWC's funds-over-charities argument. Perhaps this isn't really what you meant though, in which case I apologize for misconstruing.

Nice points, Mo!

I'd guess so, given how large AMF's room for more funding is (GW as of July '22, AMF CEO on 300 M$ shortfall 2024-26), but I'm also curious to know what GW has to say.

Makes sense. Large room for more funding means the cost-effectiveness of additional funds would match that of the last dollar of the big grant. However, GiveWell's cost-effectiveness estimates of AMF do not depend on the size of the donation, so I do not think GiveWell's cost-effectiveness sheet is modelling diminishing returns. This seems important for inferring the cost-effectiveness of the last dollar of their grants.

This is part of my own reasoning for not donating directly to charities; I'm also mostly persuaded by GWWC's funds-over-charities argument. Perhaps this isn't really what you meant though, in which case I apologize for misconstruing

Good catch! I had in mind donating 1 k$ to the same programs the big grant was supporting, but as you said this would not be possible by donating to AMF, so I will update my comment above. Now it is about donating to All Grants Fund at any time instead of donating to AMF after a big grant from GiveWell.
