

We want to share a few thoughts that might help clarify our approach.

Our research incorporates potential downsides and unintended consequences, so our recommendations take factors like side effects into account. Most (if not all) of the information in Leif Wenar's WIRED piece is something anyone could find on our website. However, we believe some of what the piece publishes is misleading or inaccurate. Here is a lightly edited version of the material that GiveWell sent to WIRED in response to a request for comment on Leif Wenar's piece.

Hi Ian, thanks for sharing this article – our team wrote up some notes on how this topic intersects with our work, and I (Isabel Arjmand writing on behalf of GiveWell) thought it might be useful to share here.

Like the Bloomberg editorial board, we're concerned about stalling progress in the fight against malaria, but we're skeptical that quality issues with PermaNet 2.0s have influenced this progress as much as the article suggests.

All things considered, we believe that malaria nets are, and have been, highly effective in reducing malaria burden. The Against Malaria Foundation had first shared the studies highlighted above with us in 2020, and the claims in the Bloomberg article have prompted us to do some additional research.

Based on the work we've done so far, we aren't convinced that decreased net quality is primarily responsible for malaria resurging in Papua New Guinea. So far, we see this as a milder negative update on nets than the article would indicate, in part because we think these tests of net quality may not be a perfect proxy for effectiveness in reducing cases and in part because we no longer fund PermaNet 2.0s (for unrelated reasons). At the same time, renewed interest in the evidence around PermaNet 2.0 quality is a nudge for us to prioritize further work to understand net quality control in general.

More detail on the implications of this research for GiveWell's work

While we no longer fund PermaNet 2.0s because we now fund newer types of nets instead, they make up roughly 20% of the nets we've funded historically. The studies referenced in the Bloomberg article looked at nets distributed in Papua New Guinea and indicate that post-2012 PermaNet 2.0s perform worse on certain efficacy tests. We aren't sure how well those efficacy tests serve as a proxy for malaria transmission (e.g., mosquitoes in these tests could be impaired by exposure to insecticides even if that exposure isn't sufficient to kill them). We're also skeptical that changes to the formulation of PermaNet 2.0s were the key driver of increased malaria cases in Papua New Guinea. During this time, we think other factors, like insecticide resistance and shifts in biting patterns, likely played a meaningful role (as highlighted in this paper). That said, we see these studies as a negative update on the effectiveness of those nets.

We did a quick back-of-the-envelope calculation (so this is more illustrative than fully baked, at this point):

  • Assuming the insecticide treatment on PermaNet 2.0s was 80% less effective after 2012 would make those nets look 30-50% less effective overall than we'd previously modeled. That's because we model roughly 30% of the benefit of nets as coming from the physical barrier in the absence of insecticide resistance, and we already discount the effectiveness of nets like PermaNet 2.0 because of insecticide resistance. We would guess that with further work, we'd estimate that 80% is on the pessimistic side of things (which would put the overall impact on net efficacy at the low end of our 30-50% range, or lower).
  • Then, assuming that similar issues don't apply to other nets (which could be wrong – we plan to look into this more), our overall nets grantmaking would look roughly 5-10% less cost-effective than we'd previously estimated, since PermaNet 2.0s are around 20% of our historical nets distributed. That proportion has varied over time. In 2018, all of the nets we funded were PermaNet 2.0s; now, we fund newer types of nets instead.
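The arithmetic in the two bullets above can be laid out as a small sketch. This is our own illustrative reconstruction, not GiveWell's actual model: the 30% barrier share, the 80% insecticide loss, and the 20% portfolio share come from the text, while the resistance-discount values are invented to show how the 30-50% range could arise.

```python
# Illustrative reconstruction of the back-of-envelope above.
# Parameter values are assumptions stated in the text, not GiveWell's model.

BARRIER_SHARE = 0.30      # benefit attributed to the physical barrier alone
INSECTICIDE_LOSS = 0.80   # assumed drop in insecticide effectiveness post-2012

def overall_loss(resistance_discount):
    """Fraction of a PermaNet 2.0's modeled benefit lost, given a
    pre-existing insecticide-resistance discount (0 = no discount)."""
    insecticide_share = (1 - BARRIER_SHARE) * (1 - resistance_discount)
    total_benefit = BARRIER_SHARE + insecticide_share
    return insecticide_share * INSECTICIDE_LOSS / total_benefit

# No resistance discount -> ~56% loss; heavier discounts -> ~43% and ~29%.
# These bracket the 30-50% range quoted above.
for discount in (0.0, 0.5, 0.75):
    print(round(overall_loss(discount), 2))  # 0.56, 0.43, 0.29

# Portfolio-level effect: PermaNet 2.0s are ~20% of nets funded historically,
# so a 30-50% per-net loss translates to roughly a 6-10% portfolio loss,
# in line with the ~5-10% figure above.
portfolio_loss = 0.20 * overall_loss(0.5)
```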

While concerns specific to PermaNet 2.0s don't directly affect our future allocation decisions, this issue does raise more general concerns about quality control for nets. Ideally, we would have prioritized more work in this area in the past. We're planning to learn more about quality control processes and we also want to better understand how others in the malaria field are thinking about this.

We haven't explicitly modeled diminishing returns in this way. Most of the opportunities we consider are for specific, pre-defined funding gaps, so they're discrete rather than something you can scale continuously.

We don't select or structure our grants such that we necessarily think the "last dollar" or marginal dollar to that grant is 10x cash. For example: if there were a discrete $5M funding opportunity to support a program in a specific area, we might model the cost-effectiveness of that opportunity as, say, 15x overall, but there wouldn't be any particular reason to think the "last dollar" was more like 10x. Generally, when it comes to funding discrete opportunities (e.g., vaccination promotion in a certain state in Nigeria), we don't tend to think about the value of the first versus last dollar for that discrete opportunity, because we're often making a binary decision about whether to support the program in that area at all. Hope this clarifies!

Thanks to Vasco for reaching out to ask whether GiveWell has considered: 

  • e-Government procurement (benefit-to-cost ratio of 125)
  • Trade (95)
  • Land tenure security (21).

GiveWell has not looked into any of these three areas. We'd likely expect both the costs and the benefits to be fairly specific to the particular context and intervention. For example, rather than estimating the impact of reduced tariffs broadly, we'd ask something along the lines of: What specific intervention could actually lead to a reduction in tariffs? On which set of goods/services would it apply? Which sets of producers would benefit from those lower tariffs? And thus, what is the impact in terms of increased income/consumption?

We think there's a decent chance that different methodologies between Copenhagen Consensus Center and GiveWell would lead to meaningfully different bottom line estimates, based on past experience with creating our own estimates vs. looking at other published estimates, although we can't say for sure without having done the work.

Thanks SoGive for the post! We wanted to share some of GiveWell's current thinking around malaria vaccines in case it's helpful. We also wrote a report on RTS,S in 2022 here and have recommended a couple grants for vaccine rollout and research.

On a cost-per-person-reached basis, we agree ITNs and SMC are superior to either of the two WHO-approved malaria vaccines. However, we think there's less of a differential in cost-effectiveness than this post implies, for a number of reasons:

  • The difference in all-in delivery costs is probably less substantial: We think it costs roughly $6 to deliver an ITN to a household[1], and roughly $7 to provide a child with a full course of SMC. (See also impact metrics here.) We estimate the total cost of fully immunizing a child ranges from $23 to $43, depending on the choice of vaccine.[2]
  • We would approach the comparison for the reduction in malaria incidence differently: Using the effect size from the Pryce et al meta-analysis for ITNs (which consists mostly of trials that lasted one year) and comparing it with the incidence reduction observed after 12 months in RTS,S/R21 trials is not as straightforward as it seems:
    • The reduction found in Pryce et al has to be considerably updated in light of recent developments, notably new types of nets (PBO, dual active ingredient) and increased insecticide resistance. 
    • The RTS,S Clinical Partnership trial includes results from a follow-up four years after the start of the intervention and finds a reduction in clinical malaria at endpoint of 28% (3-dose group) to 36% (4-dose group). Our malaria team recommends these results rather than the earlier snapshots, as they are less noisy.
    • Our best guess is that R21 and RTS,S are similarly effective at preventing malaria. Available evidence suggests that short-term effectiveness is broadly similar between RTS,S and R21. No data has been published for the impact of R21 on malaria incidence over the long run (>20 months after the first dose). Because the short-term outcomes are broadly similar, and both vaccines employ the same mechanism to induce an immune response in the vaccinated person[3], our best guess is that four doses of R21 probably offer similar levels of protection as four doses of RTS,S over the long run.
    • We believe the apparent difference cited in the post is most likely due to the different setup between trials (Datoo 2022, unlike the RTS,S trial, was carried out in a seasonal setting). Note also that results from a phase III trial on R21 are now available (Datoo 2023 (preprint)).
  • The duration of protection differs: Due to factors such as attrition and physical decay, we currently estimate an ITN provides between 1.2 and 2.0 years of effective protection. As indicated above, we estimate that malaria vaccines offer around 30% protection over four years.

There are, of course, additional factors that need to be taken into account to get a full picture (for example, what coverage levels are achievable for each intervention?). However, our current best guess is that even with those included, nets (and SMC) will be more cost-effective than malaria vaccines - just not by an order of magnitude.


  1. ^

    Costs per child reached are much higher, roughly $15-$26.

  2. ^

    The price per dose of RTS,S was $9.30 in 2022, and estimates for R21 indicate a price per dose of $3.90. We expect that, on average, 70% of children who received three doses will also get a booster shot, which implies vaccine costs per child between $14-$37. The best costing estimates for the delivery of the doses suggest around $9 per child.

  3. ^

    “The leading malaria vaccine in development is the circumsporozoite protein (CSP)-based particle vaccine, RTS,S, which targets the pre-erythrocytic stage of Plasmodium falciparum infection. It induces modest levels of protective efficacy, thought to be mediated primarily by CSP-specific antibodies. We aimed to enhance vaccine efficacy by generating a more immunogenic CSP-based particle vaccine and therefore developed a next-generation RTS,S-like vaccine, called R21. The major improvement is that in contrast to RTS,S, R21 particles are formed from a single CSP-hepatitis B surface antigen (HBsAg) fusion protein, and this leads to a vaccine composed of a much higher proportion of CSP than in RTS,S.” Collins et al. 2017, “Abstract”
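The cost figures in footnote 2 and the $23-$43 all-in range in the body text can be cross-checked with a small calculation. The per-dose prices, the 70% booster-uptake assumption, and the ~$9 delivery cost are all taken directly from the footnote; nothing else is assumed.

```python
# Cross-check of the per-child vaccine cost range in footnote 2.
# All inputs are figures stated in the text.

DELIVERY_PER_CHILD = 9.0   # estimated delivery cost across all doses
BOOSTER_UPTAKE = 0.70      # share of 3-dose children who also get a booster

def vaccine_cost_per_child(price_per_dose):
    expected_doses = 3 + BOOSTER_UPTAKE  # expected doses per fully immunized child
    return price_per_dose * expected_doses

r21 = vaccine_cost_per_child(3.90)    # ~$14.4, low end of the $14-$37 range
rtss = vaccine_cost_per_child(9.30)   # ~$34.4, high end of the $14-$37 range

total_r21 = r21 + DELIVERY_PER_CHILD    # ~$23.4
total_rtss = rtss + DELIVERY_PER_CHILD  # ~$43.4
# Matches the $23-$43 all-in immunization range quoted in the body text.
```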

Hi Vasco,

Thanks for your comment! To clarify, our funding bar being 10x cash doesn't mean that every grant we make will be to things that are 10x cash – it means that we'll generally fund all of the programs we find that are above 10x, and not the ones that we estimate to be below 10x (with caveats that sometimes we will/won't make grants that fall on either side of that line for other reasons not captured in the CEA, e.g. learning value). You can read more on how we make funding decisions here.

Many of the grants we make are above 10x, including a fair amount in the 10-20x range (like this recent CHAI grant – we estimate delivering the program is ~17x cash, not counting the evaluation grants). Using Against Malaria Foundation (AMF) as an example, we fund net distribution campaigns in specific geographic regions that meet our 10x bar (see this grant made to AMF in January 2022 that supports net distribution campaigns in three Nigerian states). Theoretically, if we evaluated six states for net distribution campaigns and only four states met our criteria to be above 10x, we would fund only those four and not the other two, and the average cost-effectiveness across those four states would be higher than 10x.
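The state-selection logic above can be sketched as a toy example. The per-state cost-effectiveness numbers here are invented purely for illustration; only the 10x bar itself comes from the text.

```python
# Toy illustration of the AMF state-selection example: fund every state
# above the 10x bar, so the funded average necessarily exceeds 10x.
# State names and multiples are hypothetical.

BAR = 10  # funding bar, in multiples of unconditional cash transfers

state_estimates = {"A": 17, "B": 14, "C": 12, "D": 11, "E": 9, "F": 7}

funded = {s: x for s, x in state_estimates.items() if x >= BAR}
average = sum(funded.values()) / len(funded)

print(sorted(funded))  # ['A', 'B', 'C', 'D'] -> four of six states funded
print(average)         # 13.5 -> above the 10x bar, but no single "last dollar" at 10x
```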

Hi Sanjay - thanks for the close read! You're right that Figure 3 should read 95%, not 90% - we're working on correcting the figure and will update the post ASAP. Thanks again!

Hi there - thanks so much for catching this! Our malnutrition CEA is not yet public because it's still a work-in-progress. I've removed the hyperlink accordingly. Thanks again!

Hi Vasco and Caleb, we appreciate the interest in the Global Health and Development Fund! This is Isabel Arjmand responding on behalf of GiveWell.

We're grateful for the opportunity to manage this fund, and we think it's a great opportunity for donors who want to support highly cost-effective global health and development programs. We're also interested in having more in-depth conversations with Caleb and others involved in EA Funds about what the future of this fund should look like, and we’ll reach out to schedule that.

In the meantime, here are some notes on our grantmaking and how donations to the fund are currently used.

  • We expect the impact of giving to the Global Health and Development Fund (GHDF) is about the same as giving to GiveWell's All Grants Fund: both go to the most impactful opportunities we've identified (across programs and organizations), and are a good fit for donors who'd like to support the full range of our grantmaking, including higher-risk grants and research. The online description of GHDF was written before the All Grants Fund existed (it launched in 2022), and the two funds are now filling a very similar niche. Caleb, we'd love to collaborate on updating the GHDF webpage to both reflect the existence of the All Grants Fund and include more recent grant payout reports.
  • In the broadest sense, GiveWell aims to maximize impact per dollar. Cost-effectiveness is the primary driver of our grantmaking decisions. But, “overall estimated cost-effectiveness of a grant” isn't the same thing as “output of cost-effectiveness analysis spreadsheet.” (This blog post is old and not entirely reflective of our current approach, but it covers a similar topic.)
  • The numerical cost-effectiveness estimate in the spreadsheet is nearly always the most important factor in our recommendations, but not the only factor. That is, we don’t solely rely on our spreadsheet-based analysis of cost-effectiveness when making grants. 
    • We don't have an institutional position on exactly how much of the decision comes down to the spreadsheet analysis (though Elie's take of "80% plus" definitely seems reasonable!) and it varies by grant, but many of the factors we consider outside our models (e.g. qualitative factors about an organization) are in the service of making impact-oriented decisions. See this post for more discussion. 
    • For a small number of grants, the case for the grant relies heavily on factors other than expected impact of that grant per se. For example, we sometimes make exit grants in order to be a responsible funder and treat partner organizations considerately even if we think funding could be used more cost-effectively elsewhere.
    • To add something to our top charities list (vs. make a grant from the All Grants Fund or GHDF), we want a high degree of confidence in the program. See our list of additional criteria for top charities here; some of those criteria aren't proxies for cost-effectiveness, but are instead capturing whether a program provides the confidence and direct case for impact that donors expect from that designation.
    • Also, we recognize it was confusing to have GiveDirectly on our top charity list when we believed our other top charities were substantially more cost-effective. Now, our list of top charities is limited to the programs that we think can most cost-effectively use marginal funding (currently, programs we believe to have room for more funding that is at least 10x unconditional cash transfers); see the fourth bullet point here.