Hi Nick,
Thanks for noting that section of the post could have been clearer! We’ve edited the article to clarify that New Incentives went from serving 70,000 to 1.5 million children per year.
We agree that the question of extra lives saved (“indirect deaths” in our analysis) is an interesting one. Both the magnitude of the adjustment and the exact mechanisms (i.e., which other causes those deaths are coming from in the GBD bucket) are major sources of uncertainty in our model, and we don’t currently specify in our analysis which other deaths are being averted through vaccination. We may follow up with a post to share more about our work on indirect deaths in the future.
Thanks again for the feedback!
Hi Nick,
Thank you for providing this feedback! My name is Vicky, and I am a Research Associate at GiveWell, on the vaccines team. We really appreciate these kinds of rough sense checks on our work and thought this was a great approach.
Our lookback includes children enrolled across multiple years of programming (roughly covering 2020 to 2026), whereas the enrollment figures in your estimate only include a single year of program operations.
We think this difference (the assumed number of children enrolled with GiveWell funding) is the main reason the upper bound you estimated for the number of deaths averted appears significantly lower than the estimates in our lookback, although we’re still exploring other potential discrepancies between the numbers in your approach and our estimates.[6]
Thanks again for your engagement!
In 2023, New Incentives reported enrolling 1,518,904 children across 9 states. See New Incentives, 2023 Annual Report, pp. 8-9.
We estimated this by taking the total amount of funding (roughly $120 million) divided by the cost per child enrolled (roughly $19 per child enrolled) between 2020 and 2024, which implies roughly 6.3 million children enrolled. This assumes that the cost per child enrolled between 2025 and 2026 will remain similar to the historical weighted average.
The 81% in our public report is based on a single state, Bauchi, and the exact percentage differs across states depending on baseline coverage and New Incentives’ expected impact in that state. In addition, we've made some internal updates to the model since the last version of our intervention report was published.
6.3 million * (1 - 81%) = roughly 1.2 million children counterfactually vaccinated by the program.
1.2 million children counterfactually vaccinated * 5% risk of dying from causes that might be preventable through vaccination = 60,000 deaths potentially averted as an upper bound.
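To make the footnote arithmetic above easier to follow, here is a minimal script chaining the rough figures together (these are the approximate values stated in the footnotes, not precise outputs of our model):

```python
# Rough reproduction of the footnote arithmetic above, using the approximate
# figures stated in the text rather than precise model outputs.
total_funding = 120_000_000      # roughly $120 million in total funding
cost_per_child = 19              # roughly $19 per child enrolled (2020-2024 weighted average)
counterfactual_share = 1 - 0.81  # ~81% assumed vaccinated even without the program
mortality_risk = 0.05            # ~5% risk of dying from causes vaccination might prevent

children_enrolled = total_funding / cost_per_child                         # ~6.3 million
counterfactually_vaccinated = children_enrolled * counterfactual_share     # ~1.2 million
deaths_averted_upper_bound = counterfactually_vaccinated * mortality_risk  # ~60,000

print(f"Children enrolled: {children_enrolled:,.0f}")
print(f"Counterfactually vaccinated: {counterfactually_vaccinated:,.0f}")
print(f"Upper bound on deaths averted: {deaths_averted_upper_bound:,.0f}")
```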
Across states where the New Incentives program operates, we estimate that unvaccinated children face roughly a 3% to 8% chance of dying from vaccine-preventable diseases and that vaccination reduces their risk of dying by roughly 50%, which appears more in line with your estimates. For more on how we estimate these figures, see our public report here.
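As a rough illustration only (not an output of our published model), combining these ranges with the ~1.2 million counterfactually vaccinated children from the previous footnote gives:

```python
# Illustrative only: combining the stated 3%-8% baseline risk and ~50% risk
# reduction with the ~1.2 million counterfactually vaccinated children above.
# These are rough inputs, not outputs of our cost-effectiveness model.
children = 1_200_000
risk_reduction = 0.50  # vaccination roughly halves the risk of dying
for baseline_risk in (0.03, 0.08):  # ~3% to ~8% risk from vaccine-preventable disease
    deaths_averted = children * baseline_risk * risk_reduction
    print(f"Baseline risk {baseline_risk:.0%}: ~{deaths_averted:,.0f} deaths averted")
```

Under those ranges, the implied figure falls between roughly 18,000 and 48,000 deaths averted, which is consistent with treating 60,000 as an upper bound.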
Thanks for sharing your critique of our recent grants with Open Philanthropy for technical support units (TSUs). We really appreciate this thoughtful pushback! We've recommended (and are considering) a number of grants to help respond to the current situation with cuts in US foreign health assistance. So, getting critiques like yours is helpful since it encourages us to pause and consider whether we’re making the right tradeoffs in these grants. While we share some of your perspectives on the uncertainties of this work, we're still excited about our decision in this case.
While this grant’s impact is particularly uncertain, we see this as a difference in degree, not kind, compared to other grants we recommend. Most of our funding still goes to Top Charities: proven programs backed by strong evidence and our cost-effectiveness analysis. But we also recommend opportunities through the All Grants Fund. The goal of this fund is to find and fund what we believe are the highest-impact uses of marginal dollars, even when those opportunities are riskier or harder to model. This grant fits squarely within that approach. We’ve funded technical assistance from the All Grants Fund before, alongside grants that are uncertain for other reasons. For example, sometimes we're trying to generate new evidence, while at other times we're recommending high-expected-value bets even when we know we’re unlikely to get a definitive answer on their impact.
We agree that the evidence base for TSUs is thin. In general, we think it’s challenging to evaluate technical assistance programs because:
So even if the review that Nick cites had found good evidence for past TA programs, we still might not feel sure that it would generalize to the TSUs we recommended funding.
But as discussed above, we don’t consider high uncertainty to be a dealbreaker in grants funded from the All Grants Fund. We still think it can be worth funding TA (see, e.g., our maternal syphilis grants) and we’re very interested in building up our ability to learn about programs like this over time. (We’re working on a project looking back on a subset of technical assistance grants we’ve funded, but don’t have a publication date for that yet.)
While we don't have detailed theories of change, we still think it's plausible that TSUs could be impactful. We are excited about this grant because we think it could help governments to make difficult prioritization and program adaptation decisions in countries affected by US government funding freezes and cuts. We expect that the details of how this could look will vary by country and so we don’t feel confident that any particular mechanism will cash out in impact. But for example, we think TSUs could help governments to:
While we think the above examples are plausible, we agree that the theory of change for these programs is not tightly specified. However, we spoke with senior Ministry of Health officials in each country about this grant, and overall governments voiced support and demand for the proposed TSUs and were eager to have CHAI and PATH's support on this work. We also think both organizations are well-placed to support this work, as both have supported malaria-specific TSUs in the past, have teams with a specific focus on health systems and health financing, and have established relationships with the governments they’re supporting.
Budget: Thank you for sharing your estimates; this is helpful for us as we continue to update how we review budgets. We’ll share a high-level budget breakdown for this grant with our public grant write-ups (which are coming; see below!). One quick clarification (which wasn’t clear in the podcast) is that the costs for Nigeria reflect support in seven states as well as national support.
Outside of that, we think the higher budget reflects both higher salaries and a higher non-salary budget share (to account for travel, coordinating stakeholder engagement, and support from global technical teams). Our understanding is that salaries are set based on globally benchmarked salary ranges and localized equity adjustments, to account for organizational equitable-pay standards and differential cost of living across geographies. A portion of the compensation costs also reflects benefits (such as health insurance) that may be standard in each location.
Learning: We agree that we should try to learn about the impact of these grants, and we also agree with commenters and Nick’s revision that an RCT isn’t an appropriate strategy. We’ve asked CHAI and PATH to track and report on the following.
We’ll attempt to triangulate these reports by speaking with other stakeholders, though we expect we’ll still have substantial uncertainty about impact given the lack of counterfactuals.
No CEA and grant write-up: These are coming! We typically have a lag between making grants and publishing our write-ups, but we wanted to share this grant sooner because we’ve received a lot of interest in our response to the funding cuts. We expect to publish pages for CHAI and PATH (including a rough BOTEC) by the end of June.
Urgency: We see the urgency here as being specifically related to governments’ need to adapt to frozen or cut US health assistance. We heard while investigating this grant that governments were already beginning this planning process and that lighter-touch versions of the support offered by TSUs were already being provided on nights and weekends by CHAI staff in certain countries. We also think this kind of grant is inherently uncertain, and it didn’t seem likely that we’d reduce that uncertainty by spending additional time investigating. So, given the apparent demand for support at the time, and since we didn’t think waiting would lead to a better decision, we chose to recommend funding relatively quickly.
We want to share a few thoughts that might help clarify our approach.
Our research incorporates potential downsides and unintended consequences, so our recommendations take into account factors like side effects. Most (if not all) of the information in Leif Wenar’s WIRED piece is something anyone could find on our website. However, we believe some of what is published in the piece is misleading or inaccurate. Here is a lightly edited version of the material that GiveWell sent to WIRED in response to a request for comment on the piece.
Hi Ian, thanks for sharing this article! Our team wrote up some notes on how this topic intersects with our work, and I (Isabel Arjmand, writing on behalf of GiveWell) thought it might be useful to share them here.
Like the Bloomberg editorial board, we're concerned about stalling progress in the fight against malaria, but we're skeptical that quality issues with PermaNet 2.0s have influenced this progress as much as the article suggests.
All things considered, we believe that malaria nets are, and have been, highly effective in reducing malaria burden. The Against Malaria Foundation had first shared the studies highlighted above with us in 2020, and the claims in the Bloomberg article have prompted us to do some additional research.
Based on the work we've done so far, we aren't convinced that decreased net quality is primarily responsible for malaria resurging in Papua New Guinea. So far, we see this as a milder negative update on nets than the article would indicate, in part because we think these tests of net quality may not be a perfect proxy for effectiveness in reducing cases and in part because we no longer fund PermaNet 2.0s (for unrelated reasons). At the same time, renewed interest in the evidence around PermaNet 2.0 quality is a nudge for us to prioritize further work to understand net quality control in general.
More detail on the implications of this research for GiveWell's work
While we no longer fund PermaNet 2.0s because we now fund newer types of nets instead, they make up roughly 20% of the nets we've funded historically. The studies referenced in the Bloomberg article looked at nets distributed in Papua New Guinea and indicate that the post-2012 PermaNet 2.0s perform worse on certain efficacy tests. We aren't sure how well those efficacy tests serve as a proxy for malaria transmission (e.g., mosquitoes in these tests could be impaired by the exposure to insecticides even if it isn't sufficient to kill them). We're also skeptical that changes to the formulation of PermaNet 2.0s were the key driver of increased malaria cases in Papua New Guinea. During this time, we think other factors, like insecticide resistance and shifts in biting patterns, likely played a meaningful role (as highlighted in this paper). That said, we see these studies as a negative update on the effectiveness of those nets.
We did a quick back-of-the-envelope calculation (so this is more illustrative than fully baked at this point):
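The full calculation isn't reproduced here, but a sketch of its rough shape might look like the following. The ~20% share of historically funded nets comes from the discussion above; the efficacy discount is a purely hypothetical placeholder, not a figure from our analysis:

```python
# Hypothetical sketch of the shape such a back-of-the-envelope calculation
# might take. The ~20% share is from our grant history (discussed above);
# the efficacy discount is a placeholder assumption, not a figure from our
# actual analysis.
permanet_2_share = 0.20    # roughly 20% of nets we've funded historically were PermaNet 2.0s
efficacy_discount = 0.25   # hypothetical: suppose those nets were 25% less effective than modeled

impact_reduction = permanet_2_share * efficacy_discount
print(f"Implied reduction in estimated historical impact: {impact_reduction:.0%}")
# Under these placeholder assumptions, roughly a 5% downward update on
# historical impact, i.e., a modest rather than dramatic revision.
```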
While concerns specific to PermaNet 2.0s don't directly affect our future allocation decisions, this issue does raise more general concerns about quality control for nets. Ideally, we would have prioritized more work in this area in the past. We're planning to learn more about quality control processes and we also want to better understand how others in the malaria field are thinking about this.
We don't select or structure our grants such that we necessarily think the "last dollar" or marginal dollar to that grant is 10x cash. For example: if there were a discrete $5M funding opportunity to support a program in a specific area, we might model the cost-effectiveness of that opportunity as, say, 15x overall, but there wouldn't be any particular reason to think the "last dollar" was more like 10x. Generally, when it comes to funding discrete opportunities (e.g., vaccination promotion in a certain state in Nigeria), we don't tend to think about the value of the first versus last dollar for that discrete opportunity, because we're often making a binary decision about whether to support the program in that area at all. Hope this clarifies!
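As a stylized illustration of that binary framing (with hypothetical numbers, not drawn from any actual grant):

```python
# Stylized illustration of the binary decision described above, using
# hypothetical numbers: we model the whole discrete opportunity, not a
# marginal "last dollar" within it.
opportunity_cost = 5_000_000  # a discrete $5M funding opportunity
modeled_multiple = 15         # cost-effectiveness modeled at ~15x cash transfers
funding_bar = 10              # an example funding bar of 10x cash

# All-or-nothing decision for the discrete opportunity; no separate estimate
# is made for the first versus last dollar within it.
if modeled_multiple >= funding_bar:
    print(f"Recommend funding the full ${opportunity_cost:,} opportunity (~{modeled_multiple}x)")
else:
    print("Pass on this opportunity")
```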
Thanks to Vasco for reaching out to ask whether GiveWell has considered:
GiveWell has not looked into any of these three areas. We'd likely expect both the costs and the benefits to be fairly specific to the particular context and intervention. For example, rather than estimating the impact of reduced tariffs broadly, we'd ask something along the lines of: What is the intervention that could actually lead to, e.g., a reduction in tariffs? On which set of goods/services would it apply? Which sets of producers would benefit from those lower tariffs? And thus, what is the impact in terms of increased income/consumption?
We think there's a decent chance that the different methodologies of the Copenhagen Consensus Center and GiveWell would lead to meaningfully different bottom-line estimates, based on our past experience creating our own estimates versus reviewing other published estimates, although we can't say for sure without having done the work.
Hi there! Thanks for your interest in GiveWell. We don't currently offer informational calls, but we'd encourage you to check out our open roles and apply to any that seem like a good fit for your background and interests.