Hi Vasco, thanks for your engagement! I have put together some responses to your questions/comments below. Please let me know if I missed anything or you have further questions.
> The Centre for Exploratory Altruism Research (CEARCH) estimated GWWC's marginal multiplier to be 17.6 % (= 2.18*10^6/(12.4*10^6)) of GWWC's multiplier. This suggests GWWC's marginal multiplier from 2023 to 2024 was 1.06 (= 0.176*6), such that donating to GWWC over that period was roughly as cost-effective as to GiveWell's top charities. A marginal multiplier of 1 may look bad, but is actually optimal in the sense GWWC should spend more (less) for a marginal multiplier above (below) 1.
I would actually expect our marginal multiplier to be much closer to our average multiplier than the CEARCH method implies. Most importantly, I expect most of our marginal resources are dedicated to identifying and executing on scalable pledge growth strategies, and I think this work, in expectation, provides a pretty strong multiplier. By comparison, our average multiplier includes some major fixed costs (e.g., related to running our donation platform).
It's also worth noting that pledge growth accelerated between 2023 and 2024, such that our average multiplier for 2024 was roughly 50% higher than that for 2023. In 2025, pledge growth is currently exceeding 2024 growth (by this time in 2024 we had ~280 new 🔸10% Pledges; so far in 2025 we have ~370), although our costs are also higher.
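To make the arithmetic above concrete, here is a rough sketch of the multiplier figures being discussed. The ratio comes from the CEARCH estimate quoted in your comment, the average multiplier of ~6 is implied by the quoted "1.06 = 0.176 × 6" rather than a figure we are asserting, and the 2024 line simply applies the ~50% increase mentioned above as an illustration:

```python
# Rough sketch of the multiplier arithmetic discussed above. The 2.18e6 and 12.4e6
# figures are from the quoted CEARCH estimate; the ~6 average multiplier is implied
# by the quoted "1.06 = 0.176 * 6"; the 2024 line is purely illustrative.

cearch_ratio = 2.18e6 / 12.4e6     # ~0.176: CEARCH's marginal share of the multiplier
avg_multiplier_2023 = 6.0          # implied by the quoted calculation
marginal_multiplier_2023 = cearch_ratio * avg_multiplier_2023  # ~1.06

# If the 2024 average multiplier is roughly 50% higher (as noted above), the same
# CEARCH-style calculation would give a correspondingly higher marginal estimate.
avg_multiplier_2024 = 1.5 * avg_multiplier_2023
marginal_multiplier_2024 = cearch_ratio * avg_multiplier_2024  # ~1.58

print(round(marginal_multiplier_2023, 2), round(marginal_multiplier_2024, 2))
```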
> So I wonder whether the information below on GWWC's website is somewhat misleading.
I don't think I agree that the information on the website is misleading, seeing as it just states the number of people who have taken the pledge. I think it's important to bear in mind that the pledge has never required pledgers to record their donations with GWWC, and we know that many of our most engaged pledgers do not record their donations.
> I guess pledges starting in later years are less valuable, such that you are overestimating your impact by not controlling for the year the pledge started.
The regression you suggest is something we have considered, but we don't think it is an obvious improvement over our approach of taking the mean over the most recent pledge years. While there might be an effect of the year the pledge started on average first-year pledge donations, we do not think this trend is linear. For instance, the 2021 cohort had the second-highest average first-year donations across all cohorts, and the five cohorts with the lowest average first-year donations were 2010, 2017, 2018, 2016 and 2012. Ultimately, this is an empirical question, and my prediction is that our averaging method will be more predictive of first-year pledge donations for the 2024 cohort than the regression. If you are interested in performing this analysis yourself, you can find total inflation-adjusted pledge donations by pledge cohort and year of pledge in a table in this document.
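If you do want to run that comparison, here is a rough sketch of the two approaches. The cohort numbers below are placeholders rather than our data; the real inflation-adjusted figures by cohort and year are in the linked document:

```python
# Sketch of the two prediction approaches discussed above, using placeholder numbers
# (the real inflation-adjusted figures by cohort and year are in the linked document).
import numpy as np

# Hypothetical average first-year donations by pledge cohort (year -> USD)
cohorts = {2019: 2400, 2020: 2900, 2021: 3600, 2022: 2700, 2023: 3100}
years = np.array(list(cohorts.keys()), dtype=float)
donations = np.array(list(cohorts.values()), dtype=float)

# Approach 1: mean over the most recent pledge years (our current method)
recent_mean = donations[-3:].mean()

# Approach 2: linear regression of average first-year donations on pledge start year
slope, intercept = np.polyfit(years, donations, deg=1)
regression_prediction = slope * 2024 + intercept

print(f"mean of recent cohorts:   {recent_mean:,.0f}")
print(f"regression extrapolation: {regression_prediction:,.0f}")
# Whichever lands closer to the realised 2024 cohort average is the better predictor.
```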
> Have you considered retiring The Trial Pledge? You estimated 96 % of your impact came from The 10 % Pledge.
We currently aren't considering retiring the 🔹Trial Pledge. While the 🔹Trial Pledge contributes a relatively small fraction of our pledge impact in terms of direct donation value, we believe its main value add comes from 🔹Trial Pledgers 'upgrading' to 🔸10% Pledges. For example, roughly 10% of those who have taken a 🔹Trial Pledge have since taken a 🔸10% Pledge, and we are currently exploring ways to improve conversion rates further. We have also seen some evidence that retention may be stronger for 🔸10% Pledges that follow 🔹Trial Pledges than for other 🔸10% Pledges.
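To illustrate why we see the upgrade path as the main source of value, here is a hypothetical sketch; the ~10% upgrade rate is the figure mentioned above, while both dollar values are placeholders rather than GWWC estimates:

```python
# Hypothetical sketch of the upgrade logic described above. The 10% upgrade rate is
# the figure from this comment; the two dollar values are placeholders, not estimates.
upgrade_rate = 0.10                  # share of Trial Pledgers who later take a 10% Pledge
value_of_10_percent_pledge = 20_000  # placeholder value of a 10% Pledge ($)
direct_trial_donations = 500         # placeholder direct donations during a Trial Pledge ($)

expected_value_per_trial_pledge = (
    direct_trial_donations + upgrade_rate * value_of_10_percent_pledge
)
print(expected_value_per_trial_pledge)  # most of the value comes from the upgrade term
```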
Hi Rosie, thanks for sharing your thoughts on this! It’s great to get the chance to clarify our decision-making process so it’s more transparent, in particular so readers can make their own judgement as to whether or not they agree with our reasoning about FP GHDF. Some of my thoughts on each of the points you raise:
Thanks for the comment! While we think it could be correct that the quality of evaluations differs between our recommendations in different cause areas, my view is that the evaluating evaluators project applies pressure to increase the strength of evaluations across all cause areas. In our evaluations we communicate areas where we think evaluators can improve. Because we are evaluating multiple options in each cause area, if in future evaluations we find that one of our evaluators has improved and another has not, the latter is less likely to be recommended, which gives both evaluators an incentive to improve their processes over time.
Thanks for the comment, Matt! We are very grateful for the transparent and constructive engagement we have received from you and Rosie throughout our evaluation process.
I did want to flag that you are correct in anticipating that we do not agree with the three differences in perspective that you note here, nor do we think our approach implies that we do:
1) We do not think a grant can only be identified as cost-effective in expectation if a lot of time is spent making an unbiased, precise estimate of cost-effectiveness. As mentioned in the report, we think a rougher approach to BOTECing, intended to demonstrate that opportunities meet a certain bar under conservative assumptions, is consistent with a GWWC recommendation. When comparing the depth of GiveWell’s and FP’s BOTECs, we explicitly address this:
[This difference] is also consistent with FP’s stated approach to creating conservative BOTECs with the minimum function of demonstrating opportunities to be robustly 10x GiveDirectly cost-effectiveness. As such, this did not negatively affect our view of the usefulness of FPs BOTECs for their evaluations.
Our concern is that, based on our three spot checks, it is not clear that FP GHDF BOTECs do demonstrate that marginal grants in expectation surpass 10x GiveDirectly under conservative assumptions (a hypothetical sketch of the kind of threshold check we have in mind follows the numbered points below).
2) We would not claim that CEAs should be the singular determinant of whether a grant is made. However, considering that CEAs seem decisive in GHDF grant decisions (in that grants are only made from the GHDF when they are shown by BOTEC to be >10x GiveDirectly in expectation), we think it is fair to assess these as important decision-making tools for the FP GHDF as we have done here.
3) We do not think maximising calculated EV for each individual grant is the only way to maximise cost-effectiveness over the span of a grantmaking program. We agree some risk-neutral grantmaking strategies should be tolerant of some errors and ‘misses’, which is why we checked three grants rather than only one. Even after finding issues in the first grant, we were still open to relying on the FP GHDF if these issues seemed likely to occur only to a limited extent, but in our view their frequency across the three grants we checked was too high to currently justify a recommendation.
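To give a concrete sense of the kind of threshold check we have in mind when we talk about conservative BOTECs clearing a 10x GiveDirectly bar, here is a purely hypothetical sketch; none of the parameter values describe an actual FP GHDF grant or FP's actual model:

```python
# Purely hypothetical sketch of a conservative BOTEC-style threshold check.
# None of these numbers describe an actual FP GHDF grant; they only illustrate the
# structure of asking "is this robustly >= 10x GiveDirectly under conservative inputs?"

GIVEDIRECTLY_BAR = 10.0  # required multiple of GiveDirectly's cost-effectiveness

def botec_multiple(cost_usd, people_reached, benefit_per_person_gd_usd, uncertainty_discount):
    """Estimated multiple of GiveDirectly under the stated (conservative) inputs.

    benefit_per_person_gd_usd expresses each person's benefit as the number of
    dollars given to GiveDirectly that would produce an equivalent benefit.
    """
    total_benefit_gd_usd = people_reached * benefit_per_person_gd_usd * uncertainty_discount
    return total_benefit_gd_usd / cost_usd

# Conservative (low-end) inputs for a hypothetical grant
multiple = botec_multiple(
    cost_usd=100_000,
    people_reached=40_000,
    benefit_per_person_gd_usd=50,  # placeholder
    uncertainty_discount=0.6,      # haircut for implementation and evidence risk
)

print(f"estimated multiple of GiveDirectly: {multiple:.1f}x")
print("clears the bar" if multiple >= GIVEDIRECTLY_BAR else "does not clear the bar")
```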
I hope these clarifications make it clear that we do think evaluators other than GiveWell (including FP GHDF) could pass our bar, without requiring GiveWell levels of certainty about individual grants.
Thanks for the comment! I first want to highlight that in our report we are specifically talking about institutional diet change interventions that reduce animal product consumption by replacing institutional (e.g., school) meals containing animal products with meals that don’t. This approach, which constitutes the majority of diet change programs that ACE MG funds, doesn’t necessarily involve convincing individuals to make conscious changes to their consumption habits.
Our understanding of a common view among the experts we consulted is that diet change interventions are generally not competitive with promising welfare asks in terms of cost-effectiveness, but that some of the most promising institutional diet change interventions plausibly could be. For example, I think some of our experts would have considered the grant ACE MG made to the Plant-Based Universities campaign worth funding. Reasons for this include:
As noted in the report, not all experts agreed that the institutional diet change interventions were on average competitive with the welfare interventions ACE MG funded. However, as you noted, this probably has a fairly limited impact on how cost-effective ACE MG is on the margin, not least because these grants made up a small fraction of ACE MG’s 2024 funding.
Hi Vasco, thanks for the comment! I should clarify that we are saying we expect the marginal cost-effectiveness of impact-focused evaluators to change more slowly than the marginal cost-effectiveness of charities. All else equal, we think size is plausibly a useful heuristic. However, because we are looking at the margin, both the program itself and its funding situation can change. THL hasn’t been evaluated on how it allocates funding on the margin or starts new programs, only on the quality of its marginal programs at the time of evaluation, so there is a less robust signal there than for EA AWF, which we did evaluate on the basis of how it allocates funding on the margin. I hope that makes sense!
Hi Ramiro, thanks for sharing your concerns. In my response to Vasco’s comment, I explain why I don’t think our communications around the number of people who have signed the pledge are misleading. As for whether we take more credit than is due for pledge donations, I want to flag two important ways (among others) we try to ensure we aren’t overestimating our impact:
If this is still your preference, you can resign from your pledge using this form. I hope I have understood your concerns, but please let me know if you have any further questions.