
Edit: GiveWell's Response at the Bottom

A past event has shown how reputation damage to one EA entity can affect the entire movement's credibility, and therefore its funding and influence. While GiveWell's evaluation process is thorough, it largely relies on charity-provided data. I propose that they consider implementing independent verification methods.

Reliance on coverage surveys

GiveWell performs no independent verification of their charities' claims when drawing their conclusions. However, they do apply a downward adjustment.

This feels lacking.

Getting numbers that are closer to reality not only improves the accuracy of the cost-effectiveness calculations but also reduces the risk of adding a new entry to their mistakes list.

Suggestions to shore it up

GiveWell is an important cornerstone of the movement, so it is worth exploring whether more could be done to preserve its reputation.


GiveWell's Response

GiveWell would also like to improve in this area, and some work has already been done. This year, our cross-cutting research subteam, which focuses on high-priority questions affecting all intervention areas, plans to improve our coverage survey estimates. Examples of work we’ve done to address our concerns with coverage survey estimates include:

  • Our research team is working on a project to identify, connect with, and potentially test different external evaluator organizations to make it easier for grantmakers to identify well-suited evaluation partners for the questions they’re trying to answer.
  • We recently approved a grant to Busara Center to conduct a qualitative survey of actors in Helen Keller Intl’s vitamin A supplementation delivery, including caregivers of children who receive vitamin A supplementation.
  • We made a grant to IDinsight to review and provide feedback on Against Malaria Foundation’s monitoring process for their 2025 campaign in Democratic Republic of the Congo.
  • For New Incentives, we mainly rely on the randomized controlled trial of their work to estimate coverage, which was run by an independent evaluator, IDinsight. Only recently have we begun to give weight to their coverage surveys.
  • We funded a Tufts University study to compare its findings to Evidence Action’s internal monitoring and evaluation for their Dispensers for Safe Water program, which led us to update our coverage data and consider funding an external coverage survey.
  • Our grant to CHAI to strengthen and support a community-based tuberculosis household contact management program includes IDinsight to evaluate the program through a large-scale cluster randomized control trial (cRCT) and process evaluation.

Comments

Independent verification seems good, but mainly for object-level epistemic reasons rather than reputational. 

Transparency is only a means to reputation. The world is built on trust and faith in systems, and EA is no different.

I believe more people would be alarmed by the lack of independent vetting than by the nominal cost-effectiveness numbers being inaccurate themselves. It feels like there are perverse incentives at play.

Epistemologically speaking, it's just not a good idea to have opinions relying on the conclusions of a single organization, no matter how trustworthy it is. 

EA in general does not have very strong mechanisms for incentivising fact-checking: the use of independent evaluators seems like a good idea. 

Just wanted to note that this take relies on "GiveWell performs no independent verification of their charity's claims to draw their conclusions" being true, and it struck me as surprising (and hence doubtful). Does anyone have a good citation for this / opinions on it? 

GiveWell's Carley Moor from their philanthropic outreach team contacted me, and we had a conversation a few weeks ago which prompted this post.

Among other things, I asked her about independent verification. The short answer seems to be that there is none, with the caveat that they adjust. The spreadsheets I linked were sourced from her.

They do fund at least one meta-charity that helps improve monitoring & evaluation at these charities.

I asked her to either post her response email here or let me post it verbatim, and am waiting to hear from her next week. Being cautious lest I misrepresent them.

Thanks for the details! Keen to see their response if Carley OKs it. 

I hope so! Apparently the concept was well received by the team.

I love these suggestions and have wondered about this for some time. Independent surveyors are a really good idea, not only for impact data but also for programmatic data. Although finding truly independent surveyors is harder than you might think in relatively small NGO ecosystems.

I don't really understand what you mean by "Creating funding opportunities for third-party monitoring organisations". Can you explain?

I also would have liked to see a couple more paragraphs explaining the background and reasoning, although good on you for putting up the draft rather than leaving it sitting in the Word doc :D.

I read it as "providing enough funding for independent auditors of charities to exist and be financially sustainable"

This is what I meant.

Appreciate the feedback! Can you elaborate on what you mean by impact data and programmatic data?

I agree I could have made a better case on the reputation part.

It is news to me that this isn't already the case. It seems like an obvious positive, both for the potentially higher ratings (not being adjusted downward) and as instructive for the organisations themselves.
