A recent article in the Stanford Social Innovation Review, "The Problem with Randomized Controlled Trials," raises some good questions.
RCTs have been central to how the Effective Altruism community evaluates evidence and directs funding. Many of the most influential organizations in global health and development have built their strategies around interventions that have been rigorously tested through randomized trials. This has brought enormous clarity and accountability to the sector. But as the EA community explores broader cause areas and complex systems, it may be useful to reflect on the limitations of RCTs themselves.
Below are a few ideas from the article that resonated with my own experience working in development and philanthropy.
1. Evidence as Culture - RCTs have become more than just a research tool. They now function as a kind of cultural rule for funding decisions: only fund what has been proven to work. This has strengthened the link between evidence and money, but it also shapes what kinds of problems get studied and who receives support. It can unintentionally narrow our field of vision to what can be tested, rather than what might matter most.
2. What RCTs Miss - Many nonprofits focus on outcomes that are hard to measure in a controlled trial: building trust, strengthening local governance, or increasing civic participation. These contributions are often at the heart of lasting social change, yet they rarely fit neatly into an RCT framework.
3. The False Certainty Problem - When programs have modest but real effects, detecting them through an RCT requires very large sample sizes. Most organizations cannot reach that scale, which can lead to null results even when an intervention works. This creates a risk of underestimating impact simply because of statistical limitations, not because the idea itself failed.
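To put a rough number on "very large sample sizes": the standard normal-approximation power formula shows how quickly the required sample grows as effects shrink. This is a generic illustration of that formula, not a calculation from the article; the effect sizes chosen (Cohen's d of 0.1 and 0.5) are my own examples.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate participants needed per arm for a two-sample comparison
    of means, via the standard normal-approximation power formula:
    n = 2 * (z_alpha/2 + z_beta)^2 / d^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A modest but real effect (d = 0.1) needs roughly 25x the sample
# of a large effect (d = 0.5) at the same significance and power.
print(n_per_arm(0.1))  # → 1570 per arm
print(n_per_arm(0.5))  # → 63 per arm
```

At over 1,500 participants per arm for a small effect, it is easy to see why many organizations end up with null results that reflect statistical power rather than program failure.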
4. Agility - RCTs take time, resources, and stability. They are valuable for understanding clear causal effects but can slow down organizations that need to adapt quickly. In fast-changing environments, waiting for multi-year results can make programs less responsive and less innovative.
5. Learning Rather Than Proving - The authors suggest a few complementary tools that allow for continuous improvement. Approaches like A/B testing, rapid-cycle evaluations, or plan-do-study-act loops can help organizations test and refine ideas in real time. Evidence becomes part of an ongoing learning process rather than a single verdict.
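As a sketch of what the lighter-weight end of this toolkit looks like, here is a minimal A/B comparison using a two-proportion z-test. The scenario and numbers are hypothetical (the article does not prescribe a specific method); the point is that a nonprofit can run this kind of check on operational data in an afternoon, without a multi-year trial.

```python
from math import sqrt
from statistics import NormalDist

def ab_test(successes_a: int, n_a: int, successes_b: int, n_b: int):
    """Two-proportion z-test: does variant B's success rate differ from A's?
    Returns (observed difference, two-sided p-value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical example: two outreach messages, 1,000 recipients each.
diff, p = ab_test(120, 1000, 150, 1000)
print(f"lift: {diff:+.1%}, p = {p:.3f}")
```

A result like this is not proof in the RCT sense; it is one cycle of evidence that feeds the next iteration of the program, which is exactly the "learning rather than proving" stance the authors describe.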