The Allure of the Measurable: Are We Trading Impact for Certainty?
At its heart, Effective Altruism is a commitment to using evidence and reason to do the most good. The engine of this effectiveness has always been measurement. By quantifying our impact, collecting data, analyzing outcomes, and comparing interventions, we have managed to stay rational, avoid ineffective dead ends, and make a tangible difference in the world. Measurement is the flag under which the EA ship sails, the very principle that distinguishes our approach.
However, a tool that is instrumental to our success can also become a critical weakness.
In this post, I will argue that the EA community's deep-seated reliance on measurement, counting, and monetization may inadvertently cause us to overlook superior opportunities. We risk dismissing or ignoring the most promising routes to impact simply because their effectiveness cannot be easily quantified.
The Central Analogy: The Bednet and The Health System
Let's consider a classic example. For years, one of the most celebrated conclusions in EA has been the cost-effectiveness of distributing insecticide-treated bednets to prevent malaria. Let's examine why.
- Intervention A: The Bednet. The appeal of the bednet is its legibility. The math is beautifully straightforward. We can calculate the cost per net, measure the baseline rate of malaria, distribute the nets, and measure the new, lower rate of infection. The causal link is direct, the feedback loop is short, and we can arrive at a clear, satisfying number: X dollars spent to avert one case of malaria.
- Intervention B: The Health System. Now, consider an alternative: a long-term project to reform the underlying healthcare system. This could involve improving public health education, draining swamps where mosquitoes breed, or increasing the number of local clinics. The goal is to address the root cause. A successful project wouldn't just reduce malaria; it would likely reduce the incidence of the next disease as well. The potential upside is enormous.
But how do we measure this? The causal chains are a tangled web. The timeframe is years. The success is not just a lower malaria rate, but a more resilient and healthy society. There is no simple, clean number to put in a spreadsheet.
Faced with these two options, our current tools and culture will almost always favor the bednet. It is clear, proven, and measurable. But is it truly the most effective thing we can do? Or is it just the most effective thing we can count?
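To make the bednet's legibility concrete, here is the kind of back-of-the-envelope calculation that makes Intervention A so satisfying to evaluate. Every number below is hypothetical and chosen only for illustration; none comes from real cost-effectiveness research.

```python
# Hypothetical figures for illustration only -- not real GiveWell or AMF data.
cost_per_net = 5.00       # dollars to buy and distribute one net
people_covered = 1.8      # average people protected per net
baseline_rate = 0.30      # malaria cases per person per year, without nets
protected_rate = 0.15     # cases per person per year, with nets

# The appeal: two multiplications and a division yield a clean number.
cases_averted_per_net = people_covered * (baseline_rate - protected_rate)
cost_per_case_averted = cost_per_net / cases_averted_per_net

print(f"Cases averted per net: {cases_averted_per_net:.2f}")
print(f"Cost per case averted: ${cost_per_case_averted:.2f}")
```

Note what the sketch shows: the entire evaluation fits in four inputs and two arithmetic operations. No analogous five-line script exists for "reform the healthcare system," and that asymmetry, not the interventions' true relative impact, is what the spreadsheet rewards.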
When Numbers Deceive
Measurement appears to be the most objective way to identify the best option, but our trust in it can be deceptive. As I've explored in a previous post, a purely numerical calculation like Expected Value (EV) can mask the true risk of failure, making an intervention that fails 99.9% of the time look like the rational choice.
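The arithmetic behind that claim is worth spelling out. In the sketch below, both interventions and all of their numbers are hypothetical; the point is only that a raw EV comparison can prefer a near-certain failure.

```python
# Two hypothetical interventions, to show how raw EV can hide failure risk.
# Intervention A: proven, saves 100 lives with certainty.
# Intervention B: succeeds 0.1% of the time, saves 200,000 lives if it does.
p_success = 0.001
payoff = 200_000

ev_a = 100                    # expected lives saved by A
ev_b = p_success * payoff     # expected lives saved by B: 0.001 * 200,000 = 200

# By expected value alone, B dominates A...
print(ev_b > ev_a)
# ...even though B fails 99.9% of the time, saving no one at all.
print(f"Probability B saves zero lives: {1 - p_success:.1%}")
```

The EV calculation is not wrong as arithmetic; the problem is what it leaves out. A single number collapses the distribution of outcomes, and the 99.9%-failure mode disappears into the average.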
This problem extends beyond complex calculations. Our intense focus on what can be measured may cause us to systematically undervalue interventions that are simply hard to quantify, making them seem less effective even when their potential impact is far greater.
Addressing Common Objections
- Objection 1: "If we move away from measurement, we lose our rigor and just waste money." This isn't a call to abandon rigor, but to expand it. True rationality requires us to see the world clearly, and if our measurement tools are acting as blinders, we must supplement them. We need to develop more sophisticated frameworks that incorporate qualitative evidence and expert judgment without being "disconnected from the world."
- Objection 2: "It's better to fund a proven intervention that saves 100 lives for sure than an unproven one that might save 100,000." This highlights a potential flaw in our evaluation process. If our system consistently overlooks opportunities with a vastly higher potential impact because they carry uncertainty, then the system itself may not be fully rational. A truly effective system must be capable of identifying and pursuing these high-reward opportunities, even if it means tolerating risk.
- Objection 3: "Systemic change is too complex, too political, and too slow for philanthropy to solve." If the goal of Effective Altruism is to solve the world's most pressing problems, we cannot shy away from them simply because they are difficult. To ignore the root causes of suffering because they are complex is a choice. Our ambition for impact must be as large as the problems we hope to solve.
A Path Forward: Potential Solutions
- A Portfolio Approach: We should consider formally splitting funding into distinct portfolios. For example, a fund might allocate 70% of its resources to proven, highly measurable interventions, while dedicating 30% to a "high-risk, high-reward" fund for systemic change. This would allow us to continue supporting reliable interventions while also creating space for potentially transformative work.
- Develop Better Evaluation Tools: We need to invest in creating new analytical tools that go beyond simple numbers and help us see the bigger picture. This means getting better at root cause analysis. When we see the malaria problem, we should model the effects of fixing the healthcare system. We can use qualitative data, historical case studies, and expert forecasting to evaluate the potential impact of these larger, systemic interventions.
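The 70/30 portfolio split can be sketched numerically. All figures here are hypothetical, including the fund size, the cost per life of the proven intervention, and the success probability of the speculative one; the sketch only shows how the two tranches would be compared.

```python
# Hypothetical 70/30 portfolio, for illustration only.
budget = 1_000_000                 # total fund size, dollars

proven_share = 0.70                # proven, measurable interventions
speculative_share = 0.30           # high-risk, high-reward systemic work

proven_cost_per_life = 5_000       # assumed cost to save one life, with certainty
spec_p_success = 0.05              # assumed chance the systemic project succeeds
spec_lives_if_success = 50_000     # assumed upside if it does succeed

# Proven tranche: impact scales linearly and predictably with money.
proven_lives = (budget * proven_share) / proven_cost_per_life

# Speculative tranche: assume the 30% tranche fully funds one systemic project,
# so its contribution is an expectation, not a guarantee.
spec_ev_lives = spec_p_success * spec_lives_if_success

print(f"Proven tranche:      {proven_lives:.0f} lives saved, near-certain")
print(f"Speculative tranche: {spec_ev_lives:.0f} lives saved in expectation")
```

Under these made-up assumptions the speculative tranche dominates in expectation while the proven tranche guarantees a floor, which is precisely the trade the portfolio structure is designed to make explicit rather than hide.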
Conclusion
The tools of quantification have served our community immeasurably well. But we must ensure our tools do not become our masters. We must be willing to look up from the spreadsheet and into the messy, complex reality of the world, ready to tackle the greatest challenges, not just the most measurable ones.
My question to the community is this: How do we hold onto the rigor that defines us, without letting our tools define the limits of our moral ambition?
Reply to Comments
My post is a philosophical critique of EA's epistemic culture. The core argument is that our community has a built-in preference for the easily measurable, and I'm exploring the downsides of that bias.
On your points about proof (1 & 3):
The demand for pre-existing, legible proof for a class of interventions highlights the exact paradox I'm concerned about. If we require a high standard of quantitative evidence before trying something new, we may never run the experiments needed to generate that evidence. This creates a catch-22 that can lock us out of potentially transformative work.
On your point about existing orgs (2):
You're right, these organizations do this work. My argument isn't that systemic change is absent from EA, but that it's on the periphery. It's not central to the movement's core narrative or funding strategy in the way direct interventions are. The question is why this type of work isn't more foundational to our approach across all cause areas.
On your final question (the "root cause"):
Good catch on my "root cause" phrasing—it was imprecise. A better term is "foundational systems" or "underlying structures." My hypothesis isn't about a single cause, but that improving these foundational systems acts as an "impact multiplier" that makes all direct interventions more effective. The core problem remains that the impact of strengthening these systems is diffuse and hard to measure.