This post challenges a foundational assumption in some longtermist calculations: that a small probability of an astronomical outcome is directly comparable to a certain, smaller outcome. I will argue that the common practice of equating these scenarios using a simple Expected Value (EV) formula is logically flawed, and that a more robust framework is required for us to be truly effective.
Let's begin with the core dilemma, framed as a choice between two interventions:
- Intervention A: The Certain Outcome
  - Value (V): 1 life saved
  - Probability of Success, P(S): 1.0 (100%)
  - Expected Value (EV): 1.0 life saved
- Intervention B: The Probabilistic Outcome
  - Value (V): 1,000 lives saved
  - Probability of Success, P(S): 0.001 (0.1%)
  - Expected Value (EV): 1.0 life saved
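To make the equivalence concrete, here is a minimal sketch of the simple EV model applied to both interventions. The function and numbers simply mirror the figures above; nothing in it is novel:

```python
# Simple expected value: EV = P(S) x V
def expected_value(p_success: float, value: float) -> float:
    return p_success * value

ev_a = expected_value(1.0, 1)       # Intervention A: 1 life, certain
ev_b = expected_value(0.001, 1000)  # Intervention B: 1,000 lives, 0.1% chance

print(ev_a, ev_b)  # 1.0 1.0 -- indistinguishable under the simple model
```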
While their simple expected values are identical, these two options are not equivalent. A rational analysis reveals critical distinctions that the EV calculation completely masks.
The Flaw of Inefficiency and the Dominant Outcome
The most immediate flaw is one of resource efficiency. Intervention A succeeds with certainty, so it converts resources into outcomes with perfect efficiency; there is no possibility of waste.
Intervention B’s dominant, most probable outcome is failure. With a 99.9% probability, the result is zero lives saved and a 100% loss of all invested resources. A rational framework for resource allocation cannot be indifferent to an action that results in total waste in 999 out of 1,000 expected instances. This near-certain inefficiency points to a deeper error: the mathematical model itself is being misapplied.
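A short Monte Carlo sketch (illustrative only, using no numbers beyond those above) makes the lopsided outcome distribution visible:

```python
import random

def simulate_intervention_b(n_runs: int = 100_000, p: float = 0.001) -> None:
    """Tally one-shot outcomes of Intervention B across simulated runs."""
    successes = sum(random.random() < p for _ in range(n_runs))
    failures = n_runs - successes
    print(f"total loss, 0 lives saved:  {failures / n_runs:.1%}")
    print(f"success, 1,000 lives saved: {successes / n_runs:.1%}")

simulate_intervention_b()
# Typical output: ~99.9% total loss, ~0.1% success
```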
The Category Error of Probability
The simple formula, EV = Value × Probability, is only reliable under specific conditions. Its reliability is highest for aleatoric probability: the statistical frequency of an event over many repeated trials, like a die roll. The intuition that the formula works in an "infinite universe" is correct: if we could attempt to save the 1,000 people millions of times, the inefficiency would average itself out.
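Repetition is exactly what rescues the simple formula, and exactly what a one-shot problem denies us. A brief sketch (illustrative, reusing Intervention B's numbers) shows the average converging to the EV only as attempts accumulate:

```python
import random

def mean_lives_saved(n_attempts: int, p: float = 0.001, v: int = 1000) -> float:
    """Average lives saved per attempt when the gamble can be repeated."""
    total = sum(v for _ in range(n_attempts) if random.random() < p)
    return total / n_attempts

for n in (1, 1_000, 1_000_000):
    print(n, mean_lives_saved(n))
# A single attempt almost always yields 0.0; a million attempts
# converge toward the expected value of 1.0 lives saved per attempt.
```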
However, many real-world problems, especially existential risks, are one-shot scenarios. We only have one Earth to save. This is the domain of epistemic probability, which measures our degree of belief or uncertainty about a unique event, not its long-term frequency.
It is a fundamental category error to treat a high-confidence, evidence-based probability (Intervention A) and a low-confidence, speculative one (Intervention B) as interchangeable variables. Because we face a single, high-stakes event, we cannot ignore the dominant probability of failure. We must confront the fact that in choosing Intervention B, the near-certain outcome is the complete loss of our resources for zero gain.
A Call for a Rational Readjustment
These arguments are not merely philosophical points; they have direct and practical implications for how we allocate our finite resources. Continuing to rely on a simple model that equates certain outcomes with highly speculative ones is a demonstrable flaw in our collective methodology. It is an inefficient and fragile strategy.
Therefore, I propose the following actions for the community:
- For Researchers and Theorists: We must prioritize the development of more robust decision models. We need to move beyond EV = P × V and formally incorporate epistemic uncertainty. What would a "Certainty-Weighted Expected Value" (CWEV) model look like? How can we quantify the reliability of our own probability estimates? (A sketch of one candidate model follows this list.)
- For Grantmakers and Donors: We must reconsider how we balance our 'moral portfolios'. It is not rational to be indifferent between an intervention with a 100% success rate and one with a 99.9% failure rate. We need to consciously decide how much of our capital should be allocated to the robust certainty of near-term aid versus highly speculative x-risk gambles.
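To make the research question above concrete, here is one hypothetical shape a CWEV model could take. Everything in it is an assumption for illustration: the multiplicative discount rule and the confidence weights are invented, not derived:

```python
def cwev(value: float, p_success: float, confidence: float) -> float:
    """Hypothetical Certainty-Weighted Expected Value (CWEV).

    'confidence' in [0, 1] expresses how reliable we judge the probability
    estimate itself to be (epistemic uncertainty about our own numbers).
    Multiplicative discounting is one arbitrary choice among many.
    """
    return value * p_success * confidence

# Intervention A: evidence-based probability, so high assumed confidence
print(cwev(1, 1.0, confidence=0.99))      # 0.99
# Intervention B: speculative probability, so low assumed confidence
print(cwev(1000, 0.001, confidence=0.1))  # 0.1
```

Under these assumed weights the two interventions no longer tie; whether this particular functional form is defensible is precisely the open question for researchers.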
Effective Altruism's core promise is to use reason and evidence to do the most good. If our foundational calculations are flawed, we risk failing at our most basic goal. Let us apply our core principles of rigor and self-correction to fix this.
A limiting case makes the conundrum vivid: preventing the deaths of a number of people approaching infinity, with a probability of success approaching zero (p→0). Anxiety over infinitesimal risks, even when attached to an astronomically vast scale of impact, is illogical. The sheer improbability of an event should not be offset by the magnitude of its potential harm.
Consider a lottery ticket that costs one dollar and offers a 10^(−99) chance of winning 10^100. Superficially, an EV calculation suggests this is a favorable transaction: the expected value of the ticket is $10. Yet any discerning individual intuitively recognizes that playing such a lottery will, with near certainty, lose them their dollar. The probability of winning is infinitesimal, and short of buying an exceptionally large quantity of tickets, a loss is virtually guaranteed.
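The arithmetic is easy to check exactly (Python's fractions avoid floating-point underflow at these magnitudes):

```python
from fractions import Fraction

prize = 10**100
p_win = Fraction(1, 10**99)

print(prize * p_win)  # exactly 10: the $1 ticket "looks" favorable on paper

# With n tickets and n * p << 1, P(at least one win) is approximately n * p.
n = 10**9  # a billion dollars' worth of tickets
print(float(n * p_win))  # ~1e-90: still all but guaranteed to win nothing
```

Even after a billion dollars of guaranteed spending, the chance of ever seeing the prize remains unimaginably small; the near-certain outcome is total loss, exactly as with Intervention B.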