This post challenges a foundational assumption in some longtermist calculations: that a small probability of an astronomical outcome is directly comparable to a certain, smaller outcome. I will argue that the common practice of equating these scenarios using a simple Expected Value (EV) formula is logically flawed, and that a more robust framework is required for us to be truly effective.
Let's begin with the core dilemma, framed as a choice between two interventions:
- Intervention A: The Certain Outcome
  - Value (V): 1 life saved
  - Probability of Success [P(S)]: 1.0 (100%)
  - Expected Value (EV): 1.0 life saved
- Intervention B: The Probabilistic Outcome
  - Value (V): 1,000 lives saved
  - Probability of Success [P(S)]: 0.001 (0.1%)
  - Expected Value (EV): 1.0 life saved
While their simple expected values are identical, these two options are not equivalent. A rational analysis reveals critical distinctions that the EV calculation completely masks.
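To make the asymmetry concrete, here is a minimal sketch (my own illustration, using only the numbers from the list above) that computes the two expected values and the probability that each intervention saves nobody at all:

```python
# Numbers from the dilemma above; the code is purely illustrative.
p_a, v_a = 1.0, 1          # Intervention A: certain success, 1 life saved
p_b, v_b = 0.001, 1000     # Intervention B: 0.1% success, 1,000 lives saved

print("EV(A) =", p_a * v_a)                  # 1.0
print("EV(B) =", p_b * v_b)                  # 1.0

print("P(zero lives saved | A) =", 1 - p_a)  # 0.0
print("P(zero lives saved | B) =", 1 - p_b)  # 0.999
```

The expected values match, but only one of the two distributions puts 99.9% of its mass on the outcome "nothing is saved and the resources are gone."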
The Flaw of Inefficiency and the Dominant Outcome
The most immediate flaw is one of resource efficiency. Intervention A has a 100% probability of success and therefore converts resources into outcomes with perfect efficiency. There is no possibility of waste.
Intervention B’s dominant, most probable outcome is failure. With a 99.9% probability, the result is zero lives saved and a 100% loss of all invested resources. A rational framework for resource allocation cannot be indifferent to an action that results in total waste in 999 out of 1,000 expected instances. This near-certain inefficiency points to a deeper error: the mathematical model itself is being misapplied.
The Category Error of Probability
The simple formula, EV = Value × Probability, is only reliable under specific conditions. Its reliability is highest when dealing with aleatoric probability—the statistical frequency of an event over many repeated trials, like a die roll. The intuition that this works in an "infinite universe" is correct: if we could attempt to save the 1,000 people millions of times, the problem of inefficiency would average itself out.
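A quick simulation (my own sketch, not part of the original argument) shows why repetition rescues the simple formula: averaged over enough independent trials, Intervention B really does deliver about one life saved per attempt.

```python
import random

random.seed(0)

p_success = 0.001        # Intervention B's probability of success
lives_if_success = 1000  # lives saved when it succeeds

for n_trials in (10, 1_000, 1_000_000):
    successes = sum(1 for _ in range(n_trials) if random.random() < p_success)
    avg = successes * lives_if_success / n_trials
    print(f"{n_trials:>9} trials: average lives saved per trial = {avg:.3f}")
```

With ten trials the average is almost always zero; only at very large trial counts does it settle near 1.0 per attempt.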
However, many real-world problems, especially existential risks, are one-shot scenarios. We only have one Earth to save. This is the domain of epistemic probability, which measures our degree of belief or uncertainty about a unique event, not its long-term frequency.
It is a fundamental category error to treat a high-confidence, evidence-based probability (Intervention A) and a low-confidence, speculative one (Intervention B) as interchangeable variables. Because we face a single, high-stakes event, we cannot ignore the dominant probability of failure. We must confront the fact that in choosing Intervention B, the near-certain outcome is the complete loss of our resources for zero gain.
A Call for a Rational Readjustment
These arguments are not merely philosophical points; they have direct and practical implications for how we allocate our finite resources. Continuing to rely on a simple model that equates certain outcomes with highly speculative ones is a demonstrable flaw in our collective methodology. It is an inefficient and fragile strategy.
Therefore, I propose the following actions for the community:
- For Researchers and Theorists: We must prioritize the development of more robust decision models. We need to move beyond EV = P × V and formally incorporate epistemic uncertainty. What would a "Certainty-Weighted Expected Value" (CWEV) model look like? How can we quantify the reliability of our own probability estimates? (One toy possibility is sketched after this list.)
- For Grantmakers and Donors: We must reconsider how we balance our 'moral portfolios'. It is not rational to be indifferent between an intervention with a 100% success rate and one with a 99.9% failure rate. We need to consciously decide how much of our capital should be allocated to the robust certainty of near-term aid versus highly speculative x-risk gambles.
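As a starting point for the research question above, here is one toy construction of what a "Certainty-Weighted Expected Value" might look like. This is purely an illustrative sketch of my own, not an established model: it assumes we can attach a confidence score in [0, 1] to each probability estimate and shrink the estimate toward a pessimistic prior when confidence is low.

```python
def cwev(value, p_estimate, confidence, p_prior=0.0):
    """Hypothetical 'Certainty-Weighted Expected Value' sketch.

    confidence in [0, 1] expresses how much evidential weight backs
    p_estimate; low confidence shrinks the estimate toward p_prior,
    a deliberately pessimistic prior for speculative interventions.
    """
    p_adjusted = confidence * p_estimate + (1 - confidence) * p_prior
    return p_adjusted * value

# Intervention A: a well-evidenced, near-certain success rate.
print(cwev(value=1, p_estimate=1.0, confidence=0.95))      # ≈ 0.95

# Intervention B: a speculative 0.1% estimate with little backing.
print(cwev(value=1000, p_estimate=0.001, confidence=0.2))  # ≈ 0.2
```

Under this (entirely illustrative) weighting the two interventions are no longer scored as equivalent; the hard part, as the bullet above notes, is how to quantify the reliability of our own probability estimates, i.e. the confidence parameter.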
Effective Altruism's core promise is to use reason and evidence to do the most good. If our foundational calculations are flawed, we risk failing at our most basic goal. Let us apply our core principles of rigor and self-correction to fix this.
A healthy 30-year-old, uninvolved in crime or drugs, has less than a 0.1% chance of death. Do you think it is irrational for them to buy some cheap life insurance to protect their family in the small fraction of worlds where they do die?
That analogy is flawed because it equates a pooled risk portfolio with a singular risk.
An insurance company isn't taking a 0.1% gamble. It's managing a statistical certainty across a portfolio of millions of policies, where the law of large numbers makes the 0.1% claim rate predictable and the business profitable.
An existential risk is a one-shot scenario for humanity. We are the single participant. There is no larger portfolio to average out the 99.9% chance of failure.
The analogy fails because joining a predictable risk pool is not logically equivalent to taking on an unpooled, singular gamble.
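The difference between the pooled and the singular case is easy to see numerically. A small sketch of my own, assuming a 0.1% annual death rate and a pool of one million policyholders:

```python
import random

random.seed(1)

p_death = 0.001        # assumed annual probability of a claim
pool_size = 1_000_000  # assumed number of policyholders

# The insurer's view: simulate a few years of claims across the whole pool.
for year in range(3):
    claims = sum(1 for _ in range(pool_size) if random.random() < p_death)
    print(f"year {year}: {claims} claims "
          f"(rate {claims / pool_size:.4%}, expected {p_death:.4%})")
```

Across a pool of a million policies the realized claim rate stays within a hair of 0.1% year after year; there is no analogous pooling available when the "policy" is the survival of the only civilization we have.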
I didn't ask about the insurance company's perspective - I asked about the individual buying the life insurance. For them, buying the insurance is a 0.1% gamble; they are a single person who can only die once.
So let's view it from the individual's perspective. I think looking at that perspective reveals a fundamental difference in the types of value we're comparing. The analogy breaks down because it incorrectly assumes that the value of money and the value of lives are structured in the same way.
1. The Non-Linear Utility of Money (The Insurance Case)
For an individual, money has non-linear utility. The first $1,000 you earn is life-changing, while an extra $1,000 when you're a millionaire is not.
Losing your last dollar is catastrophic—a state of ruin. Therefore, it is perfectly rational to pay a small premium, which represents a tiny certain loss of your least valuable money, to prevent a small chance of a catastrophic loss of your most valuable money. The insurance decision is rational precisely because of this non-linear value.
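That reasoning can be made explicit with a concave utility function. The sketch below uses log utility; the wealth, premium, and payout figures are numbers I've made up purely for illustration, not anything from the discussion:

```python
import math

p_death = 0.001        # annual probability of death (assumed)
wealth = 100_000       # family's wealth if the breadwinner survives (assumed)
ruin = 100             # family's wealth after an uninsured death (assumed)
payout = 500_000       # life-insurance payout (assumed)
premium = 600          # premium, slightly above the actuarially fair 500

def expected_log_utility(survive_wealth, death_wealth):
    # Log utility: each extra dollar matters less the richer you are.
    return (1 - p_death) * math.log(survive_wealth) + p_death * math.log(death_wealth)

uninsured = expected_log_utility(wealth, ruin)
insured = expected_log_utility(wealth - premium, ruin - premium + payout)

print(f"expected log utility, uninsured: {uninsured:.6f}")
print(f"expected log utility, insured:   {insured:.6f}")
```

With these illustrative numbers the insured option has a lower expected dollar value (it gives up the $100 loading on the premium) but a higher expected utility, because log utility makes the near-ruin outcome disproportionately bad. That is exactly the non-linearity the insurance decision trades on.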
2. The Linear Value of Saving Lives (The EA Dilemma)
In contrast, the moral value of saving lives is treated as linear in these calculations. The first life saved is just as valuable as the 1,000th. There is no "diminishing return" on a human life.
Because the value is linear, we don't need to worry about utility curves. We can compare the outcomes directly. My argument is that when comparing these linear values, a guaranteed outcome (saving 1 life) is rationally preferable to an action with a 99.9% chance of achieving nothing.
The insurance analogy relies on the non-linear utility of money to be persuasive. Since that feature doesn't exist in our dilemma about saving lives, the analogy is flawed and doesn't challenge the original point.
But there may well be a large portfolio of actions we can take to reduce existential risk. In most cases there are many shots we can take.
That's an interesting point about the portfolio of actions. You're suggesting that while the existential risk itself is a one-shot scenario for humanity, we can take multiple 'shots' at reducing it: if a single, highly uncertain intervention isn't reliable, we can average out the risk by attempting many such interventions.
However, this shifts the problem rather than solving it. Even if we could, in theory, take a thousand 'shots' at reducing x-risk, we still lack a robust framework for comparing the aggregated expected value of these highly speculative, low-probability interventions against more certain, smaller-scale interventions. My original argument is precisely that our current EV tools are inadequate for such comparisons when epistemic uncertainty is high, and a 'portfolio' approach to a truly one-shot, high-stakes scenario may still be an insufficient or unproven solution.
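For what the 'many shots' intuition would actually require, here is a small calculation of my own, assuming each speculative intervention independently succeeds with probability 0.001:

```python
p = 0.001  # assumed success probability of one speculative intervention

for n in (1, 100, 1_000, 5_000):
    p_any = 1 - (1 - p) ** n            # chance that at least one shot lands
    print(f"{n:>5} independent shots: P(at least one success) = {p_any:.3f}")
```

Even under the generous assumption of full independence, it takes on the order of a thousand genuinely independent shots before success becomes more likely than not; with correlated interventions, or far smaller per-shot probabilities, the portfolio averages out much more slowly, if at all.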
A clearer illustration of this conundrum: preventing the deaths of a number of individuals approaching infinity, with a probability of success approaching zero (p→0). Worrying about infinitesimal risks, even when they are attached to an astronomically vast scale of impact, is not rational. The sheer improbability of an event should not be offset simply by the magnitude of its potential harm.
Consider a scenario where, for a single dollar, one is offered a lottery ticket with a 10^(−99) probability of winning 10^100. Superficially, an Expected Value (EV) calculation suggests this is a favorable transaction. Any discerning individual, however, recognizes that playing this lottery will, with near certainty, simply lose them the dollar: the probability of winning is infinitesimally small, and unless an exceptionally large quantity of tickets is purchased, a loss is virtually guaranteed.
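The arithmetic behind that intuition, as a sketch of my own (the log1p/expm1 calls are only there to avoid floating-point underflow):

```python
import math

p_win = 1e-99   # probability that a single ticket wins
prize = 1e100   # nominal prize
price = 1.0     # dollars per ticket

print("EV per ticket:", p_win * prize - price)   # +9 dollars 'on paper'

# Probability of at least one win after buying n tickets at $1 each.
for n in (1, 10**9, 10**12):
    p_any_win = -math.expm1(n * math.log1p(-p_win))
    print(f"{n:.0e} tickets: P(any win) ≈ {p_any_win:.1e}")
```

The ticket is "worth" nine dollars on paper, yet even a trillion-dollar ticket purchase leaves the chance of ever seeing a payout at roughly 10^(−87): a loss in essentially every world we could find ourselves in.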