## Abstract

Various decision theories share a troubling implication. They imply that, for any finite amount of value, it would be better to wager it all for a vanishingly small probability of some greater value. Counterintuitive as it might be, this *fanaticism* has seemingly compelling independent arguments in its favour. In this paper, I consider perhaps the most *prima facie* compelling such argument: an *Egyptology argument* (an analogue of the Egyptology argument from population ethics). I show that, despite recent objections from Russell (2023) and Goodsell (2021), the argument's premises can be justified and defended, and the argument itself remains compelling.

## Fanaticism

Consider a small probability, perhaps one in 1,000.^{[1]} Which is better: to save the life of one person for sure; or to have a probability of one in 1,000 of saving some very large number of lives, many more than 1,000? Or consider an even smaller probability, perhaps one in one million. Which is better: to save that one life for sure; or to have a probability of one in one million of saving some vast number of lives? At some point, as the probability of success gets closer and closer to zero, it may seem *fanatical* to claim that the latter option is better, *even if* arbitrarily many lives would be saved if it succeeded.

Nonetheless, fanatical verdicts follow from various widely accepted theories of instrumental (moral) betterness. For instance, *expected* (moral) *value theory* says that one option is better than another if and only if it has a greater probability-weighted sum of (moral) value—a greater *expected value*. Combine this with any theory of moral betterness that attributes equal value to each additional life saved, and the expected value of the low-probability option can always be greater: saving one life for sure won’t be as good as saving *N* lives with some tiny probability ε, no matter how tiny ε is, so long as *N* is great enough.
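To make the comparison explicit, here is the arithmetic behind that claim (a sketch of my own; the symbol *v* for the value of one life saved is not in the original text, though *N* and ε are):

```latex
% Expected value of the sure option: one life saved with certainty.
EV(\text{sure}) = 1 \cdot v = v

% Expected value of the risky option: N lives saved with probability \epsilon.
EV(\text{risky}) = \epsilon \cdot N v

% The risky option has greater expected value whenever
\epsilon N v > v \iff N > 1/\epsilon

% e.g. with \epsilon = 10^{-6}, any N > 10^{6} lives suffices.
```

Since *N* can be chosen after ε is fixed, no positive probability is small enough to escape the verdict.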

Other theories besides expected value theory can lead to the same verdict. Take expected *utility* theory, on which options are compared according to their probability-weighted sum of *utility*, where utility can be any increasing^{[2]} function of moral value. One key difference from expected value theory is that additional units of moral value can count for less and less utility, so expected utility theory can exhibit risk aversion. But it would still uphold the fanatical verdict above: as long as the chosen utility function is unbounded (with respect to the number of lives saved), expected utility theory will agree that saving some vast number *N* of lives with probability ε is better than saving one life for sure, no matter how tiny ε is, so long as *N* is great enough. Likewise, Buchak’s (2013) *risk-weighted* expected utility theory, on which each option’s expected utility is further transformed to account for its riskiness using a ‘risk’ function, will say the same for many possible risk functions.^{[3]}
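The role of unboundedness can be put the same way. With *u* an increasing utility function of lives saved (my own notation, chosen to match the sketch above):

```latex
% The risky option beats the sure option in expected utility whenever
\epsilon \, u(N) > u(1)

% If u is unbounded above, then for any fixed \epsilon > 0 there is some N with
u(N) > u(1)/\epsilon,

% so the fanatical verdict survives however concave u is.
% A bounded u blocks it: e.g. with u(n) = 1 - 2^{-n}, we have u(N) < 1 for all N,
% so \epsilon \, u(N) < \epsilon \le u(1) whenever \epsilon \le u(1) = 1/2.
```

This is why boundedness of the utility function, rather than risk aversion as such, is the feature that decides whether a theory is fanatical.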


^{[1]} Throughout, I will remain agnostic on exactly how probability is interpreted. Are the probabilities I speak of the agent’s subjective degrees of belief, or the probabilities that an idealised agent with the same evidence would assign, or the objective physical chances of particular outcomes? For my purposes, it does not matter.

^{[2]} Strictly speaking, a utility function that sometimes decreases or remains level might be considered compatible with expected utility theory. But such a utility function would lead to violations of Stochastic Dominance (see Section 2), and so would be implausible.

^{[3]} Specifically, it will say so for any risk function that is increasing and continuous.