I wrote an introduction to Expected Value Fanaticism for Utilitarianism.net. Suppose there were a magical potion that almost certainly kills you immediately but, with a tiny probability, offers you (and your family and friends) an extremely long, happy life. If the probability of a happy life were one in a billion and the resulting life lasted one trillion years, would you drink this potion? According to Expected Value Fanaticism, you should accept gambles like that.
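For readers who want the arithmetic spelled out, here is a minimal sketch of the expected value comparison. The 50-year baseline of ordinary life and the assumption that value scales linearly with years of happy life are illustrative choices of mine, not part of the thought experiment itself:

```python
# A minimal sketch of the expected-value arithmetic behind the potion example.
# The 50-year baseline and the "value = years of happy life" assumption are
# illustrative, not part of the original thought experiment.

p_success = 1e-9       # one-in-a-billion chance the potion works
happy_years = 1e12     # a trillion years of happy life if it does
baseline_years = 50    # assumed remaining years of ordinary life if you abstain

ev_drink = p_success * happy_years   # 1,000 expected happy years
ev_abstain = baseline_years          # 50 years for sure

print(f"EV of drinking:   {ev_drink:,.0f} expected years")
print(f"EV of abstaining: {ev_abstain:,.0f} years")
print("Fanaticism says drink." if ev_drink > ev_abstain else "Fanaticism says abstain.")
```

On these (made-up) numbers, the expected value of drinking is twenty times that of abstaining, even though you almost certainly die.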
This view may seem, frankly, crazy, but there are some very good arguments in its favor. Basically, if you reject Expected Value Fanaticism, you'll end up violating some very plausible principles. You would have to believe, for example, that what happens on faraway exoplanets, or what happened thousands of years ago, could influence what we ought to do here and now, even when we cannot affect those distant events. This seems absurd: we don't need a telescope to decide what we morally ought to do.
However, the story is a bit more complicated than that... Well, read the article! Here's the link: https://utilitarianism.net/gue.../expected-value-fanaticism/
I don't have a big issue with much of this philosophically; I'm just extremely skeptical about the validity of most small percentages.
My intuition is that small percentages are often greatly overestimated, therefore giving far higher expected values than is really the case. My inclination is that where uncertainty is greater, numbers are often exaggerated. Examples where I have this intuition are animal welfare and existential risk. This seems like it should be testable in some cases. Although it might seem like a strange thing to say, I think conservative small percentages are often not conservative enough.
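A rough illustration of the point, using made-up numbers rather than real estimates: because expected value is linear in the probability, overstating a small probability inflates the expected value by exactly the same factor, which can be enough to flip a decision.

```python
# Sketch: how an exaggerated small probability inflates expected value.
# All numbers here are made up for illustration.

payoff = 1e12    # value if the long shot pays off
baseline = 50    # value of the safe alternative

for stated_p, true_p in [(1e-9, 1e-9), (1e-9, 1e-10), (1e-9, 1e-11)]:
    stated_ev = stated_p * payoff
    true_ev = true_p * payoff
    print(f"stated p={stated_p:.0e}: EV={stated_ev:>7,.1f} | "
          f"true p={true_p:.0e}: EV={true_ev:>7,.1f} | "
          f"still beats baseline? {true_ev > baseline}")
```

In this toy case, a 100-fold overestimate of the probability is the difference between the gamble looking clearly worthwhile and clearly not.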
Often I think that Pascal's mugging is a mugging as much because the "low" probability stated is actually far higher than reality as because the probability is low per se.
I don't have any data to back this up. Obviously we overweight many low probabilities psychologically (things like the probability of aeroplane crashes), and I feel like it's the same in calculations. This has almost certainly been written about before on the forum or in published papers, but I couldn't find it on a quick look.
I am also skeptical of small percentages but more so because I think that the kinds of probability estimates that are close to 0 or 1 tend to be a lot more uncertain (perhaps because they’re based on rare or unprecedented events that have only been observed a few times).
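To make that concrete, here is a sketch using the standard "rule of three" approximation (not anything specific to the cases above): if an event has never occurred in n independent trials, the point estimate of its probability is 0, yet the approximate 95% upper bound is still about 3/n, so small samples leave a wide range of possible values.

```python
# Sketch: why small probabilities estimated from little data are so uncertain.
# If an event has never been observed in n independent trials, the "rule of
# three" gives an approximate 95% upper bound of 3/n on its probability;
# the point estimate of 0 hides a wide range of plausible values.

for n in [10, 100, 1_000, 10_000]:
    upper_bound = 3 / n
    print(f"0 events in {n:>6} trials -> 95% upper bound ~ {upper_bound:.4f}")
```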
I’m no statistician, but I’m not sure that we can say that small percentages tend to be exaggerated… For one, I recall reading in Superforecasting that there’s evidence that people tend to underestimate the likelihood of rare events and overestimate the likelihood of common ones...