Epistemic status: just a thought I had, nothing too rigorous.
The reason longtermism is so enticing (to me at least) is that the existence of so many future lives hangs in the balance right now. Bringing 10^52 people (or whatever the real number turns out to be) into existence just seems like a pretty good deed to me.
This hinges on the belief that utility scales linearly with the number of QALYs, so that twice as many happy people are also twice as morally valuable. My belief in this was recently shaken by the following thought experiment:
***
You are a traveling EA on a trip to St. Petersburg. In a dark alley, you meet a demon with the ability to create universes and a serious gambling addiction. He says he was about to create a universe with 10 happy people, but he hands you three fair dice and offers you a bet: if you throw the dice and they all come up 6, he refrains from creating the universe. If you roll anything else, he doubles the number of people in the universe he will create.
You do the expected value calculation and figure out that by throwing the dice you create 696.8 QALYs in expectation. You take the bet and congratulate yourself on your ethical decision.
The good deed done, and the demon now committed to creating 20 happy people, he offers you the same bet again: roll the three dice, and he creates no universe on 6-6-6 and doubles the population on anything else. He tells you he will keep offering the same bet. You do your calculations and throw the dice again and again until, eventually, you roll all sixes and the demon vanishes in a cloud of sulfurous mist, without having to create any universe at all, leaving you to wonder whether you should have done anything differently.
***
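The arithmetic behind the story can be sketched in a few lines. This is a minimal sketch, assuming 70 QALYs per happy person (a figure not given in the post): under that assumption, one bet looks clearly worth taking in expectation, while the expectation of taking every bet diverges even though the chance that any universe survives shrinks toward zero.

```python
from fractions import Fraction

P_WIPEOUT = Fraction(1, 216)      # all three fair dice come up 6
P_DOUBLE = 1 - P_WIPEOUT          # any other roll: the population doubles
QALYS_PER_PERSON = 70             # assumption: not stated in the post

def expected_people_after(n_bets, start=10):
    """Expected population after accepting n bets in a row."""
    # Each accepted bet multiplies the expectation by 2 * (215/216).
    return start * (2 * P_DOUBLE) ** n_bets

def survival_probability(n_bets):
    """Probability that no roll so far was 6-6-6 (i.e. a universe still exists)."""
    return P_DOUBLE ** n_bets

# One bet: expected QALYs of taking vs. declining.
decline = 10 * QALYS_PER_PERSON                             # 700 QALYs guaranteed
take = float(expected_people_after(1)) * QALYS_PER_PERSON   # ~1393.5 QALYs
print(take - decline)   # ~693.5 expected QALYs gained, so the bet looks good

# Many bets: the expectation explodes while the survival probability vanishes.
for n in (10, 100, 1000):
    print(n, float(expected_people_after(n)), float(survival_probability(n)))
```

In the limit, the survival probability goes to 0, so "always take the bet" yields nothing with probability 1 even though its expected value grows without bound.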
There are a few ways to weasel out of the demon's bet. You could say that the strategy "always take the demon's bet" has an expected value of 0 QALYs, and so you should go with some tactic like "take the first 20 bets, then call it a day." But I think that if you refuse a bet, you should be able to reject it without reference to which bets you have taken in the past or will take in the future.
I think the only consistent way to refuse the demon's bets at some point is to have a bounded utility function. You might think it would be enough to have a utility function that scales not linearly with the number of QALYs but, say, logarithmically. But in that case the demon can simply offer to double the amount of utility instead of the number of QALYs, and we are back in the paradox. At some point you have to be able to say: "There is no possible universe that is twice as good as the one you have already promised me." So at some point, adding more happy people to the universe must have a negligible ethical effect. And once we accept that this must happen at some point, how confident are we that 10^52 people are that much better than 8 billion?
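A bounded utility function does let you refuse at some point. Here is a toy sketch, where the cap of 1000 and the particular saturating form u(n) = 1000·n/(n+100) are purely illustrative assumptions: once the population is large enough that doubling it adds less utility than the 1/216 wipeout risk destroys, the bet's expected utility turns negative and a consistent agent declines.

```python
U_MAX = 1000.0  # assumption: an illustrative cap on achievable utility

def bounded_utility(people):
    """Saturating utility: approaches U_MAX as the population grows."""
    return U_MAX * people / (people + 100.0)

def ev_of_bet(people):
    """Expected utility gain from taking one more bet vs. standing pat."""
    take = (215 / 216) * bounded_utility(2 * people)  # wipeout contributes 0
    return take - bounded_utility(people)

# Keep taking bets (population doubles) until the next bet has negative EV.
n = 10
while ev_of_bet(n) > 0:
    n *= 2
print(n)  # first doubled population at which this bounded agent declines
```

For this particular utility function, the expected gain changes sign at a population of 10,700, so the agent who started at 10 and kept doubling declines once the population reaches 20,480.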
Overall I am still pretty confused about this subject and would love to hear more arguments/perspectives.
To me it seems the main concern is with expected value maximization, not with longtermism. Rather than being rationally required to take the action with the highest expected value, I think you are probably only rationally required not to take any action resulting in a world that is worse than an alternative at every percentile of the probability distribution. So in this case you would not have to take the bet, because at the 0.1st percentile of the probability distribution, taking the bet has a lower value than the status quo, while at the 99th percentile it has a higher value.
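The percentile comparison can be sketched directly for the demon's first bet (a minimal sketch; the quantile helper and the discrete-distribution representation are mine, not from the comment):

```python
def quantile(dist, q):
    """Value at quantile q of a discrete distribution given as [(value, prob), ...]."""
    cum = 0.0
    for value, prob in sorted(dist):
        cum += prob
        if cum >= q:
            return value
    return sorted(dist)[-1][0]

status_quo = [(10, 1.0)]                    # demon creates 10 happy people for sure
take_bet = [(0, 1 / 216), (20, 215 / 216)]  # triple six: nothing; otherwise: 20

for q in (0.001, 0.99):                     # 0.1st and 99th percentiles
    print(q, quantile(status_quo, q), quantile(take_bet, q))
# At the 0.1st percentile the bet gives 0 < 10; at the 99th it gives 20 > 10.
# Neither option is worse at every percentile, so neither is rationally required.
```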
In practice, this still ends up looking approximately like expected value maximization for most EA decisions because of the huge background uncertainty about what the world will look like. (My current understanding is that you can think of this as an extended version of "if everyone in EA took risky high EV options, then the aggregate result will pretty consistently/with low risk be near the total expected value")
See this episode of the 80,000 hours podcast for a good description of this "stochastic dominance" framework: https://80000hours.org/podcast/episodes/christian-tarsney-future-bias-fanaticism/.
Thank you for pointing me to that and getting me to think critically about it. I think I agree with all the axioms.
I think this is misleading. The VNM theorem only says that there exists a function u such that a rational agent's actions maximize E[u]. But u does not have to be "their value function."
Consider a scenario in which there are 3 possible outcomes: A1 = enormous suffering, A2 = neutral, A3 = mild joy. Let's say m...