I work on AI alignment. Right now, I'm using ideas from decision theory to design and train safer artificial agents.
I also do work in ethics, focusing on the moral importance of future generations.
You can email me at thornley@mit.edu.
Oh yep, nice point, though note that - e.g. - there are uncountably many reals between 1,000,000 and 1,000,001, and yet it still seems correct (at least speaking loosely) to say that 1,000,001 is only a tiny bit bigger than 1,000,000.
But in any case, we can modify the argument to say that S* feels only a tiny bit worse than S. Or instead we can modify it so that S is the temperature in degrees Celsius of a fire that causes suffering that can just about be outweighed, and S* is the temperature in degrees Celsius of a fire that causes suffering that just about can't be outweighed.
Nice post! Here's an argument that extreme suffering can always be outweighed.
Suppose you have a choice between:
(S+G): The most intense suffering S that can be outweighed, plus a population G that's good enough to outweigh it, so that S+G is good overall: better than an empty population.
(S*+nG): The least intense suffering S* that can't be outweighed, plus a population that's n times better than the good population G.
If extreme suffering can't be outweighed, we're required to choose S+G over S*+nG, no matter how big n is. But that seems implausible. S* is only a tiny bit worse than S, and n could be enormous. To make the implication seem more implausible, we can imagine that the improvement nG comes about by extending the lives of an enormous number of people who died early in G, or by removing (non-extreme) suffering from the lives of an enormous number of people who suffer intensely (but non-extremely) in G.
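To spell out the step from 'can't be outweighed' to 'required to choose S+G', here's a minimal sketch (my formalization, not anything claimed in the original post), writing $\succ$ for 'better than', $\succeq$ for 'at least as good as', and $\varnothing$ for the empty population:

$$S + G \succ \varnothing \quad \text{(S can be outweighed by G)}$$
$$\varnothing \succeq S^* + nG \ \text{ for every } n \quad \text{(S* can't be outweighed)}$$
$$\therefore \quad S + G \succ S^* + nG \ \text{ for every } n \quad \text{(by Transitivity)}$$

Together with the plausible principle that we're required to choose the better of two options (all else equal), that gives the requirement to choose S+G over S*+nG, however big n is.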
We can also make things more difficult by introducing risk into the case (in this sort of way). Suppose now that the choice is between:
(S+G): The most intense suffering S that can be outweighed, plus a population G that's good enough to outweigh it, so that S+G is good overall: better than an empty population.
(Risky S*+nG): With probability 1−ε, the most intense suffering S that can be outweighed. With probability ε, the least intense suffering S* that can't be outweighed. Plus (with certainty) a population that's n times better than the good population G.
We've amended the case so that the move from S+G to Risky S*+nG now involves just an ε increase in the probability of a tiny increase in suffering (from S to S*). As before, the move also improves the lives of those in the good population G by as much as you like. Plausibly, each ε increase (for very small ε) in the probability of getting S* instead of S (together with an n-fold increase in the quality of G, for very large n) is an improvement. Then with Transitivity, we get the result that S*+nG is better than S+G, and therefore that extreme suffering can always be outweighed.
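To lay the chain out explicitly (a sketch on my part; the exact bookkeeping of how much G improves at each step is just for illustration): set ε = 1/N and define lotteries $L_0, L_1, \dots, L_N$, where $L_k$ yields S* with probability $k\varepsilon$ and S with probability $1 - k\varepsilon$, plus (with certainty) a good population $G_k$, where $G_0 = G$ and each $G_{k+1}$ is n times better than $G_k$. Then:

$$L_0 = S + G, \qquad L_N = S^* + G_N \ \text{(S* with certainty)}$$
$$L_{k+1} \succ L_k \ \text{for each } k \ \Rightarrow \ L_N \succ L_0 \quad \text{(by Transitivity)}$$

So a guaranteed S*, accompanied by a good enough population, comes out better than S+G, which is good overall: extreme suffering can be outweighed after all.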
I think the view that extreme suffering can't always be outweighed has some counterintuitive prudential implications too. It implies that we should basically never think about how happy our choices would make us. Almost always, we should think only about how to minimize our expected quantities of extreme suffering. Even when we're - e.g. - choosing between chocolate and vanilla at the ice cream shop, we should first determine which choice minimizes our expected quantity of extreme suffering. Only if we conclude that these quantities are exactly the same should we even consider which of chocolate and vanilla tastes nicer. That seems counterintuitive to me.
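One way to make that implication precise (my gloss): write $X_A$ for the quantity of extreme suffering brought about by option A, and $H_A$ for the happiness it brings about. The view ranks options lexically:

$$A \succ B \iff \mathbb{E}[X_A] < \mathbb{E}[X_B], \ \text{ or } \ \big(\mathbb{E}[X_A] = \mathbb{E}[X_B] \text{ and } \mathbb{E}[H_A] > \mathbb{E}[H_B]\big)$$

Happiness gets to break ties only when expected extreme suffering comes out exactly equal, which it almost never will.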
Note also that you can accept outweighability and still believe that extreme suffering is really bad. You could - e.g. - think that 1 second of a cluster headache can only be outweighed by trillions upon trillions of years of bliss. That would give you all the same practical implications without the theoretical trouble.
Nice point, but I think it comes at a serious cost.
To see why, consider a different case. In X, ten billion people live awful lives. In Y, those same ten billion people live wonderful lives. Clearly, Y is much better than X.
Now consider instead Y*, which is exactly the same as Y except that we also add one extra person, also with a wonderful life. As before, Y* is much better than X for the original ten billion people. If we say that the value of adding the extra person is undefined, and that this undefined value renders the value of the whole change from X to Y* undefined, we get the implausible result that Y* is not better than X. Given plausible principles linking betterness and moral requirements, we get the result that we're permitted to choose X over Y*. That seems very implausible, and so it counts against the claim that adding people results in undefined comparisons.
You should read the post! Section 4.1.1 makes the move that you suggest (rescuing PAVs by de-emphasising axiology). Section 5 then presents arguments against PAVs that don't appeal to axiology.
I think my objections still work if we 'go anonymous' and remove direct information about personal identity across different options. We just need to add some extra detail. Let the new version of One-Shot Non-Identity be as follows. You have a choice between: (1) combining some pair of gametes A, which will eventually result in the existence of a person with welfare 1, and (2) combining some other pair of gametes B, which will eventually result in the existence of a person with welfare 100.
The new version of Expanded Non-Identity is then the same as the above, except it also has available option (3): combine the pair of gametes A and the pair of gametes B, which will eventually result in the existence of two people each with welfare 10.
Narrow views claim that each option is permissible in One-Shot Non-Identity. What should they say about Expanded Non-Identity? The same trilemma applies. It seems implausible to say that (1) is permissible, because (3) looks better. It seems implausible to say that (3) is permissible, because (2) looks better. And if only (2) is permissible, then narrow views imply the implausible-seeming Losers Can Dislodge Winners.
Now consider wide views and Two-Shot Non-Identity, again redescribed in terms of combining pairs of gametes A and B. You first choose whether to combine pair A (which would eventually result in the existence of a person with welfare 1), and then later choose whether to combine pair B (which would eventually result in the existence of a person with welfare 100). Suppose that you know your predicament in advance, and suppose that you choose to combine pair A. Then (your view implies) you're required to combine pair B, even if that choice occurs many decades later, and even though you wouldn't be required to combine pair B if you hadn't (many decades earlier) chosen to combine pair A. Now consider a slightly different case: you first choose whether to combine pair C (which would eventually result in the existence of a person with welfare 101), and then later choose whether to combine pair B. Suppose that you know your predicament in advance, and suppose that you decline to combine pair C. Many decades later, you face the choice of whether to combine pair B. Your view seems to imply that you're not permitted to do so. There are thus cases where (all else being equal) you're not even permitted to create a person who would enjoy a wonderful life.
Here's my understanding of the dialectic here:
Me: Some wide views make the permissibility of pulling both levers depend on whether the levers are lashed together. That seems implausible. It shouldn't matter whether we can pull the levers one after the other.
Interlocutor: But lever-lashing doesn't just affect whether we can pull the levers one after the other. It also affects what options are available. In particular, lever-lashing removes the option to create both Amy and Bobby, and removes the option to create neither Amy nor Bobby. So if a wide view has the permissibility of pulling both levers depend on lever-lashing, it can point to these facts to justify its change in verdicts. These views can say: it's permissible to create just Amy when the levers aren't lashed because the other options are on the table; it's wrong to create just Amy when the levers are lashed because the other options are off the table.
Me: (Side note: this explanation doesn't seem particularly satisfying. Why does the presence or absence of these other options affect the permissibility of creating just Amy?) If that's the explanation, then the resulting wide view will say that creating just Amy is permissible in the four-button case. That's against the spirit of wide PAVs, so wide views won't want to appeal to this explanation to justify their change in verdicts given lever-lashing. So absent some other explanation, the implausible-seeming change in verdicts occasioned by lever-lashing remains unexplained, and so counts against these views.
Oops, yes: my case and Bruce's are fundamentally very similar. Should have read Bruce's comment!
The claim we're discussing - about the possibility of small steps of various kinds - sounds kinda like a claim that gets called 'Finite Fine-Grainedness'/'Small Steps' in the population axiology literature. It seems hard to argue for convincingly, so in this paper I present a problem for lexical views that doesn't depend on it. I sort of gestured at it above with the point about risk, without making it super precise. The one-line summary is that expected welfare levels are finitely fine-grained.