You said "Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+." The same argument would support 1 over 2.
Granted, but this example presents just a binary choice, with none of the added complexity of choosing between three options, so we can't infer much from it.
Then you said "Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?)." Similarly, I could say "Picking 2 is only motivated by an arbitrary decision to compare contingent people, merely because there's a minimum number of contingent people across outcomes (... so what?)"
Well, there is a necessary number of "contingent people", which seems similar to having necessary (identical) people, since in both cases not creating anyone is not an option. That's unlike Huemer's three-choice case, where A is an option.
I think ignoring irrelevant alternatives has some independent appeal.
I think there is a quite straightforward argument for why IIA is false. The paradox arises because we seem to have a cycle of binary comparisons: A+ is better than A, Z is better than A+, A is better than Z. The issue is that this assumes we can just break a three-option comparison down into three binary comparisons, which is arguably false, since doing so can lead to cycles. And when we want to avoid cycles while keeping binary comparisons, we have to assume we make some of the binary choices "first" and thereby rule out one of the remaining options, removing the cycle. So we either need a principled way of deciding on the "evaluation order" of the binary comparisons, or we have to reject the assumption that "x compared to y" is necessarily the same as "x compared to y, given z". Provided the latter actually removes the cycle, that is.
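To make the point about evaluation order concrete, here is a minimal sketch (my own illustration; the three pairwise judgments are simply hard-coded as assumptions): depending on which binary comparison is made first, each of the three options can end up as the final choice.

```python
# A small sketch, not from the original argument: the three pairwise judgments
# are hard-coded, and which option ends up chosen depends entirely on which
# binary comparison is made "first".

better = {
    ("A+", "A"): True,  # A+ is better than A
    ("Z", "A+"): True,  # Z is better than A+
    ("A", "Z"): True,   # A is better than Z
}

def beats(x, y):
    """True if x is judged better than y in a direct binary comparison."""
    if (x, y) in better:
        return better[(x, y)]
    return not better[(y, x)]

def choose(first_pair, third):
    """Make one binary comparison 'first', drop the loser, then compare the
    survivor against the remaining third option."""
    x, y = first_pair
    survivor = x if beats(x, y) else y
    return survivor if beats(survivor, third) else third

# Each evaluation order yields a different final choice:
print(choose(("A+", "Z"), "A"))   # A+ vs Z first -> A   (rule out A+ first)
print(choose(("A", "Z"), "A+"))   # A  vs Z first -> A+  (rule out Z first)
print(choose(("A", "A+"), "Z"))   # A  vs A+ first -> Z  (rule out A first)
```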
Another case where IIA leads to an absurd result is preference aggregation. Assume three equally sized groups (1, 2, 3) have these individual preferences:

- Group 1: P > Q > R
- Group 2: Q > R > P
- Group 3: R > P > Q
The obvious and obviously only correct aggregation would be P = Q = R, i.e. indifference between the three options. Which is different from what would happen if you took out any one of the three options and made it a binary choice, since each binary choice has a majority. So the "irrelevant" alternatives are not actually irrelevant, since they can determine a choice-relevant global property like a cycle. So IIA is false, since it would lead to a cycle. This seems not unlike the cycle we get in the repugnant conclusion paradox, although there the solution is arguably not that all three options are equally good.
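A quick way to see this is to compute the pairwise majorities directly. The sketch below is my own illustration using the profile above: every binary choice is decided by a 2-to-1 majority, yet the majorities chain into a cycle, so no transitive overall ranking can respect all of them.

```python
# My own illustration of the profile above: three equally sized groups with
# cyclic preferences over options P, Q, R.
from itertools import permutations

rankings = [
    ["P", "Q", "R"],  # group 1, best to worst
    ["Q", "R", "P"],  # group 2
    ["R", "P", "Q"],  # group 3
]

def groups_preferring(x, y):
    """Number of groups that rank x above y."""
    return sum(r.index(x) < r.index(y) for r in rankings)

for x, y in permutations(["P", "Q", "R"], 2):
    votes = groups_preferring(x, y)
    if votes > len(rankings) / 2:
        print(f"{x} beats {y}, {votes} to {len(rankings) - votes}")

# Prints: P beats Q, Q beats R, R beats P; a cycle, so the pairwise
# majorities cannot be assembled into any transitive overall ranking.
```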
There are some "more objective" facts about axiology or what we should do that don't depend on who presently, actually or across all outcomes necessarily exists (or even wide versions of this). What we should do is first constrained by these "more objective" facts. Hence something like step 1.
I don't see why this would be better than doing other comparisons first. As I said, this is the strategy of solving a three-option choice with binary comparisons, but in a particular order, so that we end up with two comparisons in total instead of three, since we rule out one option early. The question is why doing this or that binary comparison first, rather than another one, would be better. If we insist on comparing A and Z first, we would obviously rule out Z first, so we end up only comparing A and A+, while the comparison between A+ and Z is never made.
I wouldn't agree on the first point, because making Dasgupta's step 1 the "step 1" is, as far as I can tell, not justified by any basic principles. Ruling out Z first seems more plausible, as Z negatively affects the present people, even quite strongly so compared to A and A+. Ruling out A+ is only motivated by an arbitrary-seeming decision to compare just A+ and Z first, merely because they have the same population size (...so what?). The fact that non-existence is not involved here (a comparison to A) is just a result of that decision, not of there really being just two options.
Alternatively, there is the regret argument: that we would "realize", after choosing A+, that we made a mistake. But that intuition doesn't seem to be based on any strong principle either. (The intuition could also be misleading because we perhaps don't tend to imagine A+ as locked in.)
I agree though that the classification "person-affecting" alone probably doesn't capture a lot of potential intricacies of various proposals.
In the non-identity problem we have no alternative which doesn't affect a person, since we don't compare creating a person with not creating one, but rather creating one person with creating a different person. Not creating anyone isn't an option. So we have non-present but necessary persons, or rather a necessary number of additional persons. Then even person-affecting views should arguably say that, if you create someone anyway, creating someone with a great life is better than creating someone with a merely marginally good one.
But in the case of comparing A+ and Z (or variants) the additional people can't be treated as necessary because A is also an option.
> we'll have realized it was a mistake to not choose Z over A+ for the people who will then exist, if we had chosen A+.
Let's replace A with A' and A+ with A+'. A' has welfare level 4 instead of 100, and A+' has, for the original people, welfare level 200 instead of 101 (for a total of 299). According to your argument we should still rule out A+' because it's less fair than Z, even though the original people get 196 points more welfare in A+' than in A'. So we end up with A' and a welfare level of 4. That seems highly incompatible with ethics being about affecting persons.
It seems the relevant question is whether your original argument for A goes through. I think you pretty much agree that ethics requires persons to be affected, right? Then we have to rule out switching to Z from the start: Z would be actively bad for the initial people in S, and not switching to Z would not be bad for the new people in Z, since they don't exist.
Furthermore, it arguably isn't unfair when people are created (A+) if the alternative (A) would have been not to create them in the first place.[1] So choosing A+ wouldn't be unfair to anyone. A+ would only be unfair if we couldn't rule out Z. And indeed, it seems in most cases we in fact can't rule out Z with any degree of certainty for the future, since we don't have a lot of evidence that "certain kinds of value lock-in" would ensure we stay with A+ for all eternity. So choosing A+ now would mean it is quite likely that we'd have to choose between (continuing) A+ and switching to Z in the future, and switching would be equivalent to fair redistribution, and required by ethics. But this path (S -> A+ -> Z) would be bad for the people in initial S, and not good for the additional people in S+/Z who at this point do not exist. So we, in S, should choose A.
In other words, if S is the current situation, Z is bad, and A+ is good now (in fact currently a bit better than A), but choosing A+ would quite likely lead us onto a path where we are morally forced to switch from A+ to Z in the future, which would be bad from our current perspective (S). So we should play it safe and choose A now.
Once upon a time there was a group of fleas. They complained about the unfairness of their existence. "We all are so small, while those few dogs enjoy their enormous size! This is exceedingly unfair and therefore highly unethical. Size should have been distributed equally between fleas and dogs." The dog they inhabited heard them talking and replied: "If it weren't for us dogs, you fleas wouldn't exist in the first place. Your existence depended on our existence. We let you live in our fur. The alternative to your tiny size would not have been being larger, but not existing at all. To be small is not less fair than not to be at all." ↩︎
Your argument seems to be:
But that doesn't follow, because in 1 and 2 you did restrict yourself to two options, while there are three options in 3.
X isn't bad so much because it's unfair, but because they don't want to die. After all, fairly killing both people would be even worse.
There are other cases where the situation is clearly unfair. Two people committed the same crime; the first is sentenced to pay $1000, the second is sentenced to death. This is unfair to the people who are about to receive their penalty. Both subjects are still alive, and the outcome could still be changed. But in cases where it is decided whether lives are about to be created, the subjects don't exist yet, and not creating them can't be unfair to them.
Z already seems more fair than A+ before you decide which comes about; you're deciding between them ahead of time, not (necessarily just) entering one (whatever that would mean) and then switching.
Z seeming more fair than A+ arguably depends on the assumption that utility in A+ ought to (and therefore could) be redistributed to increase fairness. That contradicts the assumption of "aggregate whole lifetime welfare", since that assumption means switching (and increasing fairness) is ruled out from the start.
For example, the argument in these paragraphs mentions "fairness" and "regret", which only seems to make sense insofar as things could still be changed:
> However, I suspect we should pick A instead. With Z available, A+ seems too unfair to the contingent people and too partial to the necessary/present people. Once the contingent people exist, Z would have been better than A+. And if Z is still an option at that point, we'd switch to it. So, anticipating this reasoning, whether or not we can later make the extra people better off, I suspect we should rule out A+ first, and then select A over Z.
>
> I can imagine myself as one of the original necessary people in A. If we picked A+, I'd judge that to be too selfish of us and too unkind to the extra people relative to the much fairer Z. All of us together, with the extra people, would collectively judge Z to have been better. From my impartial perspective, I would then regret the choice of A+. On the other hand, if we (the original necessary people) collectively decide to stick with A to avoid Z and the unkindness of A+ relative to Z, it's no one else's business. We only hurt ourselves relative to A+. The extra people won't be around to have any claims.
"Once the contingent people exist, Z would have been better than A+." -- This arguably means "Switching from A+ to Z is good" which assumes that switching from A+ to Z would be possible.
The quoted argument for A seems correct to me, but the "unfairness" consideration requires that switching is possible. Otherwise one could simply deny that the concept of unfairness is applicable to A+. It would be like saying it's unfair to fish that they can't fly.
The author spends no time discussing the object level; he just points at examples where Scott says things that are outside the Overton window, but he doesn't give factual counterarguments to show where what Scott says is actually false.