
1. Introduction

According to person-affecting views (PAVs) in population ethics, adding happy people to the world is morally neutral. It’s neither good nor bad.

Are PAVs true? The question is important.

If PAVs are true, then the EA community is likely spending way too much time and money on reducing x-risk. After all, a supposed major benefit of reducing x-risk is that it increases the chance that lots of happy people come into existence. If PAVs are true, this ‘benefit’ is no benefit at all.

By contrast, if PAVs are false, then the EA community (and the world at large) is likely spending way too little time and money on reducing x-risk. After all, the future could contain a lot of happy people. So if adding happy people to the world is good, reducing x-risk is plausibly very good.

And if PAVs are false, it’s plausibly very important to ensure that people believe that PAVs are false. In spreading this belief, we reduce the risk of the following non-extinction failure-mode: humanity successfully navigates the transition to advanced AI but then creates way too few happy people.

So it’s important to figure out whether PAVs are true or false. The EA community has made efforts on this front, but the best-known arguments leave something to be desired. In particular, the arguments against PAVs mostly only apply to specific versions of these views.[1] Many other PAVs remain untouched.

Nevertheless, I think there are strong arguments against PAVs in general. In this post, I sketch out some of my favourites.

2. The simple argument

Before we begin, a quick terminological note. In this post, I use ‘happy people’ as shorthand for ‘people whose lives are good overall’ and ‘miserable people’ as shorthand for ‘people whose lives are bad overall.’

With that out of the way, let’s start with a simple argument:

The simple argument

1. Some things are good (for example: happiness, love, friendship, beauty, achievement, knowledge, and virtue).

2. By creating happy people, we can bring more of these good things into the world.

3. And the more good things, the better.

C1. Therefore, creating happy people can be good.

C2. Therefore, PAVs are false.

2.1. The classic PAV response

Advocates of PAVs reject this simple argument. The classic PAV response begins with the following two claims:[2]

The Person-Affecting Restriction

One outcome can’t be better than another unless it’s better for some person.

Existence Anticomparativism

Existing can’t be better or worse for a person than not-existing.

Each of these two claims seems tough to deny. Consider first the Person-Affecting Restriction. How could one outcome be better than another if it’s not better for anyone? Now consider Existence Anticomparativism. If existing could be better for a person than not-existing, then it seemingly must be that not-existing would be worse for that person than existing. But how can anything be better or worse for a person that doesn’t exist?[3]

So each of the two claims seems plausible, and they together imply that premise 3 of the simple argument is false: sometimes, bringing more good things into the world doesn’t make the world better. Here’s why. By creating a happy person, we bring more good things into the world. But our action isn’t better for this happy person (by Existence Anticomparativism), nor is it better for anyone else (by stipulation), and so it isn’t better for the world (by the Person-Affecting Restriction).

By reasoning in this way, advocates of PAVs can defuse the simple argument and defend their claim that creating happy people isn’t good.

2.2. The problem with the classic PAV response

Now for the problem. The Person-Affecting Restriction and Existence Anticomparativism don’t just together imply that creating happy people isn’t good. They also together imply that:

(a) Creating miserable people isn’t bad.

(b) Creating barely happy people isn’t worse than creating different, very happy people.[4]

Here’s why the Person-Affecting Restriction and Existence Anticomparativism together imply (a). Suppose that we create a miserable person. Our action isn’t worse for this miserable person (by Existence Anticomparativism), nor is it worse for anyone else (by stipulation), and so it isn’t worse for the world (by the Person-Affecting Restriction). So creating miserable people isn’t bad.

And here’s why the Person-Affecting Restriction and Existence Anticomparativism together imply (b). Suppose we have a choice between (i) creating a set of barely happy people, and (ii) creating an entirely different set of very happy people. Suppose that we create the barely happy people. Our action isn’t worse for the very happy people (by Existence Anticomparativism), nor is it worse for anyone else (by stipulation), and so it isn’t worse for the world (by the Person-Affecting Restriction). So creating barely happy people isn’t worse than creating different, very happy people.
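To make the shape of these derivations explicit, here’s a minimal sketch in Python. It models outcomes as dictionaries from people to welfare levels (the names and numbers are illustrative) and encodes the two claims as a toy betterness relation:

```python
def better_for(person, x, y):
    """Is outcome x better than outcome y for this person? Per Existence
    Anticomparativism, the comparison fails unless the person exists in
    both outcomes."""
    return person in x and person in y and x[person] > y[person]

def better_than(x, y):
    """Per the Person-Affecting Restriction, x is better than y only if
    it's better for some person."""
    return any(better_for(p, x, y) for p in set(x) | set(y))

status_quo    = {"carla": 50}
add_happy     = {"carla": 50, "newcomer": 80}
add_miserable = {"carla": 50, "newcomer": -80}
barely_happy  = {"carla": 50, "amy": 1}
very_happy    = {"carla": 50, "bobby": 100}

print(better_than(add_happy, status_quo))      # False: creating happy people isn't good
print(better_than(status_quo, add_miserable))  # False: (a) creating miserable people isn't bad
print(better_than(very_happy, barely_happy))   # False: (b) the non-identity problem
```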

But each of (a) and (b) seems false. It certainly seems like creating miserable people is bad, and that creating barely happy people is worse than creating different, very happy people. And that suggests that at least one of our premises is false: either the Person-Affecting Restriction or Existence Anticomparativism. Although these claims each seemed appealing at first, they together imply some very counterintuitive conclusions, so at least one of them must be incorrect.

And if at least one of these claims is incorrect, then the classic PAV response to the simple argument is undercut. After all, the classic response uses both the Person-Affecting Restriction and Existence Anticomparativism to object to premise 3 of the simple argument. If at least one of those claims is incorrect, then the objection to premise 3 no longer works, and so premise 3 (‘the more good things, the better’) is back to looking pretty compelling. And since premises 1 and 2 are hard to doubt, the simple argument as a whole is back to looking pretty compelling.

How might advocates of PAVs respond now? They could modify Existence Anticomparativism. The original claim is: ‘Existing can’t be better or worse for a person than not existing.’ Advocates of PAVs could replace it with ‘Existing can’t be better for a person than not existing.’ Then Existence Anticomparativism and the Person-Affecting Restriction would no longer together imply that creating miserable people isn’t bad. But if advocates of PAVs make this response, then they’ll have to find some way to explain the resulting asymmetry: if existing can be worse for a person than not existing, why can't it be better?[5]

And in any case, modifying Existence Anticomparativism doesn’t help PAVs avoid the other counterintuitive conclusion: creating barely happy people isn’t worse than creating different, very happy people. Advocates of PAVs will have to find some other way of dealing with that. This other counterintuitive conclusion is the famous non-identity problem for PAVs, and I’ll discuss it more below. Before that, let’s consider another argument against PAVs.

3. Tomi’s argument that creating happy people is good

This argument comes from my colleague Tomi Francis.[6] Let's represent lives that are neither good nor bad with a welfare level of 0, and let's represent wonderful lives with a welfare level of 100. Suppose that a hundred people already exist. You’re considering creating ten billion extra people. You have three options: A, B, and C. In A, the hundred already-existing people have welfare level 40, and only they exist. In B, the hundred already-existing people have welfare level 41, and the ten billion extra people also have welfare level 41. In C, the hundred already-existing people have welfare level 40, and the ten billion extra people have welfare level 100.

     One hundred people    Ten billion different people
A    40                    -
B    41                    41
C    40                    100

Here’s the argument. B is better than A, because B is better than A for the hundred already-existing people, and the ten billion extra people all have happy lives. And C is better than B, because moving to C makes a hundred people's lives slightly worse and ten billion people's lives much better. And betterness is transitive: if an outcome X is better than an outcome Y, and Y is better than an outcome Z, then X is better than Z. So since C is better than B, and B is better than A, C is better than A. And C and A are identical except for the extra ten billion people living happy lives in C. Therefore, it’s good to add happy people, and hence PAVs are false.
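The premises can be checked as simple numerical claims. Here’s a toy rendering in Python, using the stipulated group sizes and welfare levels (the variable names are illustrative):

```python
N_EXISTING, N_EXTRA = 100, 10_000_000_000

options = {
    "A": {"existing": 40, "extra": None},  # the extras are never created
    "B": {"existing": 41, "extra": 41},
    "C": {"existing": 40, "extra": 100},
}

# Premise 1: B is better than A. It's better for the hundred
# already-existing people, and every extra person in B has a happy life.
assert options["B"]["existing"] > options["A"]["existing"]
assert options["B"]["extra"] > 0

# Premise 2: C is better than B. A hundred people lose 1 unit of welfare
# each; ten billion people gain 59 units each.
loss = N_EXISTING * (options["B"]["existing"] - options["C"]["existing"])
gain = N_EXTRA * (options["C"]["extra"] - options["B"]["extra"])
assert gain > loss

# Premise 3: betterness is transitive, so C is better than A. And C just
# is A plus ten billion extra happy people.
```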

Tomi’s argument presents a new challenge to PAVs. The argument doesn’t employ any premise like ‘The more good things, the better,’ and so it can’t be defused by the Person-Affecting Restriction and Existence Anticomparativism.

3.1. A PAV response

How might advocates of PAVs respond to Tomi’s argument? One possibility is to claim that betterness is option-set dependent: whether an outcome X is better than an outcome Y can depend on what other outcomes are available as options to choose. In particular, advocates of PAVs could claim:

  • B is better than A when B and A are the only options
  • B is not better than A when C is also an option.

And advocates of PAVs could defend the second bullet-point in the following way: when C is available, B harms (or is unjust to) the ten billion extra people, because these extra people are better off in C.[7] And this harm/injustice prevents B from being better than A.
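Here’s a minimal sketch of how this option-set-dependent proposal might be modelled, with outcomes represented as dictionaries from groups to per-person welfare levels (the function names and the particular notion of ‘harm’ are my gloss on the response, not something advocates of PAVs are committed to):

```python
A = {"existing": 40}
B = {"existing": 41, "extra": 41}
C = {"existing": 40, "extra": 100}

def harmed_groups(x, option_set):
    """Groups that x leaves worse off than some available alternative."""
    return {g for alt in option_set for g in x if g in alt and alt[g] > x[g]}

def better_than(x, y, option_set):
    """x beats y only if it's better for some group that exists in both
    outcomes, and x harms no one relative to the available options."""
    pareto_gain = any(g in y and x[g] > y[g] for g in x)
    return pareto_gain and not harmed_groups(x, option_set)

print(better_than(B, A, [A, B]))     # True: B beats A when C is unavailable
print(better_than(B, A, [A, B, C]))  # False: B 'harms' the extras relative to C
```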

3.2. A problem with the PAV response

That’s a possible response. I don’t think it’s especially convincing. Choosing B doesn’t seem especially unjust to the ten billion extra people, given that they enjoy the same good welfare level as everyone else. Certainly, it doesn’t seem like the kind of injustice that should lead us to choose A instead, thereby not creating the extra people at all and making the already-existing people worse off.

And choosing B harms the extra ten billion people only in a technical sense of the word, according to which a person is harmed if and only if this person is worse off than they could have been. But this technical sense of the word ‘harm’ differs significantly from our ordinary sense of the word, as is made clear by the following example. Suppose I could give a total stranger £0, £10 or £11. In the technical sense, I’d harm this stranger if I gave them £10, since I leave them worse off than they could have been. But I needn’t be harming them in the ordinary sense, and the same goes for the ten billion extra people in B. Their lives at welfare level 41 could be lives of moderate happiness, with little suffering.

In sum, I think Tomi’s argument presents a real challenge to PAVs.

4. The non-identity problem

Now let’s get back to the non-identity problem. Here’s a recap of how that goes. If the Person-Affecting Restriction and Existence Anticomparativism are both true, then creating a barely happy person is not worse than creating a different, very happy person. That conclusion seems implausible, and so casts doubt on the premises. How might advocates of PAVs respond?

One response is to bite the bullet. Advocates of PAVs can embrace the implausible-seeming conclusion, and thereby hold on to the Person-Affecting Restriction and Existence Anticomparativism. But that’s not as straightforward as it seems, because here’s another, independent argument from Tomi against the implausible-seeming conclusion.

4.1. Tomi’s argument that creating happier people is better

Suppose that Adam already exists. You’re considering creating Eve or Steve. You have three options: D, E, and F. In D, Adam has welfare level 99 and Eve will be created with welfare level 100. In E, Adam has welfare level 100 and Eve will be created with welfare level 99. In F, Adam has welfare level 99 and Steve will be created with welfare level 1.

     Adam    Eve    Steve
D    99      100    -
E    100     99     -
F    99      -      1

Here’s the argument. D is equally good as E, because D and E just swap Adam’s and Eve’s welfare levels, and Adam and Eve are equally morally important. And E is better than F, because E is better for Adam, and it replaces worse-off Steve with better-off Eve. And betterness is transitive in the relevant sense: D is equally good as E, and E is better than F, so D is better than F. And Adam’s welfare level is the same in D as in F; the only difference is that D replaces worse-off Steve with better-off Eve. So creating a very happy person is better than creating a different, barely happy person. Since the combination of the Person-Affecting Restriction and Existence Anticomparativism implies the contrary, at least one of these latter two claims must be false.
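As before, the premises can be rendered as a toy check (welfare levels as stipulated; comparing multisets of welfare levels is an illustrative way of capturing ‘equally morally important’):

```python
from collections import Counter

D = {"adam": 99, "eve": 100}
E = {"adam": 100, "eve": 99}
F = {"adam": 99, "steve": 1}

# Premise 1: D is equally good as E. The outcomes contain the same
# welfare levels, merely swapped between equally important people.
assert Counter(D.values()) == Counter(E.values())

# Premise 2: E is better than F. It's better for Adam, and it replaces
# worse-off Steve (welfare 1) with better-off Eve (welfare 99).
assert E["adam"] > F["adam"] and E["eve"] > F["steve"]

# By transitivity, D is better than F. Yet Adam fares exactly the same
# in D and F; the only difference is Eve (100) in place of Steve (1).
assert D["adam"] == F["adam"]
```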

4.1.1. A PAV response

So advocates of PAVs can’t just bite the bullet on the non-identity problem. They also have to reckon with Tomi’s argument. How might they do that?

One possibility is to shift gears. So far, we’ve been arguing about the axiological facts: facts about what’s good and bad, better and worse. But advocates of PAVs can claim that it’s the deontic facts that are central to morality: facts about what’s morally permissible and morally required. This shift in gears gives PAVs a little more room to manoeuvre, since one might well think that we’re not always morally required to do what’s best. In particular, PAVs could concede that creating better-off Eve is better than creating worse-off Steve, but nevertheless maintain that we’re morally permitted to create worse-off Steve. Or PAVs could concede that creating happy people is good, but nevertheless maintain that we’re morally permitted not to create them (in cases where all else is equal). Now let’s consider these views.

5. Deontic PAVs

At the start of this post I wrote that, according to person-affecting views (PAVs), adding happy people to the world is neither good nor bad. I can now be more precise and call these ‘axiological PAVs’. Related but distinct are deontic PAVs, which say that (in cases where all else is equal) we’re morally permitted but not required to add happy people to the world. As I noted above, retreating to purely deontic PAVs offers a means of escape from some of the arguments of the previous sections.

But there are other arguments that tell against deontic PAVs. To explain these arguments, let’s first distinguish between two kinds of deontic PAV. Consider the following case:

Non-Identity

(1) Amy 1

(2) Bobby 100

Here option (1) is creating Amy with a barely good life at welfare level 1. Option (2) is creating Bobby with a wonderful life at welfare level 100. The first kind of deontic PAV – a narrow view – says that each option is permissible. We’re morally permitted to create the person with the worse life.[8] The second kind of deontic PAV – a wide view – says that only (2) is permissible. We’re morally required to create the person with the better life.[9]

I’ll sketch out arguments against each of these views in turn. This paper presents the arguments in more detail.

5.1. A trilemma for narrow views

Here’s a problem for narrow views. Consider:

Expanded Non-Identity

(1) Amy 1

(2) Bobby 100

(3) Amy 10, Bobby 10

Here we’ve added a third option to Non-Identity. The first two options are as before: create Amy with a barely good life at welfare level 1 or create Bobby with a wonderful life at welfare level 100. The new third option is to create both Amy and Bobby with mediocre lives at welfare level 10.

Narrow views imply that each of (1) and (2) are permissible when these are the only available options. What should they say when (3) is also an option? I’ll argue that they must say at least one of three implausible things, so that narrow views face a trilemma.

Option (1) remains permissible

The first thing they could say is that option (1) – creating Amy with a barely good life at welfare level 1 – remains permissible when we move from Non-Identity to Expanded Non-Identity. But that claim implies:

Permissible to Choose Dominated Options

There are option sets in which we’re permitted to choose some option X even though there’s some other available option Y that dominates X. That is to say, (i) everyone in X is better off in Y, (ii) everyone who exists in Y but not X has a good life, and (iii) Y is perfectly equal.

That’s because (1) is dominated by (3): (3) creates only people with good lives, it leads to perfect equality, and it’s better than (1) for Amy, the only person who exists in (1). It thus seems implausible that (1) is permissible.
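Here’s a minimal sketch of this dominance check, assuming a ‘good life’ threshold of 0 and representing options as dictionaries from people to welfare levels:

```python
def dominates(y, x, good_level=0):
    """Does option y dominate option x? Per the definition above:
    (i) everyone in x is better off in y, (ii) everyone in y but not in x
    has a good life, and (iii) y is perfectly equal."""
    better_for_all = all(p in y and y[p] > x[p] for p in x)
    newcomers_good = all(w > good_level for p, w in y.items() if p not in x)
    perfectly_equal = len(set(y.values())) <= 1
    return better_for_all and newcomers_good and perfectly_equal

option_1 = {"amy": 1}
option_2 = {"bobby": 100}
option_3 = {"amy": 10, "bobby": 10}

print(dominates(option_3, option_1))  # True: (3) dominates (1)
print(dominates(option_2, option_1))  # False: Amy doesn't exist in (2)
```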

Option (3) is permissible

Here’s something else that narrow views could say about Expanded Non-Identity: option (3) – creating Amy and Bobby with mediocre lives at welfare level 10 – is permissible. But that claim implies:

Permissible to Do Serious Harm for Mediocre Creation

There are option sets in which we’re permitted to choose some option X even though – relative to some other available option Y – all X does is seriously harm one person and create another person with a mediocre life.

That’s because (3) is mediocre for Amy and much worse than (2) for Bobby: Bobby’s welfare level is 100 in (2) and 10 in (3). And we can imagine variations on Expanded Non-Identity in which Bobby’s welfare level in (2) is arbitrarily high. The higher Bobby’s welfare level in (2), the more implausible it is to claim that we’re permitted to choose (3).

Only option (2) is permissible

Now we can complete the trilemma for narrow views. If neither of (1) and (3) is permissible in Expanded Non-Identity, it must be that only (2) is permissible. But if only (2) is permissible, then narrow views imply:

Losers Can Dislodge Winners

Adding some option X to an option set can make it wrong to choose a previously-permissible option Y, even though choosing X is itself wrong in the resulting option set.

That’s because narrow views imply that each of (1) and (2) is permissible in Non-Identity. So if only (2) is permissible in Expanded Non-Identity, then adding (3) to our option set has made it wrong to choose (1) even though choosing (3) is itself wrong in Expanded Non-Identity.

That’s a peculiar implication. It’s a deontic version of an old anecdote about the philosopher Sidney Morgenbesser. Here’s how that story goes. Morgenbesser is offered a choice between apple pie and blueberry pie, and he orders the apple. Shortly after, the waiter returns to say that cherry pie is also an option, to which Morgenbesser replies, ‘In that case, I’ll have the blueberry.’

That’s a strange pattern of preferences. The pattern is even stranger in our deontic case. Imagine instead that the waiter is offering Morgenbesser the options in Expanded Non-Identity.[10] Initially the choice is between (1) and (2), and Morgenbesser permissibly opts for (1). Then the waiter returns to say that (3) is also an option, to which Morgenbesser replies, ‘In that case, I’m morally required to switch to (2).’ The upshot is that the waiter can force Morgenbesser’s hand by adding options that are wrong to choose in the resulting option set. And turning the case around, the waiter could expand Morgenbesser’s menu of permissible options by taking wrong options off the table. That seems implausible.

Summarising the trilemma

Now the trilemma for narrow person-affecting views is complete and I can summarise. If these views say that (1) is permissible in Expanded Non-Identity, they imply that it’s Permissible to Choose Dominated Options. If they say that (3) is permissible, they imply that it’s Permissible to Do Serious Harm for Mediocre Creation. And if they say that only (2) is permissible, they imply Losers Can Dislodge Winners. Each of these implications is implausible.

5.2. A trilemma for wide views

Now let’s consider wide views. Recall that these views say that we’re morally required to create the better-off person in cases like Non-Identity:

Non-Identity

(1) Amy 1

(2) Bobby 100

Wide views thus avoid the trilemma above. They can say that only (2) is permissible in Expanded Non-Identity without implying Losers Can Dislodge Winners. However, wide views imply a trilemma of their own. To see how, consider first:

One-Shot Non-Identity

This case is a cosmetic variation of Non-Identity in which Amy’s and Bobby’s existence will be determined by the positions of two levers. By leaving the left lever up, we decline to create Amy. By pulling the left lever down, we create her at welfare level 1. By leaving the right lever up, we create Bobby at welfare level 100. By pulling the right lever down, we decline to create him. Crucially, the levers are lashed together, so our only options are pulling both levers or pulling neither. Wide views thus imply that pulling both levers is wrong. After all, pulling both levers means creating Amy at welfare level 1 and declining to create Bobby at welfare level 100.

Now consider:

Two-Shot Non-Identity

In this case, the levers are no longer lashed together. We first decide whether to pull the first lever, lock that choice in, and then decide whether to pull the second lever.

I now use these cases to argue against wide person-affecting views. Assume – for contradiction – any wide person-affecting view. Per the ‘wide’ part of such views, it’s wrong to pull both levers in One-Shot Non-Identity. Now assume that the wrongness of pulling both levers doesn’t depend on whether the levers are lashed together. Then it’s also wrong to pull both levers in Two-Shot Non-Identity. Assume also that it’s not wrong to pull the first lever in Two-Shot Non-Identity. Then if we’ve pulled the first lever, it must be wrong to pull the second lever. Finally, assume that the wrongness of pulling the second lever doesn’t depend on past choices. Then it must be wrong to pull the second lever regardless of whether we’ve pulled the first lever. But if that’s the case, then we’re required to create Bobby at welfare level 100. After all, that’s what we do by declining to pull the second lever. This verdict is contrary to the ‘person-affecting’ part of wide person-affecting views. We’ve reached a contradiction.
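The structure of the reductio can be laid out as a schematic trace, with booleans standing in for deontic verdicts (purely illustrative):

```python
# 'Pull both' means creating Amy (welfare 1) and declining to create
# Bobby (welfare 100).

# The wide verdict in One-Shot Non-Identity:
wrong_pull_both_one_shot = True

# Assumption 1: lashing the levers together makes no moral difference.
wrong_pull_both_two_shot = wrong_pull_both_one_shot

# Assumption 2: pulling the first lever (creating Amy) is not wrong, so
# the wrongness of pulling both must attach to the second pull.
wrong_second_after_first = wrong_pull_both_two_shot

# Assumption 3: the second pull's status doesn't depend on past choices.
wrong_second_always = wrong_second_after_first

# Never pulling the second lever just is creating Bobby, so wide views
# end up requiring the creation of a happy person, contrary to their
# person-affecting component. Contradiction.
required_to_create_bobby = wrong_second_always
assert required_to_create_bobby
```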

Therefore, advocates of wide views must reject at least one of my argument’s three assumptions. I now argue that doing so commits them to saying at least one of three implausible things.

Wrongness Depends on Lever-Lashing

To reject the first assumption, advocates of wide views must claim that:

Wrongness Depends on Lever-Lashing

The wrongness of pulling both levers (thereby creating Amy and declining to create Bobby) depends on whether the levers are lashed together. When the levers are lashed together, pulling both levers is wrong. When the lashing is cut, pulling both levers is permissible.

This response is a deontic analogue of myopic choice (McClennen 1990, 12). Myopic choosers sometimes do in two steps what they’d never do in one. The response implies that you’re sometimes permitted to do in two steps what you’re forbidden from doing in one.

Like myopic choice, Wrongness Depends on Lever-Lashing is unpromising on its face. Pulling both levers should either be wrong in both cases or permissible in both cases. It shouldn’t matter whether we can pull them one after the other. After all, it doesn’t matter to Amy or Bobby whether you pull the levers one after the other.

Pulling the First Lever is Wrong

To reject the second assumption of my argument, advocates of wide views must claim that:

Pulling the First Lever is Wrong

In Two-Shot Non-Identity, pulling the first lever (thereby creating Amy) is wrong.

That allows advocates of wide views to say that pulling the second lever is permissible. This response takes inspiration from sophisticated choice (McClennen 1990, 12). Sophisticated choosers predict the choices that they’d make at later timesteps and use these predictions to determine the options available to them at earlier timesteps. This process sometimes prevents them from making earlier choices that they’d otherwise have made. The response in question puts a deontic spin on this general idea. Perhaps the most natural way of making it precise is as follows. Since you might later decline to create Bobby, creating Amy exposes you to a risk of creating only Amy: the one course of action that wide views deem wrong in One-Shot Non-Identity. By contrast, if you don’t create Amy, there’s no chance that you’ll create only Amy and hence no chance that you’ll do what’s wrong according to wide views. Therefore, it’s wrong to pull the first lever and create Amy.

This response is implausible. Pulling the first lever creates Amy with a good life at welfare level 1, and it leaves open the possibility of later creating Bobby with a wonderful life at welfare level 100. The response is even more implausible in a minor variant of Two-Shot Non-Identity in which Amy’s welfare level is 99 instead of 1. In this case, it’s especially hard to believe that creating Amy is wrong. And supposing (as seems natural) that creating Bobby can’t undo any prior wrongness of creating Amy, the resulting wide view implies that it’s impossible to create both Amy and Bobby without acting wrongly. That seems very counterintuitive.

Generalising beyond Two-Shot Non-Identity, the wide views in question prohibit creating a person with a good life whenever you’ll later have the chance to create a person with an even better life, even if creating the first person doesn’t preclude creating the second person. In cases where all else is equal, prospective parents are forbidden from having children until they’ve hit the peak of their welfare-providing powers. That verdict seems undesirable.

Wrongness Depends on First Lever

To reject the third assumption of my argument, advocates of wide views must claim that:

Wrongness Depends on First Lever

Pulling the second lever (thereby declining to create Bobby) is wrong if and only if you’ve previously pulled the first lever (thereby creating Amy).

The response is thus a deontic analogue of resolute choice (McClennen 1990, 13). Resolute choosers sometimes turn down options that they might have chosen had their past choices been different. The response implies that you’re sometimes forbidden from choosing options that you could permissibly have chosen had your past choices been different.

The first thing to say about this response is that it retreats from a deontic person-affecting view, at least as I’ve characterised deontic person-affecting views in this post. That’s because the response concedes that there are cases in which (all else equal) we’re required to create people who would enjoy good lives. Two-Shot Non-Identity is one such case. If you’ve previously created Amy, you’re required to create Bobby. This implication won’t be welcomed by those inclined towards person-affecting views. After all, it runs counter to a major motivation for such views: granting broad latitude to those in a position to create good lives.

The second thing to say about the response is more straightforward: it seems implausible to claim that we’re required to create a better-off person if and only if we previously created a worse-off person. To pump intuitions here, suppose that a friend is considering having a child and comes to you for moral advice. Per the response, you not only need to ask your friend the usual questions about the child’s likely quality of life and how the child might affect existing people. You also need to ask your friend about their past procreative choices. If in the past your friend had a child with a worse life than this new child would have, your friend must have the new child to avoid wrongdoing. And now reversing the order of the cases: if in the past your friend declined to have a child with a better life than this new child would have, your friend must not have the new child. This latter implication seems especially implausible. The new child’s life could be wonderful, but if your friend previously declined to have a child with an even better life, your friend is not even permitted to create them. The response thus implies that there are cases in which (all else equal) we are not even permitted to create a person who would enjoy a wonderful life.

5.3. Summarising the case against deontic PAVs

My argument against deontic PAVs is a dilemma over trilemmas. The initial branching point is Non-Identity: narrow views are those person-affecting views that permit us to create the worse-off person, and wide views are those person-affecting views that require us to create the better-off person.

The fork for narrow views is a trilemma centred around Expanded Non-Identity. These views imply Permissible to Choose Dominated Options, or Permissible to Do Serious Harm for Mediocre Creation, or Losers Can Dislodge Winners.

The fork for wide views is a trilemma centred around Two-Shot Non-Identity. These views imply Wrongness Depends on Lever-Lashing, or Pulling the First Lever is Wrong, or Wrongness Depends on First Lever.

6. Conclusion

It’s important to figure out whether person-affecting views (PAVs) are true or false. If PAVs are true, we should be spending less on reducing x-risk. If PAVs are false, we (and the world at large) should be spending more on reducing x-risk, and we should be wary of the potential post-AGI failure-mode of creating way too few happy people.

I think that PAVs are false, but I also think that extant arguments against PAVs are weak. In this post, I’ve sketched out some arguments that I like better. I began with the simple argument and laid out problems for the classic PAV response. I then explained two arguments from Tomi Francis. These arguments imply that it’s good to create happy people, and better to create happier people. I then considered two kinds of deontic PAV – narrow views and wide views – and presented arguments against those. Narrow views face a trilemma in my Expanded Non-Identity case. Wide views face a trilemma in my Two-Shot Non-Identity case.

  1. ^
  2. ^

     See Narveson (1967) for an early version of this response.

  3. ^

     Broome (1999, p.168) makes this argument. Greaves and Cusbert (2022) respond.

  4. ^

     The fact that the Person-Affecting Restriction and Existence Anticomparativism together imply (b) is known as the ‘non-identity problem’.

  5. ^

     Nebel (2019) offers one explanation.

  6. ^

     See his paper for more detail.

  7. ^

     See, for example, Roberts (2011), Meacham (2012), and Frick (2022).

  8. ^
  9. ^
  10. ^

     It’s a very unusual restaurant.

Comments

I'm curating this post. I really enjoyed the argument-rebuttal format used here, and it does a great job of laying out the common flavours of PAV arguments.

Here's my defense against both of Tomi's arguments. Remember, in PAVs, an outcome can only be better or worse if it is better or worse for someone. The utility of adding a person is undefined. It's not zero. Consider the first problem. We can say that scenario B is better for the 100 existing people. We cannot say that scenario B is better or worse for the ten billion people who do not exist. We therefore cannot say that scenario B is better for the union of these two groups because a positive quantity plus undefined is just undefined. C is, however, better than B for all people together because we are now comparing the same groups.

The same logic applies to the scenario of Adam, Eve and Steve and prevents any issue.
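In toy form, this proposal treats ‘undefined’ like NaN, which propagates through arithmetic. A minimal sketch, using the A/B/C case from the post and per-group totals as an illustrative simplification:

```python
NAN = float("nan")

A = {"existing": 40}
B = {"existing": 41, "extra": 41}
C = {"existing": 40, "extra": 100}
SIZES = {"existing": 100, "extra": 10_000_000_000}

def group_benefit(group, x, y):
    """Total welfare change for a group in moving from y to x; undefined
    (NaN) if the group doesn't exist in both outcomes."""
    if group in x and group in y:
        return SIZES[group] * (x[group] - y[group])
    return NAN

# B vs A: +1 each for existing people, undefined for the extras.
print(sum(group_benefit(g, B, A) for g in B))  # nan: no overall verdict
# C vs B: both groups exist in both outcomes, so the comparison is defined.
print(sum(group_benefit(g, C, B) for g in C))  # 589999999900 > 0: C beats B
```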

Nice point, but I think it comes at a serious cost.

To see how, consider a different case. In X, ten billion people live awful lives. In Y, those same ten billion people live wonderful lives. Clearly, Y is much better than X. 

Now consider instead Y*, which is exactly the same as Y except that we also add one extra person, also with a wonderful life. As before, Y* is much better than X for the original ten billion people. If we say that the value of adding the extra person is undefined and that this undefined value renders the value of the whole change from X to Y* undefined, we get the implausible result that Y* is not better than X. Given plausible principles linking betterness and moral requirements, we get the result that we're permitted to choose X over Y*. That seems very implausible, and so it counts against the claim that adding people results in undefined comparisons.

In my post Population Ethics Without [An Objective] Axiology, I argued that person-affecting views are IMO underappreciated among effective altruists.

Here’s my best attempt at a short version of my argument:

  • The standard critiques of person-affecting views are right in pointing out how person-affecting views don’t give satisfying answers to “what’s best for possible people/beings.”
  • However, they are wrong in thinking that this is a problem.
  • It’s only within the axiology-focused approach (common in EA and utilitarian-tradition academic philosophy) that a theory of population ethics must tell us what’s best for both possible people/beings and for existing (or sure-to-exist) people/beings simultaneously.
  • Instead, I think it’s okay for EAs who find Narveson’s slogan compelling to reason as follows:
    (1) I care primarily about what’s best for existing (and sure-to-exist) people/beings.
    (2) When it comes to creating or not creating people/beings whose existence depends on my actions, all I care about is following some minimal notion of “don’t be a jerk.” That is, I wouldn’t want to do anything that disregards the interests of such possible people/beings according to all plausible axiological accounts, but I’m okay with otherwise just not focusing on possible people/beings all that much.
  • We can think of this stance as analogous to: 
    • The utilitarian parent: “I care primarily about doing what’s best for humanity at large, but I wouldn’t want to neglect my children to such a strong degree that all defensible notions of how to be a decent parent state that I fucked up.”
  • Just like the utilitarian parent had to choose between two separate values (their own children vs humanity at large), the person with person-affecting life goals had to choose between two values as well (existing-and-sure-to-exist people/beings vs possible people/beings).
    • The person with person-affecting life goals: “I care primarily about doing what’s best for existing and sure-to-exist people/beings, but I wouldn’t want to neglect the interests of possible people/beings to such a strong degree that all defensible notions of how to be a decent person towards them state that I fucked up.” 
  • Note that it's not like only advocates of person-affecting morality have to make such a choice. Analogously: 
    • The person with totalist/strong longtermist life goals: “I care primarily about doing what’s best according to my totalist axiology (i.e., future generations whose existence is optional), but I wouldn’t want to neglect the interests of existing people to such a strong degree that all defensible notions of how to be a decent person towards them state that I fucked up.”
  • Anyway, for the person with person-affecting life goals, when it comes to cases like whether it's permissible for them to create individual new people, or bundles of people (one at welfare level 100, the other at 1), or similar cases spread out over time, etc., it seems okay that there isn't a single theory that fulfills both of the following conditions: 
    (1) The theory has the 'person-affecting' properties (e.g., it is the sort of theory that people who find Narveson's slogan compelling would want).
    (2) The theory gives us precise, coherent, non-contradictory guidelines on what's best for newly created people/beings. 
  • Instead, I'd say what we want is to drop (2), and come up with an alternative theory that fulfills only (1) and (3):
    (1) The theory has the 'person-affecting' properties (e.g., it is the sort of theory that people who find Narveson's slogan compelling would want).
    (3) The theory contains some minimal guidelines of the form "don't be a jerk" that tell us what NOT to do when it comes to creating new people/beings. The things it allows us to do are acceptable, even though it's true that someone who cares maximally about possible people/beings on a specific axiological notion of caring [but remember that there's no universally compelling solution here!]) could have done "better". (I put "better" in quotation marks because it's not better in an objectivist moral realist way, just "better" in a sense where we introduce a premise that our actions' effects on possible people/beings are super important.)

What I'm envisioning under (3) is quite similar to how common-sense morality thinks about the ethics of having children. IMO, common-sense morality would say that: 

  • People are free to decide against becoming parents.
  • People who become parents are responsible towards their children. 
  • It's not okay to have a child and then completely abandon them, or to decide to have an unhappy child if you could've chosen a happier child at basically no cost.
  • If the parents can handle it, it's okay for parents to have 8+ children, even if this lowers the resources available per child.
  • The responsibility towards one's children isn't absolute (e.g., if the children are okay, parents aren't prohibited from donating to charity even though the money could further support their children).

The point being: The ethics of having children is more about "here's how not to do it" rather than "here's the only acceptable best way to do it."

--

The longer version of the argument is in my post. My view there relies on a few important premises:

  • Moral anti-realism
  • Adopting a different ethical ontology from “something has intrinsic value”

I can say a bit more about these here.

As I write in the post: 

I see the axiology-focused approach, the view that “something has intrinsic value,” as an assumption in people’s ethical ontology.

The way I’m using it here, someone’s “ontology” consists of the concepts they use for thinking about a domain – how they conceptualize their option space. By proposing a framework for population ethics, I’m (implicitly) offering answers to questions like “What are we trying to figure out?”, “What makes for a good solution?” and “What are the concepts we want to use to reason successfully about this domain?”

Discussions about changing one’s reasoning framework can be challenging because people are accustomed to hearing object-level arguments and interpreting them within their preferred ontology.

For instance, when first encountering utilitarianism, someone who thinks about ethics primarily in terms of “there are fundamental rights; ethics is about the particular content of those rights” would be turned off. Utilitarianism doesn’t respect “fundamental rights,” so it’ll seem crazy to them. However, asking, “How does utilitarianism address the all-important issue of [concept that doesn’t exist within the utilitarian ontology]” begs the question. To give utilitarianism a fair hearing, someone with a rights-based ontology would have to ponder a more nuanced set of questions.

So, let it be noted that I’m arguing for a change to our reasoning frameworks. To get the most out of this post, I encourage readers with the “axiology-focused” ontology to try to fully inhabit[8] my alternative framework, even if that initially means reasoning in a way that could seem strange.

To get a better sense of what I mean by the framework that I'm arguing against, see here:

Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”

[...]

The axiology-focused approach goes as follows. First, there’s the search for an axiology, a theory of (intrinsic) value. (E.g., the axiology may state that good experiences are what’s valuable.) Then, there’s further discussion on whether ethics contains other independent parts or whether everything derives from that axiology. For instance, a consequentialist may frame their disagreement with deontology as follows. “Consequentialism is the view that making the world a better place is all that matters, while deontologists think that other things (e.g., rights, duties) matter more.” Similarly, someone could frame population-ethical disagreements as follows. “Some philosophers think that all that matters is more value in the world and less disvalue (“totalism”). Others hold that further considerations also matter – for instance, it seems odd to compare someone’s existence to never having been born, so we can discuss what it means to benefit a person in such contexts.”

In both examples, the discussion takes for granted that there’s something that’s valuable in itself. The still-open questions come afterward, after “here’s what’s valuable.”

In my view, the axiology-focused approach prematurely directs moral discourse toward particular answers. I want to outline what it could look like to “do population ethics” without an objective axiology or the assumption that “something has intrinsic value.”

To be clear, there’s a loose, subjective meaning of “axiology” where anyone who takes systematizing stances[1] on moral issues implicitly “has an axiology.” This subjective sense isn’t what I’m arguing against. Instead, I’m arguing against the stronger claim that there exists a “true theory of value” based on which some things are “objectively good” (good regardless of circumstance, independently of people’s interests/goals).[2]

(This doesn’t leave me with “anything goes.” In my sequence on moral anti-realism, I argued that rejecting moral realism doesn’t deserve any of the connotations people typically associate with “nihilism.” See also the endnote that follows this sentence.[3])

Note also that when I criticize the concept of “intrinsic value,” this isn’t about whether good things can outweigh bad things. Within my framework, one can still express beliefs like “specific states of the world are worthy of taking serious effort (and even risks, if necessary) to bring about.” Instead, I’m arguing against the idea that good things are good because of “intrinsic value.”

So, the above quote described the framework I want to push back against.

The alternative ethical ontology I’m proposing is 'anti-realist' in the sense of: There’s no such thing as “intrinsic value.”

Instead, I view ethics as being largely about interests/goals. 

From that "ethics is about interests/goals" perspective, population ethics seems clearly under-defined. First off, it's under-defined how many new people/beings there will be (with interests and goals). And secondly, it's under-defined which interests/goals new people/beings will have. (This depends on who you choose to create!)

With these building blocks, I can now sketch the summary of my overall population-ethical reasoning framework (this summary is copied from my post but lightly adapted):

  • Ethics is about interests/goals.
  • Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
  • The rule “focus on interests/goals” has comparatively clear implications in fixed population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps even help them where it’s easy and our comparative advantage). The ambitious morality of “do the most moral/altruistic thing” coincides with something like preference utilitarianism.
  • On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results:[4]
    • The number of interests/goals isn’t fixed
    • The types of interests/goals aren’t fixed
  • This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
  • Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls. (In other words: It likely won't be possible to unify these perspectives in a way that'll be satisfying for anyone.) 
  • People with the motivation to dedicate (some of) their life to “doing the most moral/altruistic thing” will want clear guidance on what to do/pursue. To get this, they must adopt personal (but defensible), population-ethically-complete specifications of the target concept of “doing the most moral/altruistic thing.” (Or they could incorporate a compromise, as in a moral parliament between different plausible specifications.) 
  • Just like the concept “athletic fitness” has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does “doing the most moral/altruistic thing.”
  • In particular, there’s a tradeoff where cashing out this target concept primarily according to the perspective of other existing people leaves less room for altruism on the second perspective (that of newly created people/beings) and vice versa.
  • Accordingly, people can think of “population ethics” in several different (equally defensible)[5] ways:
    • Subjectivist person-affecting views: I pay attention to creating new people/beings only to the minimal degree of “don’t be a jerk” while focusing my caring budget on helping existing (and sure-to-exist) people/beings.
    • Subjectivist totalism: I count appeals from possible people/beings just as much as existing (or sure-to-exist) people/beings. On the question “Which appeals do I prioritize?” my view is, “Ones that see themselves as benefiting from being given a happy existence.”
    • Subjectivist anti-natalism: I count appeals from possible people/beings just as much as existing (or sure-to-exist) people/beings. On the question “Which appeals do I prioritize?” my view is, “Ones that don’t mind non-existence but care to avoid a negative existence.”
  • The above descriptions (non-exhaustively) represent “morality-inspired” views about what to do with the future. The minimal morality of “don’t be a jerk” still applies to each perspective and recommends cooperating with those who endorse different specifications of ambitious morality.
  • One arguably interesting feature of my framework is that it makes standard objections against person-affecting views no longer seem (as) problematic. A common opinion among effective altruists is that person-affecting views are difficult to make work.[6] In particular, the objection is that they give unacceptable answers to “What’s best for new people/beings.”[7] My framework highlights that maybe person-affecting views aren’t meant to answer that question. Instead, I’d argue that someone with a person-affecting view has answered a relevant earlier question so that “What’s best for new people/beings” no longer holds priority. Specifically, to the question “What’s the most moral altruistic/thing?,” they answered “Benefitting existing (or sure-to-exist) people/beings.” In that light, under-definedness around creating new people/beings is to be expected – it’s what happens when there’s a tradeoff between two possible values (here: the perspective of existing/sure-to-exist people and that of possible people) and someone decides that one option matters more than the other.

Maybe worth writing this as a separate post (a summary post) you can link to, given its length?

You should read the post! Section 4.1.1 makes the move that you suggest (rescuing PAVs by de-emphasising axiology). Section 5 then presents arguments against PAVs that don't appeal to axiology. 

Sorry, I hate it when people comment on something that has already been addressed.

FWIW, though, I had read the paper the day it was posted on the GPI fb page. At that time, I didn't feel like my point about "there is no objective axiology" fit into your discussion.

I feel like even though you discuss views that are "purely deontic" instead of "axiological," there are still some assumptions from the axiology-based framework that underlie your conclusion about how to reason about such views. Specifically, when explaining why a view says that it would be wrong to create only Amy but not Bobby, you didn't say anything that suggests understanding of "there is no objective axiology about creating new people/beings."

That said, re-reading the sections you point to, I think it's correct that I'd need to give some kind of answer to your dilemmas, and what I'm advocating for seems most relevant to this paragraph:

5.2.3. Intermediate wide views

Given the defects of permissive and restrictive views, we might seek an intermediate wide view: a wide view that is sometimes permissive and sometimes restrictive. Perhaps (for example) wide views should say that there’s something wrong with creating Amy and then later declining to create Bobby in Two-Shot Non-Identity if and only if you foresee at the time of creating Amy that you will later have the opportunity to create Bobby. Or perhaps our wide view should say that there’s something wrong with creating Amy and then later declining to create Bobby if and only if you intend at the time of creating Amy to later decline to create Bobby.

At the very least, I owe you an explanation of what I would say here.

I would indeed advocate for what you call the "intermediate wide view," but I'd motivate this view a bit differently.

All else equal, IMO, the problem with creating Amy and then not creating Bobby is that these specific choices, in combination, and if it would have been low-effort to choose differently (or the other way around), indicate that you didn't consider the interests of possible people/beings even to a minimum degree. Considering them to a minimum degree would mean being willing to at least take low-effort actions to ensure your choices aren't objectionable from their perspective (the perspective of possible people/beings). Adding someone with +1 when you could've easily added someone else with +100 just seems careless. If Amy and Bobby sat behind a veil of ignorance, not knowing which of them will be created with +1 or +100 (if someone gets created at all), the one view they would never advocate for is "only create the +1 person." If they favor anti-natalist views, they advocate for creating no one. If they favor totalist views, they'd advocate for creating both. If one favors anti-natalism and the other favors totalism, they might compromise on creating only the +100 person. So, most options here really are defensible, but you don't want to do the one thing that shows you weren't trying at all.

So, it would be bad to only create the +1 person, but it's not "99 units bad" in some objective sense, so this is not always the dominant concern and seems less problematic if we dial up the degree of effort that's needed to choose differently, or when there are externalities like "by creating Amy at +1 instead of Bob at +100, you create a lot of value for existing people." I don't remember if it was Parfit or Singer who first gave this example of delaying pregnancy for a short number of days (or maybe it was three months?) to avoid your future child suffering from a serious illness. There, it seems mainly objectionable not to wait because of how easy it would be to wait. (Quite a few people, when trying to have children, try for years, so a few months is not that significant.)

So, if you're at age 20 and contemplate having a child at happiness level 1, knowing that 15 years later they'll invent embryo-selection therapy to make new babies happier and guarantee happiness level 100, having only the child at 20 is a little selfish, but it's not like "wait 15 years," when you really want a child, is a low-effort accommodation. (Also, I personally think having children is under pretty much all circumstances "a little selfish," at least in the sense of "you could spend your resources on EA instead." But that's okay. Lots of things people choose are a bit selfish.) I think it would be commendable to wait, but not mandatory. (And as Michael St. Jules points out, not waiting is the issue here; after that's happened, it's done, and when you contemplate having a second child 15 years later, it's now a new decision and it no longer matters what you did earlier.)

And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.

The intentions are relevant here in the sense of: You should always act with the intention of at least taking low-effort ways to consider the interests of possible people/beings. It's morally frivolous if someone has children on a whim, especially if that leads to them making worse choices for these children than they could otherwise have easily made. But it's okay if the well-being of their future children was at least an important factor in their decision, even if it wasn't the decisive factor. Basically, "if you bring a child into existence and it's not the happiest child you could have, you better have a good reason for why you did things that way, but it's conceivable for there to be good reasons, and then it's okay."

  • We can think of this stance as analogous to: 
    • The utilitarian parent: “I care primarily about doing what’s best for humanity at large, but I wouldn’t want to neglect my children to such a strong degree that all defensible notions of how to be a decent parent state that I fucked up.”

I wonder if we don't mind people privileging their own children because:

  1. People love their kids too damn much and it just doesn't seem realistic for people to neglect their children to help others.
  2. A world in which it is normalised to neglect your children to "focus on humanity" is probably a bad world by utilitarian lights. A world full of child neglect just doesn't seem like it would produce productive individuals who can make the world great. So even on an impartial view we wouldn't want to promote child neglect.

Neither of these points is relevant in the case of privileging existing-and-sure-to-exist people/beings vs possible people/beings:

  1. We don't have some intense biologically-driven urge to help present people. For example, most people don't seem to care all that much that a lot of present people are dying from malaria. So focusing on helping possible people/beings seems at least feasible.
  2. We can't use the argument that it is better from an impartial view to focus on existing-and-sure-to-exist people/beings because of the classic 'future could be super-long' argument.

And when you say that a person with totalist/strong longtermist life goals also chooses between two separate values (what their totalist axiology says versus existing people), I'm not entirely sure that's true. Again, massive neglect of existing people just doesn't seem like it would work out well for the long term - existing people are the ones that can make the future great! So even pure strong longtermists will want some decent investment into present people.

We can't use the argument that it is better from an impartial view to focus on existing-and-sure-to-exist people/beings because of the classic 'future could be super-long' argument.

I'd say the two are tied contenders for "what's best from an impartial view." 

I believe the impartial view is under-defined for cases of population ethics, and both of these views are defensible options in the sense that some morally-motivated people would continue to endorse them even after reflection in an idealized reflection procedure.

For fixed population contexts, the "impartial stance" is arguably better defined and we want equal consideration of [existing] interests, which gives us some form of preference utilitarianism. However, once we go beyond the fixed population context, I think it's just not clear how to expand those principles, and Narveson's slogan isn't necessarily a worse justification than "the future could be super-long/big."

 In 5.2.3. Intermediate wide views, you write:

Views of this kind give more plausible verdicts in the previous cases – both the lever case and the enquiring friend case – but any exoneration is partial at best. The verdict in the friend case remains counterintuitive when we stipulate that your friend foresaw the choices that they would face. And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.

Rather than intentions as mere plans, I imagine this more like precommitment (maybe resolute choice?[1]), i.e. binding yourself (psychologically or physically) to deciding a certain way in the future and so preventing your future self from deviating from your plan. Precommitment is also a natural solution to avoid being left behind as Parfit's hitchhiker:

Suppose you're out in the desert, running out of water, and soon to die - when someone in a motor vehicle drives up next to you. Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions. The driver says, "Well, I'll convey you to town if it's in my interest to do so - so will you give me $100 from an ATM when we reach town?"

Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver. "Yes," you say. "You're lying," says the driver, and drives off leaving you to die.

In this case, your expectation to pay in town has to be accurate to ensure you get the ride, and if you can bind yourself to paying, then it will be accurate.[2]

 

I think this also gives us a solution to this point:

And if permissibility doesn’t depend on past choices, then it’s also wrong to pull the second lever in cases where we didn’t previously pull the first lever.

If you created Amy and had failed to bind yourself to creating Bobby by the time you created Amy, then the mistake was made in the past, not now, and you're now free to create or not create Bobby. After having created Amy, you have to condition on the state of the world (or your evidence about it), in which Amy already exists. She is no longer contingent, only Bobby is.

Similarly, with Parfit's hitchhiker, the mistake was made when negotiating before being driven, if you didn't bind yourself to paying when you get to town. But if you somehow already made it into town, then you don't have to pay the driver anymore, and it's better not to (by assumption).[3]

  1. ^

    I originally only wrote resolute choice, not precommitment, and then edited it to precommitment. I think precommitment is clearer and what I intended to describe. I'm less sure about resolute choice, but it is related.

  2. ^

I imagine you can devise similar problems for impartial views. You and the driver could both be impartial or even entirely unselfish, but have quite different moral views about what's best and disagree on how to best use your $100. Then this becomes a problem of cooperation or moral trade.

  3. ^

    If the driver is in fact 100% accurate, then you should expect to pay if you made it into town; you won't actually be empirically free to choose either way. Maybe the driver isn't 100% accurate, though, so you got lucky this time, and now don't have to pay.

In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.

Perhaps you're morally required to bind yourself in Two-Shot Non-Identity, but why? Binding yourself isn't better for Amy. And if it's better for Bobby, it seems that can only be because existing is better for Bobby than not-existing, and then there's pressure to conclude that we're required to create Bobby in Just Bobby, contrary to the claims of PAVs.

And suppose that (for whatever reason) you can't bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices. If the view says that you're required to create Bobby (having previously created Amy), permissibility depends on past choices. If the view says that you're permitted to decline to create Bobby (having previously created Amy), permissibility depends on lever-lashing (since, on wide views, you wouldn't be permitted to pull both levers if they were lashed together).

In Parfit's case, we have a good explanation for why you're rationally required to bind yourself: doing so is best for you.

The more general explanation is that it's best according to your preferences, which can also reflect or just be your moral views. It's not necessarily a matter of personal welfare, narrowly construed. We have similar thought experiments for total utilitarianism. As long as you

  1. expect your money to do more to further your own values/preferences in your hands than in the driver's,
  2. don't disvalue breaking promises (or don't disvalue it enough), and
  3. can't bind yourself to paying and know this,

then you'd predict you won't pay and be left behind.
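
For concreteness, here's a toy encoding of that prediction; the function and numbers are my own hypothetical illustration of conditions 1-3, not anything from the thought experiment itself:

```python
# Toy model of the generalized hitchhiker: do you pay the driver $100 in town?

def will_pay(value_of_own_spending, value_of_drivers_spending,
             disvalue_of_promise_breaking, can_bind):
    """Predict whether you'd hand over the $100 once you're in town."""
    if can_bind:
        return True  # precommitment removes the later choice entirely
    # In town, you pay only if paying is at least as good by your own lights.
    return (value_of_drivers_spending + disvalue_of_promise_breaking
            >= value_of_own_spending)

# Conditions 1-3 all hold: your money furthers your values more in your own
# hands (1), you don't disvalue breaking the promise (2), and you can't bind
# yourself (3) -- so you predictably won't pay, and you're left behind.
print(will_pay(1.0, 0.4, 0.0, can_bind=False))  # False
```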

Perhaps you're morally required to bind yourself in Two-Shot Non-Identity, but why?

Generically, if and because you hold a wide PAV, and it leads to the best outcome ahead of time on that view. There could be various reasons why someone holds a wide PAV. It's not about it being better for Bobby or Amy. It's better "for people", understood in wide person-affecting terms.

One rough argument for wide PAVs could be something like this, based on Frick, 2020 (but without asymmetry):

  1. If a person A existed, exists or will exist in an outcome,[1] then the moral standard of "A's welfare" applies in that outcome, and its degree of satisfaction is just A's lifetime (or future) welfare.
  2. Between two outcomes, X and Y, if 1) standard x applies in X and standard y applies in Y (and either x and y are identical standards or neither applies in both X and Y), 2) standards x and y are of "the same kind", 3) x is at least as satisfied in X as y is in Y, and 4) all else is equal, then X ≽ Y (X is at least as good as Y). (I sketch this in code just after this list.)
    1. If keeping promises matters in itself, then it's better to make a promise you'll keep than a promise you'll break, all else equal.
    2. With 1 (and assuming different people result in "the same kind" of welfare standards with comparable welfare), "Just Bobby" is better than "Just Amy", because the moral standard of Bobby's welfare would be more satisfied than the moral standard of Amy's welfare.
    3. This is basically Pareto for standards, but anonymous/insensitive to the specific identities of standards, as long as they are of "the same kind".
  3. It's not better (or worse) for a moral standard to apply than to not apply, all else equal.
    1. So creating Bobby isn't better than not doing so, unless we have some other moral standard(s) to tell us that.[2]
    2. This is similar to Existence Anticomparativism.
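
To make premise 2 concrete, here's a rough encoding as a comparison predicate. The representation (standards as kind/identity pairs, a satisfaction lookup that returns None where a standard doesn't apply) is my own illustrative choice, not Frick's:

```python
# Sketch of premise 2: X is at least as good as Y if a standard applying in
# X is of the same kind as, and at least as satisfied as, one applying in Y.

def at_least_as_good(X, Y, std_x, std_y, satisfaction):
    sx, sy = satisfaction(std_x, X), satisfaction(std_y, Y)
    if sx is None or sy is None:
        return False  # condition 1: each standard must apply in its outcome
    if std_x != std_y and (satisfaction(std_x, Y) is not None
                           or satisfaction(std_y, X) is not None):
        return False  # condition 1: distinct standards must not apply in both
    if std_x[0] != std_y[0]:
        return False  # condition 2: standards must be of "the same kind"
    return sx >= sy   # condition 3 (condition 4, all else equal, is assumed)

# Premise 2b: "Just Bobby" comes out at least as good as "Just Amy", since
# Bobby's welfare standard (100) is more satisfied than Amy's (1).
def satisfaction(std, outcome):
    table = {("Just Bobby", ("welfare", "Bobby")): 100,
             ("Just Amy", ("welfare", "Amy")): 1}
    return table.get((outcome, std))

print(at_least_as_good("Just Bobby", "Just Amy",
                       ("welfare", "Bobby"), ("welfare", "Amy"),
                       satisfaction))  # True
```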

And suppose that (for whatever reason) you can't bind yourself in Two-Shot Non-Identity, so that the choice to create Bobby (having previously created Amy) remains open. In that case, it seems like our wide view must again make permissibility depend on lever-lashing or past choices.

I would say the permissibility of choices depends on what options are still available, and so can change if options that were available before become unavailable. "Just Amy" can be impermissible ahead of time because "Just Bobby" is still available, and then become permissible after "Just Bobby" is no longer available. If Amy already exists as you assume, then "Just Bobby" is no longer available. I explain more here.

I guess that means it depends on lever-lashing? But if that's it, I don't find that very objectionable, and it's similar to Parfit's hitchhiker.

  1. ^

    As in the B-theory of time or eternalism.

  2. ^

    This would need to be combined with the denial of many particular standards, e.g. total welfare as a standard of "the same kind" across all populations. If we stop with only the standards in 1, then we just get anonymous Pareto, but this leaves many welfare tradeoffs between people incomparable. We could extend in various ways, e.g. for each set S of people who will ever exist in an outcome, the moral standard of S's total welfare applies, but it's only of "the same kind" for sets of people with the same number of people.

Also, if your intention wasn't really binding and you did abandon it, then you undermine your own ability to follow through on your own intentions, which can make it harder for you to act rightly and do good in the future. But this is an indirect reason.

Despite my specific responses, I want to make a general comment that I agree that these seem like good arguments against many person-affecting views, according to my own intuitions, which are indeed person-affecting. They also leave the space for plausible (to me) person-affecting accounts pretty small.

I think some of the remaining views, e.g. using something like Dasgupta's approach with resolute choice precommitments as necessary, can still be (to me) independently justified, too, but they also need to face further scrutiny.

I think an earlier comment you made on another post about Tomi's argument in section 3 helped me realize that something like Dasgupta's approach would be needed, and lots of person-affecting views would get ruled out.

Thanks! I'd like to think more at some point about Dasgupta's approach plus resolute choice. 

I wrote a bit more about Dasgupta's approach and how to generalize it here.

In section 3, you illustrate with Tomi's argument:

|   | One hundred people | Ten billion different people |
|---|--------------------|------------------------------|
| A | 40                 | -                            |
| B | 41                 | 41                           |
| C | 40                 | 100                          |

And in 3.1, you write:

How might advocates of PAVs respond to Tomi’s argument? One possibility is to claim that betterness is option-set dependent: whether an outcome X is better than an outcome Y can depend on what other outcomes are available as options to choose. In particular, advocates of PAVs could claim:

  • B is better than A when B and A are the only options
  • B is not better than A when C is also an option.

And advocates of PAVs could defend the second bullet-point in the following way: when C is available, B harms (or is unjust to) the ten billion extra people, because these extra people are better off in C. And this harm/injustice prevents B from being better than A.

And in 3.2 you explain why this isn't a good response. I mostly agree.

I think a better response is based on reasoning like the following:

If I were a member of A (and the hundred people are the same hundred people in A, B and C) and were to choose to bring about B, then I would realize that C would have been better for all of the now-necessary people (including the additional ten billion), so I would switch to C if able, or regret picking B over C. But C is worse than A for the necessary people, so, anticipating this reasoning from B to C, I rule out B ahead of time to prevent it.

In this sense, we can say B is not better than A when C is also an option.[1]

Something like Dasgupta’s method (Dasgupta, 1994 and Broome, 1996) can extend this. The idea is to first rule out any option that is impartially worse in a binary choice (pairwise comparison) than another option with exactly the same set of people (or the same number of people, under a wide view). This rules out B, because C is impartially better than it. This leaves a binary choice between A and C. Then you pick whichever remaining option is best for the necessary people (or rank the remaining options by how good they are for the necessary people); A and C are now equivalent for the necessary people, so either is fine.

(This can also be made asymmetric.)
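
Here's a minimal sketch of the two-stage procedure applied to Tomi's table. I'm assuming a total-welfare reading of "impartially worse" and treating the two groups as blocs; the encoding is mine, not Dasgupta's or Broome's:

```python
# Dasgupta-style two-stage choice on the A/B/C case.
# Stage 1: among options with the same set of people, rule out any that is
# impartially worse (here: lower total welfare) in a pairwise comparison.
# Stage 2: among survivors, keep whatever is best for the necessary people.

from itertools import combinations

SIZES = {"hundred": 100, "ten_billion": 10_000_000_000}
# Per-person welfare by group; None = the group never exists in that outcome.
OUTCOMES = {
    "A": {"hundred": 40, "ten_billion": None},
    "B": {"hundred": 41, "ten_billion": 41},
    "C": {"hundred": 40, "ten_billion": 100},
}

def people(outcome):
    return {group for group, welfare in outcome.items() if welfare is not None}

def total_welfare(outcome):
    return sum(SIZES[g] * w for g, w in outcome.items() if w is not None)

survivors = set(OUTCOMES)
for n1, n2 in combinations(OUTCOMES, 2):
    o1, o2 = OUTCOMES[n1], OUTCOMES[n2]
    if people(o1) == people(o2):  # only compare same-people options
        if total_welfare(o1) > total_welfare(o2):
            survivors.discard(n2)
        elif total_welfare(o2) > total_welfare(o1):
            survivors.discard(n1)

# Stage 1 rules out B (C has the same people and higher total welfare).
best = max(OUTCOMES[n]["hundred"] for n in survivors)
permissible = {n for n in survivors if OUTCOMES[n]["hundred"] == best}
print(sorted(survivors), sorted(permissible))  # ['A', 'C'] ['A', 'C']
```

Both A and C survive and are tied for the necessary hundred, so either is fine, matching the verdict above.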

  1. ^

    Or B is not more choiceworthy than A when C is also an option, if we want to avoid axiological claims?

Taken as an argument that B isn't better than A, this response doesn't seem so plausible to me. In favour of B being better than A, we can point out: B is better than A for all of the necessary people, and pretty good for all the non-necessary people. Against B being better than A, we can say something like: I'd regret picking B over C. The former rationale seems more convincing to me, especially since it seems like you could also make a more direct, regret-based case for B being better than A: I'd regret picking A over B.

But taken as an argument that A is permissible, this response seems more plausible. Then I'd want to appeal to my arguments against deontic PAVs.

A steelman could be to just set it up like a hypothetical sequential choice problem consistent with Dasgupta's approach:

  1. Choose between A and B
  2. If you chose B in 1, choose between B and C.

or

  1. Choose between A and (B or C).
  2. If you chose B or C in 1, choose between B and C.

In either case, "picking B" (including "picking B or C") in 1 means actually picking C, if you know you'd pick C in 2, and then use backwards induction.
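
A tiny sketch of that backwards-induction step, assuming (as in the setup) that your step-2 self would switch from B to C; the encoding is hypothetical, just to make the reasoning explicit:

```python
# Backwards induction on the two-step problem: evaluate each step-1 option
# by the outcome it actually leads to, given what you'd choose at step 2.

def step2(chosen):
    # At step 2, everyone in B and C is now necessary, and C is better for
    # them than B, so a sophisticated chooser switches from B to C.
    return "C" if chosen in ("B", "B or C") else chosen

endpoints = {option: step2(option) for option in ("A", "B")}
print(endpoints)  # {'A': 'A', 'B': 'C'}: "picking B" at step 1 really means C
```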

The fact that A is at least as good as (or not worse than and incomparable to) B could follow because B actually just becomes C, which is equivalent to A once we've ruled out B. It's not just facts about direct binary choices that decide rankings ("betterness"), but the reasoning process as a whole and how we interpret the steps.

At any rate, I don’t think it’s that important whether we interpret the rankings as "betterness", as usually understood, with its usual sensitivities and only those. I think you've set up a kind of false dichotomy between permissibility and betterness as usually understood. A third option is rankings not intended to be interpreted as betterness as usual. Or, we could interpret betterness more broadly.

Having separate rankings of options apart from or instead of strict permissibility facts can still be useful, say because we want to adopt something like a scalar consequentialist view over those rankings. I still want to say that C is "better" than B, which is consistent with Dasgupta's approach. There could be other options like A, with the same 100 people, but everyone gets 39 utility instead of 40, and another where everyone gets 20 utility instead. I still want to say 39 is better than 20, and ending up with 39 instead of 40 is not so bad, compared to ending up with 20, which would be a lot worse.

In 5.2.3. Intermediate wide views, you write:

If permissibility doesn’t depend on lever-lashing, then it’s also wrong to pull both levers when they aren’t lashed together.

Why wouldn't permissibility depend on lever-lashing under the intermediate wide views? The possible choices, including future choices, have to be considered together ahead of time. Lever-lashing restricts them, so it's a different choice situation. If we're person-affecting, we've already accepted that how we rank two options can depend on what other options are available (or we've rejected transitivity).

EDIT: I fleshed out an intermediate view here that I think avoids the objections in the post.

Yes, nice point. I argue against this kind of dependence in footnote 16 of the paper. Here's what I say there:

Here’s a possible reply, courtesy of Olle Risberg. What we’re permitted to do depends on lever-lashing, but not because lever-lashing precludes pulling the levers one after the other. Instead, it’s because lever-lashing removes the option to create both Amy and Bobby, and removes the option to create neither Amy nor Bobby. If we have the option to create both and the option to create neither, then creating just Amy is permissible. If we don’t have the option to create both or the option to create neither, then creating just Amy is wrong. 

This reply might have some promise, but it won’t appeal to proponents of wide views. To see why, consider the following four-button case. By pressing button 1, we create just Amy with a barely good life. By pressing button 2, we create just Bobby with a wonderful life. By pressing button 3, we create both Amy and Bobby. By pressing button 4, we create neither Amy nor Bobby. The reply implies that it’s permissible to create just Amy. That verdict doesn’t contradict the letter of wide views (at least given my definition in this paper), but it certainly contradicts their spirit.

EDIT: Actually my best reply is that just Amy is impermissible whenever just Bobby is available, ahead of time considering all your current and future options (and using backwards induction). The same reason applies for all of the cases, whether buttons, levers, or lashed levers.

EDIT2: I think I misunderstood and was unfairly harsh below.


I do still think the rest of this comment below is correct in spirit as a general response, i.e. a view can make different things impermissible for different reasons. I also think you should have followed up on your own reply to Risberg or anticipated disjunctive impermissibility in response, since it seems so obvious to me given its simplicity, and I think it's a pretty standard way to interpret (im)permissibility. I would guess Risberg would have pointed out the same (but maybe you checked?). Your response seems uncharitable/like a strawman.

Still, the reasons are actually the same across the cases here, but in a more sophisticated way that seems easier to miss, i.e. considering all future options ahead of time.

---

I agree that my/Risberg's reply doesn't help in this other case, but you can have different replies for different cases. In this other case, you just use the wide view's solution to the nonidentity problem, which tells you to not pick just Amy if just Bobby is available. Just Amy is ruled out for a different reason.

And the two types of replies fit together in a single view, which is a wide view considering the sequences of options ahead of time and using backwards induction (everyone should use backwards induction in (finite) sequential choice problems, anyway). This view will give the right reply when it's needed.

Or, you could look at it like if something is impermissible for any reason (e.g. via either reply), then it is impermissible period, so you treat impermissibility disjunctively. As another example, someone might say each of murder and lying are impermissible and for different reasons. The impermissibility of lying wouldn't "make" murder permissible. Different replies for different situations.

My understanding of a standard interpretation of (im)permissibility is that options are permissible by default, but then reasons rule out some options as impermissible. Reasons don't "make" options permissible; they can only count against. So, impermissibility is disjunctive, and permissibility is conjunctive.

[This comment is no longer endorsed by its author]

Here's my understanding of the dialectic here:

Me: Some wide views make the permissibility of pulling both levers depend on whether the levers are lashed together. That seems implausible. It shouldn't matter whether we can pull the levers one after the other.

Interlocutor: But lever-lashing doesn't just affect whether we can pull the levers one after the other. It also affects what options are available. In particular, lever-lashing removes the option to create both Amy and Bobby, and removes the option to create neither Amy nor Bobby. So if a wide view has the permissibility of pulling both levers depend on lever-lashing, it can point to these facts to justify its change in verdicts. These views can say: it's permissible to create just Amy when the levers aren't lashed because the other options are on the table; it's wrong to create just Amy when the levers are lashed because the other options are off the table.

Me: (Side note: this explanation doesn't seem particularly satisfying. Why does the presence or absence of these other options affect the permissibility of creating just Amy?). If that's the explanation, then the resulting wide view will say that creating just Amy is permissible in the four-button case. That's against the spirit of wide PAVs, so wide views won't want to appeal to this explanation to justify their change in verdicts given lever-lashing. So absent some other explanation of some wide views' change in verdicts occasioned by lever-lashing, this implausible-seeming change in verdicts remains unexplained, and so counts against these views.

Ah, I should have read more closely. I misunderstood and was unnecessarily harsh. I'm sorry.

I think your response to Risberg is right.

I would still say that permissibility could depend on lever-lashing (in some sense?) because it affects what options are available, though in a different way. Here is the view I'd defend:

Ahead of time, any remaining option or sequence of choices that ends up like "Just Amy" will be impermissible if there's an available option or sequence of choices that ends up like "Just Bobby" (assuming no uncertainty). Available options/sequences of choices are otherwise permissible by default.

Here are the consequences in your thought experiments:

  1. In the four button case, the "Just Amy" button is impermissible, because there's a "Just Bobby" button.
  2. In the lashed levers case, it's impermissible to pull either, because this would give "Just Amy", and the available alternative is "Just Bobby".
  3. In the unlashed levers case, 
    1. Ahead of time, each lever is permissible to pull and permissible to not pull, as long as you won't pull both (or leave both pulled, in case you can unpull). Ahead of time, pulling both levers is impermissible, because that would give "Just Amy", and "Just Bobby" is still available. This agrees with 1 and 2.
    2. But if you have already pulled one lever (and this is irreversible), then "Just Bobby" is no longer available (either Amy is/will be created, or Bobby won't be created), and pulling the other is permissible, which would give "Just Amy". "Just Amy" is therefore permissible at this point.

As we see in 3.b, "Just Bobby" gets ruled out, and "Just Amy" becomes permissible after and because of that, not before. Permissibility depends on what options are still available, specifically on whether "Just Bobby" is still available in these thought experiments. "Just Bobby" is still available in 2 and 3.a.
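
For concreteness, here's a minimal sketch of that availability-sensitive rule in code (the labels and the endpoint encoding are my own hypothetical choices):

```python
# Sketch: an option ending in "Just Amy" is impermissible iff some
# still-available option or sequence of choices ends in "Just Bobby".

def permissible(endpoint, available_endpoints):
    return not (endpoint == "Just Amy" and "Just Bobby" in available_endpoints)

# Four buttons / lashed levers / unlashed levers ahead of time:
print(permissible("Just Amy", {"Just Amy", "Just Bobby", "Both", "Neither"}))  # False
# Unlashed levers after irreversibly pulling the first lever: "Just Bobby"
# is no longer reachable, so completing "Just Amy" becomes permissible.
print(permissible("Just Amy", {"Just Amy", "Neither"}))  # True
```

The same one-line rule reproduces the verdicts in 1, 2, 3.a and 3.b: only the set of still-available endpoints changes.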

In your post, you wrote:

Pulling both levers should either be permissible in both cases or wrong in both cases.

This is actually true ahead of time, in 2 and 3.a, where pulling both together is impermissible. But having already pulled a lever, pulling the other is permissible, as in 3.b.

Maybe this is getting pedantic and off-track, but "already having pulled a lever" is not an action available to you; it's just a state of the world. Similarly, "pulling both levers" is not an action available to you after you've pulled one; you only get to pull the other lever. "Pulling both levers" (lashed or unlashed) and "pulling the other lever, after already having pulled one lever" have different effects on the world, i.e. the first creates Amy and prevents Bobby, while the second only does one of the two. I don't think it's too unusual to be sensitive to these differences. Different effects -> different evaluations.

Still, the end state "Just Amy" itself later becomes permissible/undominated without lever-lashing, but is impermissible/dominated ahead of time or with lever-lashing.

Interesting article!

I think my issue with the argument in section 3 is that it puts real and hypothetical people on the same footing, which is the very thing that PAV rejects. 

If you label the left half of the table "100 real people" and the right half "ten billion hypothetical people", then from the perspective of a PAV in world A, B is preferable to A, but C is worse than B, because the hypothetical people don't count. If you think we'll end up in world B, then triggering world B is worth it because it makes existing people happy, but if you think world B will turn into world C later, then we're back to neutral because ultimately it makes no difference to real people.  

But if someone has already gone ahead and brought about world B, then we have a different equation: now both sides of the table are talking about real people, so C becomes preferable. The 10 billion don't enter the moral equation until they already exist (or are sure to exist). 

The other side of this, I'd say, is that deciding not to bring someone into existence is always morally neutral. But if you do decide to bring someone into existence, then you have obligations towards them to make their life worth living.

Yes, nice points. If one is committed to contingent people not counting, then one has to say that C is worse than B. But it still seems to me like an implausible verdict, especially if one of B and C is going to be chosen (and hence those contingent people are going to become actual). 

It seems like the resulting view also runs into problems of sequential choice. If B is best out of {A, B, C}, but C is best out of {B, C}, then perhaps what you're required to do is initially choose B and then (once A is no longer available) later switch to C, even if doing so is costly. And that seems like a bad feature of a view, since you could have costlessly chosen C in your first choice.

I think you'd still just choose A at the start here if you're considering what will happen ahead of time and reasoning via backwards induction on behalf of the necessary people. (Assuming C is worse than A for the original necessary people.)

If you don't use backwards induction, you're going to run into a lot of suboptimal behaviour in sequential choice problems, even if you satisfy expected utility theory axioms in one-shot choices.

Coming back to this, since I've recently become more sympathetic to (asymmetric) narrow person-affecting views, because of this and my sympathy to actualism.

5.1. A trilemma for narrow views

Here’s a problem for narrow views. Consider:

Expanded Non-Identity

(1) Amy 1

(2) Bobby 100

(3) Amy 10, Bobby 10

(...)

Only option (2) is permissible

Now we can complete the trilemma for narrow views. If neither of (1) and (3) is permissible in Expanded Non-Identity, it must be that only (2) is permissible. But if only (2) is permissible, then narrow views imply:

Losers Can Dislodge Winners:

Adding some option X to an option set can make it wrong to choose a previously-permissible option Y, even though choosing X is itself wrong in the resulting option set.[10]

That’s because narrow views imply that each of (1) and (2) is permissible in One-Shot Non-Identity. So if only (2) is permissible in Expanded Non-Identity, then adding (3) to our option set has made it wrong to choose (1) even though choosing (3) is itself wrong in Expanded Non-Identity.

That’s a peculiar implication. It’s a deontic version of an old anecdote about the philosopher Sidney Morgenbesser. Here’s how that story goes. Morgenbesser is offered a choice between apple pie and blueberry pie, and he orders the apple. Shortly after, the waiter returns to say that cherry pie is also an option, to which Morgenbesser replies, ‘In that case, I’ll have the blueberry.’

I suspect this is a misleading analogy. In the case of pies, you haven't given any reason why they would change their mind, and it's hard to imagine one to which anyone would be sympathetic (but maybe someone could have reasons, and then it's not my place to judge them!). That could explain its apparent peculiarity. It's just not very psychologically plausible, because people don't think of pies or food in that way in practice.

But we have an argument for why we would change our mind in the expanded non-identity case: we follow the logic of narrow person-affecting views (with those implications), to which we are sympathetic. If the reasons for such a narrow person-affecting view seem to someone to be good, then the implications shouldn't seem peculiar.

 

The pattern is even stranger in our deontic case.

I'd say it's less strange, because we already have a more psychologically plausible explanation, i.e. person-affecting intuitions. Why do you think it's stranger?

 

Imagine instead that the waiter is offering Morgenbesser the options in Expanded Non-Identity.[11] Initially the choice is between (1) and (2), and Morgenbesser permissibly opts for (1). Then the waiter returns to say that (3) is also an option, to which Morgenbesser replies, ‘In that case, I’m morally required to switch to (2).’ The upshot is that the waiter can force Morgenbesser’s hand by adding options that are wrong to choose in the resulting option set. And turning the case around, the waiter could expand Morgenbesser’s menu of permissible options by taking wrong options off the table. That seems implausible.

I think this is too quick, and, from my perspective, i.e. with my intuitions, a mistake.

  1. I don't find the implications implausible or very counterintuitive (perhaps for the reasons below).
  2. A different way of framing this is that the waiter is revealing information about which options are permissible. The waiter has private information, i.e. whether or not a given option will be available, which decides which ones are permissible. In general, when someone has private information about your options (or their consequences), they can force you to reevaluate your options and force your hand by revealing the info. The narrow person-affecting response is a special case of that. So, your argument would prove too much: it would say it's implausible to have your hand forced by the revelation of private information, which is obviously not true. (And I think there's no Dutch book or money pump with foreseeable loss here; you just have to be a sophisticated reasoner and anticipate what the waiter will do, and recognize what your actual option set will be.)
  3. Another framing is basically the one by Lukas, or the object version of preferentialism/participant model of Rabinowicz & Österberg, 1996. You're changing the perspectives or normative stances you take, depending on who comes to exist. It's not surprising that you would violate the independence of irrelevant alternatives in certain ways, when you have to shift perspectives like this, and it just follows on specific views.
  4. In general, I think it's somewhat problematic/uncharitable to call something implausible, or to say it "seems implausible", and end the discussion there, because people vary substantially in what they find implausible, counterintuitive, etc. When someone does this, I get the impression that they take their arguments to be more universally appealing (or "objective") than they actually are. Unless they make clear they're speaking only for themselves. Maybe "seems" should normally be understood as speaking only for yourself and your own intuitions, but I'd find this less frustrating if it were made explicit.

 

I do wonder if your example suggests that in practice you should often or usually act like you hold a wide view, though. If you're indifferent between (1) Amy at 1 and (2) Bobby at 100 when they are (so far) the only two options, you should anticipate that (3) or similar options might become available, and so opt for (2) just in case.

Larry Temkin has noted an independent reason for doubting the person-affecting restriction stated in section 2.1. Suppose on a wellbeing scale of 1-100 we can create either

A. Kolya, Lev and Maksim, each on 50 or

B. Katya on 40, Larissa on 50 and Maria on 60.

Many would think A better than B, either because it is more equal or because it is better for the worse-off (understood de dicto). But it is not better for any particular person.

Interesting, thanks! I hadn't come across this argument before.

It's in his book Inequality, chapter 9. Ingmar Persson makes a similar argument about the priority view here: https://link.springer.com/article/10.1023/A:1011486120534.

Looking forward to reading this. A quick note: in "3. Tomi’s argument that creating happy people is good", your introductory text doesn't match what is in the table.


 

Thanks, fixed now!

Super interesting, Elliott (though, of course, you must be wrong!) 

Ha, thanks!

Executive summary: The author presents several arguments against person-affecting views in population ethics, concluding that creating happy people is good and creating happier people is better.

Key points:

  1. If person-affecting views are true, the EA community may be spending too much on reducing existential risk. If false, much more should be spent.
  2. The simple argument against person-affecting views says creating happy people brings more good things into the world. Classic responses to this argument lead to other counterintuitive conclusions.
  3. Tomi Francis provides two additional arguments showing that creating happy people is good and creating happier people is better.
  4. Narrow person-affecting views face a trilemma in the author's Expanded Non-Identity case, with each option leading to an implausible implication.
  5. Wide person-affecting views make permissibility depend on seemingly irrelevant factors in the author's Two-Shot Non-Identity sequential choice case.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
