How bad would it be to cause human extinction? ‘If we do not soon destroy ourselves’, write Carl Sagan and Richard Turco, ‘but instead survive for a typical lifetime of a successful species, there will be humans for another 10 million years or so. Assuming that our lifespan and numbers do not much grow over that period, the cumulative human population—all of us who have ever lived—would then reach the startling total of about a quadrillion (a 1 followed by 15 zeros). So, if nuclear winter could work our extinction, it is something like a million times worse (a quadrillion divided by a billion) than the direct effects of nuclear war—in terms of the number of people who would thereby never live.’
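Sagan and Turco’s back-of-the-envelope arithmetic is easy to reproduce. In the rough sketch below, the constant birth rate of 100 million a year is my own assumption, chosen only because it recovers their quadrillion total, and the billion direct deaths is the figure their ‘quadrillion divided by a billion’ implies:

```python
# Rough reproduction of Sagan and Turco's figures (assumed inputs, not theirs verbatim).
species_lifetime_years = 10_000_000      # 'there will be humans for another 10 million years or so'
births_per_year = 100_000_000            # assumed constant birth rate (my stipulation)
direct_nuclear_deaths = 1_000_000_000    # roughly a billion direct deaths, as their ratio implies

cumulative_population = species_lifetime_years * births_per_year
print(f"Cumulative future population: {cumulative_population:.0e}")  # 1e+15, about a quadrillion
print(f"Times worse than the direct effects: {cumulative_population / direct_nuclear_deaths:.0e}")  # 1e+06
```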
You may agree that this would be far worse than killing ‘only’ eight billion people, and that it makes it much more important to avoid even the risk of doing so. That’s certainly the view of leading longtermists. But you’ve probably had the experience of arguing with people who don’t accept this claim at all. Trying to derive it from total utilitarianism—seemingly the most straightforward approach—runs into notorious difficulties. Many philosophers deny it. Instead, like many laypeople, they accept what John Broome calls the ‘intuition of neutrality’: ‘for a wide range of levels of lifetime wellbeing, between a bad life and a very good life, we intuitively think that adding a person at that level is neutral.’
Broome thinks the intuition of neutrality must be wrong, and offers some proofs. I think there’s a simpler reason to doubt it. (N.B.: I’ll bracket the effects of our survival on non-humans.) Suppose a government is considering developing vaccines against two strains of flu. If the first mutates and crosses into the human population, it will kill seven billion people immediately. After that, most people will develop immunity, but it will still kill ten million people a year for the next thousand years. If the second virus mutates and crosses into the human population, it will kill everybody on earth. Each virus is estimated to have a 1/1000 chance of mutating and crossing over.
Most of us will agree—I hope—that the government shouldn’t discount the ten billion future deaths that the first virus would cause just because they would arrive in the future. It should count the expected deaths from the first virus as 17 million (1/1000 × 17 billion). In contrast, the expected deaths from the second virus are only 8 million (1/1000 × 8 billion). If additional human lives have no value in themselves, it follows that the government has more reason to take precautionary measures against a virus that would kill most of us than against one that would kill all of us, even though the probabilities are equal. If it could only afford to develop vaccines against one of them, it should choose the first.
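To make the bookkeeping explicit, here is a minimal back-of-the-envelope check using only the numbers stipulated above (the variable names are mine, for illustration only):

```python
# Expected deaths from each virus, counting deaths only and giving no
# independent value to the lives that would never be lived.
p_outbreak = 1 / 1000

deaths_virus_1 = 7_000_000_000 + 10_000_000 * 1_000   # 7bn at once, then 10m a year for 1,000 years = 17bn
deaths_virus_2 = 8_000_000_000                         # everybody on earth

print(f"Virus 1: {p_outbreak * deaths_virus_1:,.0f} expected deaths")  # 17,000,000
print(f"Virus 2: {p_outbreak * deaths_virus_2:,.0f} expected deaths")  # 8,000,000
```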
That seems to me a reductio. Do you agree? Or am I missing something?
Postscript: Judging by the first two comments on this post, I must have failed to make myself clear. I believe the second scenario is at least as bad as the first, and that this undermines the ‘intuition of neutrality’. See my reply below.
Thanks--that's very helpful. On a wide person-affecting view, A would be worse, but if we limit our analysis to present/necessary people, then outcome B would be worse. That had not occurred to me, probably because I find narrow person-affecting views so implausible.
However, it doesn't seem very damaging to my argument. If we take a hardcore narrow person-affecting view, the extra ten billion deaths shouldn’t count at all in our assessment. But surely that's very hard to believe.
Alternatively, if we adopt what Parfit calls a ‘two-tier view’, then we’d give some weight to the deaths of the contingent people in scenario A, but less than to the deaths of present/necessary people. Even if we discounted them by a factor of five, however, scenario A would still be worse than scenario B. What is more, we can adjust the numbers:
Scenario A: Seven billion necessary people die immediately, and ten million die annually for the next 10,000 years, for a total of 107 billion deaths. Most of the future victims are contingent people.
Scenario B: Eight billion die at once. All are necessary people.
On the two-tier view, the deaths of necessary people would have to be more than a hundred times as bad as those of contingent people for B to be worse (see the sketch below). That is hard to believe.
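Here is a minimal sketch of that two-tier bookkeeping, under the assumption that a necessary person’s death counts w times as much as a contingent person’s (the helper function and weighting scheme are my own illustrative choices, not Parfit’s formulation):

```python
# Weighted death toll: necessary deaths count w times as much as contingent deaths.
def weighted_toll(necessary_deaths, contingent_deaths, w):
    return w * necessary_deaths + contingent_deaths

# Original numbers, discounting contingent deaths by a factor of five (w = 5).
a = weighted_toll(7_000_000_000, 10_000_000 * 1_000, w=5)   # scenario A: 7bn necessary + 10bn contingent
b = weighted_toll(8_000_000_000, 0, w=5)                    # scenario B: 8bn necessary
print(a > b)   # True: A still comes out worse than B

# Adjusted numbers: 10m deaths a year for 10,000 years, i.e. 100bn contingent deaths.
# B is worse than A only if 8bn * w > 7bn * w + 100bn, i.e. only if w > 100.
for w in (100, 101):
    a = weighted_toll(7_000_000_000, 10_000_000 * 10_000, w)
    b = weighted_toll(8_000_000_000, 0, w)
    print(w, b > a)   # 100 -> False, 101 -> True
```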
Bottom line: neither a narrow person-affecting view nor a two-tier view offers a plausible way of accommodating the judgement that B is at least as bad as A.