How bad would it be to cause human extinction? ‘If we do not soon destroy ourselves’, write Carl Sagan and Richard Turco, ‘but instead survive for a typical lifetime of a successful species, there will be humans for another 10 million years or so. Assuming that our lifespan and numbers do not much grow over that period, the cumulative human population—all of us who have ever lived—would then reach the startling total of about a quadrillion (a 1 followed by 15 zeros). So, if nuclear winter could work our extinction, it is something like a million times worse (a quadrillion divided by a billion) than the direct effects of nuclear war—in terms of the number of people who would thereby never live.’
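Spelled out, the quoted comparison is just the following ratio (my restatement of the figures Sagan and Turco use, not part of the quote):

$$\frac{10^{15}\ \text{people who would ever live}}{10^{9}\ \text{people killed directly}} = 10^{6},$$

that is, extinction forecloses roughly a million times as many lives as the direct effects alone.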
You may agree that this would be far worse than killing ‘only’ eight billion people, and that it makes it much more important to avoid even the risk of doing so. That’s certainly the view of leading longtermists. But then you’ve probably had the experience of arguing with people who don’t accept this claim at all. Trying to derive it from total utilitarianism—seemingly the most straightforward approach—runs into notorious difficulties. Many philosophers deny it. Instead, like many laypeople, they accept what John Broome calls the ‘intuition of neutrality’: ‘for a wide range of levels of lifetime wellbeing, between a bad life and a very good life, we intuitively think that adding a person at that level is neutral.’
Broome thinks the intuition of neutrality must be wrong, and offers some proofs. I think there’s a simpler reason to doubt it. (N.B.: I'll bracket the effects of our survival on non-humans.) Suppose a government is considering developing vaccines against two strains of flu. If the first mutates and crosses into the human population, it will kill seven billion people immediately. After that, most people will develop immunity, but it will still kill ten million people a year for the next thousand years. If the second virus mutates and crosses into the human population, it will kill everybody on earth. Each virus is estimated to have a 1/1000 chance of mutating.
Most of us will agree—I hope—that the government shouldn’t discount the ten billion future deaths that the first virus would cause just because they would arrive in the future. It should count the expected deaths from an outbreak as 17 million (1/1000 × 17 billion). In contrast, the expected deaths if the second virus breaks out are only 8 million (1/1000 × 8 billion). If additional human lives have no value in themselves, that implies that the government would have more reason to take precautionary measures against a virus that would kill most of us than against one that would kill all of us, even though the probabilities are equal. If it could only afford to develop vaccines against one of them, it should choose the first.
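For anyone who wants the expected-value arithmetic laid out, here is a minimal sketch (the variable names are mine; the figures are just the ones in the scenario above):

```python
# Back-of-the-envelope expected-deaths calculation for the two viruses.
# Expected deaths = probability of mutation x deaths if the outbreak happens.

p_mutation = 1 / 1000  # each virus's estimated chance of mutating

# Virus 1: 7 billion immediate deaths, then 10 million a year for 1,000 years.
deaths_virus_1 = 7_000_000_000 + 10_000_000 * 1000   # 17 billion in total
expected_1 = p_mutation * deaths_virus_1              # 17 million

# Virus 2: kills everybody on earth (roughly 8 billion people).
deaths_virus_2 = 8_000_000_000
expected_2 = p_mutation * deaths_virus_2              # 8 million

print(f"{expected_1:,.0f} vs {expected_2:,.0f}")      # 17,000,000 vs 8,000,000
```

On the neutrality view, this comparison of expected deaths is all that matters, which is why it ends up ranking the first virus as the more urgent one.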
That seems to me a reductio. Do you agree? Or am I missing something?
Postscript: Judging by the first two comments on this post, I must have failed to make myself clear. I believe the second scenario is at least as bad as the first, and that this undermines the ‘intuition of neutrality’. See my reply below.
Maybe I'm misunderstanding, but conditional on the given virus mutating, virus 2 kills more present/necessary people, so we'd want to prevent it.
EDIT: It looks like you pointed out something similar here.
I don't think other things are equal on the intuition of neutrality once you've said there are more deaths in A than in B. The lives and deaths of the contingent/future people in A wouldn't count at all on symmetric person-affecting views (narrow or wide). On some asymmetric person-affecting views they might count, but the bad lives count fully, while the additional good lives can only offset (possibly fully offset), never outweigh, the additional bad lives, so the extra lives and deaths need not count on net.