How bad would it be to cause human extinction? ‘If we do not soon destroy ourselves’, write Carl Sagan and Richard Turco, ‘but instead survive for a typical lifetime of a successful species, there will be humans for another 10 million years or so. Assuming that our lifespan and numbers do not much grow over that period, the cumulative human population—all of us who have ever lived—would then reach the startling total of about a quadrillion (a 1 followed by 15 zeros). So, if nuclear winter could work our extinction, it is something like a million times worse (a quadrillion divided by a billion) than the direct effects of nuclear war—in terms of the number of people who would thereby never live.’
You may agree that this would be far worse than killing ‘only’ eight billion people, and that it makes it much more important to avoid even a risk of doing so. That’s certainly the view of leading longtermists. But then you’ve probably had the experience of arguing with people who don’t accept this claim at all. Trying to derive it from total utilitarianism—seemingly the most straightforward approach—runs into notorious difficulties. Many philosophers deny it. Instead, like many laypeople, they accept what John Broome calls the ‘intuition of neutrality’: ‘for a wide range of levels of lifetime wellbeing, between a bad life and a very good life, we intuitively think that adding a person at that level is neutral.’
Broome thinks the intuition of neutrality must be wrong, and offers some proofs. I think there’s a simpler reason to doubt it. (N.B.: I'll bracket the effects of our survival on non-humans.) Suppose a government is considering developing vaccines against two strains of flu. If the first mutates and crosses into the human population, it will kill seven billion people immediately. After that, most people will develop immunity, but it will still kill ten million people a year for the next thousand years. If the second virus mutates and crosses into the human population, it will kill everybody on earth. Each virus is estimated to have a 1/1000 chance of mutating.
Most of us will agree—I hope—that the government shouldn’t discount the ten billion future deaths that the first virus would cause just because they would arrive in the future. It should count the expected deaths from an outbreak as 17 million (1/1000 x 17 billion). In contrast, the expected deaths if the second virus breaks out are only 8 million (1/1000 x 8 billion). If additional human lives have no value in themselves, that implies that the government has more reason to take precautionary measures against a virus that would kill most of us than against one that would kill all of us, even though the probabilities are equal. If it could only afford to develop vaccines against one of them, it should choose the first.
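For anyone who wants to check the arithmetic, here is a minimal sketch. The figures are the ones stipulated in the scenario above; taking total extinction to be 8 billion deaths (with no further future deaths counted) is the assumption the argument targets.

```python
# Expected-death arithmetic for the two flu scenarios described above.
# Virus 1: 7 billion immediate deaths, then 10 million per year for 1,000 years.
virus1_deaths = 7_000_000_000 + 10_000_000 * 1_000  # 17 billion in total
# Virus 2: kills everybody on earth, taken here as 8 billion.
virus2_deaths = 8_000_000_000

# Each virus has a 1-in-1,000 chance of mutating and crossing over.
expected1 = virus1_deaths / 1_000  # 17 million
expected2 = virus2_deaths / 1_000  # 8 million

print(f"Virus 1 expected deaths: {expected1:,.0f}")
print(f"Virus 2 expected deaths: {expected2:,.0f}")
```

On these numbers, the expected death toll from the first virus is more than twice that from the second, which is what drives the apparent reductio.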
That seems to me a reductio. Do you agree? Or am I missing something?
Postscript: Judging by the first two comments on this post, I must have failed to make myself clear. I believe the second scenario is at least as bad as the first, and that this undermines the ‘intuition of neutrality’. See my reply below.
Do you intend for the population to recover in B, or is it extinction with no future people? In the post, you write that the second virus "will kill everybody on earth". I'd assume that means extinction.
If B (killing 8 billion necessary people) does mean extinction and you think B is better than A, then you prefer extinction to extra future deaths. And your argument seems to generalize: e.g., we should just go extinct now to prevent the deaths of future people. If they're never born, they can't die. You'd be assigning negative value to additional deaths, but no positive value to additional lives. The view would be antinatalist.
Or, if you think B is just no worse than A (equivalent or incomparable), then extinction is permissible in order to prevent the deaths of future people.
If you allow population recovery in B, then (symmetric) wide person-affecting views can say B is better than A, although it could depend on how many future/contingent people will exist in each scenario. If the number is the same or larger in B and dying earlier is worse than dying later, then B would be better. If the number is lower in B, then you may need to discount some of the extra early deaths in A.