Many (e.g. Parfit, Bostrom, and MacAskill) have convincingly argued that we should care about future people (longtermism), and thus that extinction is as bad as the loss of 10^35 lives, or possibly much worse, because there might be 10^35 humans yet to be born.
I believe, with medium confidence, that these numbers are far too high, and that when fertility patterns are fully accounted for, 10^35 might become 10^10, approximately the current human population. I believe with much stronger confidence that EAs should be explicit about the assumptions underlying numbers like 10^35, because concern for future people is necessary but not sufficient for such claims.
I first defend these claims, then offer some ancillary thoughts about implications of longtermism that EAs should take more seriously.
Extinction isn’t that much worse than 1 death
The main point is that if you kill a random person, you kill off all of their descendants too. Since the average person is responsible for roughly 1/(current human population) ≈ 1/10^10 of the 10^35 future lives, their death is ~10^10 times less bad than extinction.
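A minimal sketch of that arithmetic, assuming (as above) ~10^10 people alive today, ~10^35 future lives, and no Malthusian rebound:

```python
# Back-of-the-envelope: how much worse is extinction than one random death?
# Assumptions (from the post): ~1e35 future lives, ~1e10 people alive today,
# and no Malthusian rebound after a death.
FUTURE_LIVES = 1e35
CURRENT_POPULATION = 1e10

# Future lives attributable to the average person alive today
lives_per_person = FUTURE_LIVES / CURRENT_POPULATION      # ~1e25

# Ratio of harms: extinction vs. one death
ratio = FUTURE_LIVES / lives_per_person                   # ~1e10, i.e. the current population
print(f"extinction is ~{ratio:.0e} times worse than one death")
```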
The general response to this is a form of Malthusianism: that after a death, the human population regains its previous level because fertility increases. Given that current fertility rates are below 2 in much of the developed world, I have low confidence this claim is true. More importantly, you need high credence in some type of Malthusianism to bump up the 10^10 number significantly. If Malthusianism is 99% likely to be correct, extinction is only ~10^12 times worse than one death. Letting X be the harm of extinction, with X arbitrarily large: there is a 99% chance one death is infinitely less bad than extinction, and a 1% chance extinction is only 10^10 times worse, so the expected harm of one death is 0.99 * (0 * X) + 0.01 * (1/10^10 * X) = (1/10^12) * X.
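A minimal sketch of that credence-weighted version, using the 99% figure above and treating a death as costless (relative to extinction) in the Malthusian world:

```python
# Credence-weighted harm of one death, as a fraction of the harm of extinction X.
# Assumption (from the post): if Malthusianism holds, the population fully rebounds,
# so one death costs ~0 future lives; if it doesn't, one death costs 1/1e10 of the future.
p_malthus = 0.99                 # credence that Malthusianism is correct
fraction_if_malthus = 0.0        # death is ~infinitely less bad than extinction
fraction_if_not = 1 / 1e10       # death wipes out ~1/1e10 of future lives

expected_fraction = p_malthus * fraction_if_malthus + (1 - p_malthus) * fraction_if_not
print(f"one death is ~{expected_fraction:.0e} of an extinction")      # ~1e-12
print(f"so extinction is ~{1 / expected_fraction:.0e} times worse")   # ~1e12
```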
There are many other claims one could make about the above. Some popular ones involve digital people, simulated lives, and artificial uteruses. I don't have developed thoughts on how these technologies interact with fertility rates, but the same point about needing high credence applies to them too. More importantly, if any of these or similar claims is the linchpin of the argument for why extinction should be a main priority, EAs should say so explicitly, because none of them is that obvious. Even Malthusian-type claims should be made more explicit.
Finally, I think arguments for why extinction might be less than 10^10 times worse than one death are often ignored. I'll point out two. First, people can have large positive externalities on others' lives, and on future people's lives, by sharing ideas; fewer people means less of that externality. Second, the insecurity that can follow from seeing another person's death might lower fertility and thus reduce the number of future lives.
Other implications of longtermism
I'd like to end by zooming out on longtermism as a whole. The idea that future people matter is a powerful claim and opens a deep rabbit hole. In my view, EAs have found the first exit out of the rabbit hole—that extinction might be really bad—and left even more unintuitive implications buried below.
A few of these:
- Fertility might be an important cause area. If you can raise the fertility rate by 1% for one generation, you increase the total future population by about 1% (assuming away Malthusianism and similar claims). If you can effect a long-term shift in fertility rates (for example, through genetic editing), you could do much, much better: generation n would be larger by [1.01^n - 1] x 100% rather than by 1%, i.e. roughly 100 x [1.01^n - 1] times better, where n is the number of future generations, which is a very large number (see the sketch after this list).
- Maybe we should prioritize young lives over older lives. Under longtermism, most of the value a person carries comes through their progeny. If there are 10^35 people yet to live, saving the life of someone who will have kids is upwards of 10^25 times more valuable than saving the life of someone who won't.
- Abortion might be a great evil. See the first bullet: no matter your view on whether an unborn baby is a life, banning abortion could easily effect a significant and long-term increase in the fertility rate.
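Here is a rough sketch of the arithmetic behind the fertility bullet, under illustrative assumptions of my own (a constant baseline generation size, no Malthusian correction, and n = 1000 future generations):

```python
# Rough comparison: one-off 1% fertility bump vs. a permanent 1% shift.
# Assumptions (illustrative, not from the post): constant baseline generation size,
# no Malthusian correction, n future generations.
n = 1000        # illustrative; the post treats n as very large
boost = 0.01    # a 1% increase in the fertility rate

# One-generation bump: every subsequent generation is ~1% larger.
gen_n_gain_one_off = boost

# Permanent shift: generation k is (1 + boost)**k times its baseline size,
# so generation n is larger by (1.01**n - 1).
gen_n_gain_permanent = (1 + boost) ** n - 1

ratio = gen_n_gain_permanent / gen_n_gain_one_off    # = 100 * (1.01**n - 1)
print(f"one-off bump:    generation {n} is +{gen_n_gain_one_off:.0%}")
print(f"permanent shift: generation {n} is +{gen_n_gain_permanent:,.0%}")
print(f"ratio: ~{ratio:,.0f}x better")
```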
Ah nice, thanks for explaining! I'm not following all the calculations still, but that's on me, and I think they're probably right.
But I don't think your argument is actually that relevant to what we should do, even if it's right. That's because we don't care about how good our actions are as a fraction or multiple of what our other options achieve. Instead, we just want to do whatever leads to the best expected outcomes.
Suppose there's a hypothetical world where there's a one-in-ten chance the total population is a billion, and a 90% chance the population is two. And suppose we have two options: save one person, or save half the people.
In that case, the expected number of people saved by the "save half" option would be 0.9*1 + 0.1*500,000,000 ≈ 50,000,001, compared to an expected value of 1 for saving one person. Imo, this is a strong reason for picking the "save half the people" option.
But the expected fraction of people saved paints a quite different picture. The "save half" option always results in half being saved. And the expected fraction for the "save one" option is also fairly close to half: 0.9*0.5 + 0.1*(1/1,000,000,000) ≈ 0.45. Even though the two interventions look very similar from this perspective, I think it's basically irrelevant - expected value is the relevant thing.
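A quick sketch of both perspectives in this hypothetical (the 10%/90% population split and the two options are from above; the rest is just arithmetic):

```python
# The hypothetical: 10% chance the population is a billion, 90% chance it is two.
# Compare "save one person" vs. "save half the people" by expected number saved
# and by expected fraction saved.
scenarios = [(0.9, 2), (0.1, 1_000_000_000)]   # (probability, population)

ev_save_one  = sum(p * 1 for p, pop in scenarios)          # always saves 1 person
ev_save_half = sum(p * pop / 2 for p, pop in scenarios)    # saves pop/2 people

frac_save_one  = sum(p * 1 / pop for p, pop in scenarios)  # fraction saved = 1/pop
frac_save_half = sum(p * 0.5 for p, pop in scenarios)      # always half

print(f"expected people saved:   one={ev_save_one:.1f}, half={ev_save_half:,.1f}")
print(f"expected fraction saved: one={frac_save_one:.3f}, half={frac_save_half:.1f}")
```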
What do you think? I might well have made a mistake, or misunderstood still.