How much weight should we give the long-term future, given that nobody may be around to experience it? Both economists and philosophers see extinction risk as a rationale for discounting future costs and benefits. David Thorstad has recently claimed that it poses a major challenge to longtermism. Thorstad’s papers raise important issues and are very much worth reading, but ultimately the argument fails. In a post last week, I raised some doubts about his reasoning on space settlement. Here, I’ll offer a broader critique.

The standard approach in economics is to discount the future at an annual rate to account for the possibility that we might all be dead. The Stern Review on the Economics of Climate Change estimated the probability of extinction at 0.1% per year, which implied barely a 90% chance of surviving a century, roughly a 0.7% chance of surviving 5,000 years, and less than one chance in 22,000 of surviving 10,000 years. If this is right, then longtermism is in trouble. But while the first figure seems reasonably (if depressingly) plausible, the second and the third are intuitively much too low. What has gone wrong?
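
For readers who want to check the arithmetic, here is a minimal sketch (my own, not from the Stern Review) of how a constant 0.1% annual extinction probability compounds over these horizons:

```python
# Compounding a constant 0.1% annual extinction probability (the Stern Review
# figure cited above) over various horizons. Purely a worked check of the
# arithmetic in the paragraph above.
p_annual = 0.001  # assumed constant probability of extinction in any given year

for years in (100, 5_000, 10_000):
    p_survive = (1 - p_annual) ** years
    print(f"{years:>6} years: P(survive) ~ {p_survive:.6f}")

# Approximate output:
#    100 years: P(survive) ~ 0.904792   (barely 90%)
#   5000 years: P(survive) ~ 0.006721   (roughly 0.7%)
#  10000 years: P(survive) ~ 0.000045   (less than 1 in 22,000)
```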

Discounting at a constant rate would be appropriate if we were sure that there was a fixed rate of extinction per year. But for several reasons, we’re not. Risks like asteroid strikes or nuclear war are in an important sense cumulative: the longer the period over which we’re exposed to them, the greater the probability that they’ll materialise. But other risks we appear to face are transition risks, as Owen Cotton-Barratt points out here. Artificial superintelligence may kill us all, but if we make it through the transition, the risk may not persist. It’s true that even if we survive, some level of risk might remain unless a benign form of ASI achieves a permanent advantage over all competitors. Nevertheless, treating ASI as if it were a ‘state risk’ like nuclear war exaggerates the probability that a catastrophe will materialise.
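
To see why the distinction matters, here is a toy comparison (my own illustration; the numbers are placeholders, not estimates from the post) of how survival probability behaves under a recurring ‘state risk’ versus a one-off transition risk:

```python
# Toy model contrasting a recurring 'state risk' with a one-off transition risk.
# All numbers are illustrative placeholders.

def survive_state_risk(annual_hazard: float, years: int) -> float:
    """Survival probability when the same hazard recurs every year."""
    return (1 - annual_hazard) ** years

def survive_transition_risk(transition_hazard: float) -> float:
    """Survival probability when the hazard is faced only once, then disappears."""
    return 1 - transition_hazard

horizon = 10_000
print(survive_state_risk(0.001, horizon))   # ~0.000045: survival decays towards zero
print(survive_transition_risk(0.10))        # 0.9: does not decay, however long the horizon
```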

Second, whereas we know with considerable confidence that asteroids and nukes pose catastrophic threats, some suspected x-risks may not exist at all. It’s not a sure thing that we will ever develop superintelligent AI. Whether ASI is even possible isn’t a ‘state risk’ that we run again and again. Instead, as Talbot Page puts it, ‘there is only one “trial.” Either the catastrophic hypothesis holds or it does not.’ ASI is in this sense a risk involving two disjunctive possibilities: if either (a) it’s impossible or (b) we survive the transition, it won’t kill us. While we shouldn’t allow this sort of reasoning to make us complacent, it does suggest we have a decent chance of surviving AI in the long run.
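
The structure of the argument can be made explicit with some back-of-the-envelope arithmetic (the probabilities below are placeholders of my own, not estimates from the post or from Page):

```python
# Illustrative arithmetic for the disjunctive structure described above.
# Both probabilities are made-up placeholders.
p_asi_possible = 0.8      # assumed credence that superintelligent AI can be built at all
p_fail_transition = 0.3   # assumed probability we fail to survive the transition, given ASI

p_killed_by_asi = p_asi_possible * p_fail_transition
p_survive_asi = 1 - p_killed_by_asi  # survive if (a) ASI is impossible or (b) we get through
print(p_survive_asi)  # 0.76 under these made-up numbers
```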

Third, if a benign ASI ‘singleton’ does materialise, it might neutralise the other existential risks. Thorstad rightly points out that it’s not enough to eliminate some x-risks if others persist, and it’s not enough to suppress them temporarily; the suppression has to be permanent. A benign singleton might do that. Even if the odds are against this, it doesn’t seem wildly unlikely. The same is true of space settlement. There may well be unforeseen developments with the same effect. We could reasonably assign a subjective probability of five or ten percent to the possibility that something will ensure our long-term survival. As Nick Beckstead puts it, ‘[g]iven the great uncertainty involved, including uncertainty about what people will do to prepare for [existential] risks, it would seem overconfident to have a very high probability or a very low probability that humans will survive for [a] full billion years.’ If so, then longtermists don’t need to appeal to tiny probabilities to make their case.

Fourth, even if extinction risk were constant, the annual probability might be much lower than 0.1%. Yew-Kwang Ng suggests that Stern exaggerated it by at least a factor of 10. If we find Stern’s and Ng’s estimates equally credible, we will end up with a long-run discount rate much closer to what Ng’s estimate would call for (for an intuitive explanation, see pp. 152-53 here). That would significantly increase the value of addressing any particular x-risk, as Thorstad himself acknowledges.
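
Here is a minimal sketch (my own, assuming a 50/50 split between Stern’s 0.1% and a Ng-style 0.01% annual rate) of why averaging over the two estimates pulls the effective discount rate towards the lower one at long horizons:

```python
# Effective (certainty-equivalent) annual extinction rate when we put 50/50
# credence on Stern's 0.1% and a Ng-style 0.01% annual rate. The 50/50 split
# and the 0.01% figure are illustrative assumptions.
p_stern, p_ng = 0.001, 0.0001

for years in (100, 1_000, 10_000):
    expected_survival = 0.5 * (1 - p_stern) ** years + 0.5 * (1 - p_ng) ** years
    effective_rate = 1 - expected_survival ** (1 / years)  # constant rate giving the same survival
    print(f"{years:>6} years: effective annual rate ~ {effective_rate:.5%}")

# Approximate output: ~0.054% at 100 years, ~0.045% at 1,000, ~0.017% at 10,000.
# The rate drifts towards Ng's 0.01% because the high-rate scenario's survival
# probability shrinks towards zero and stops contributing to the average.
```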

Finally, for extinction risk to justify discounting the future, the risk really does have to be one of extinction. ASI could cook our goose, but most x-risks aren’t like that. Thermonuclear war might kill most people on the planet, but it probably won’t kill everybody. If it caused the collapse of civilisation, we might survive for a very long time before an asteroid or supervolcano polished us off. Some things we do now, such as emitting greenhouse gases, could make future people’s condition better or worse. Biological weapons could conceivably kill all humans, but even if they did, other species would likely survive, and some of our policies will affect them for thousands of years. That’s sufficient to justify being very concerned about the long-term future, even if we ignore the impact of our actions on human beings.

Comments

A benign singleton might do that. Even if the odds are against this, it doesn’t seem wildly unlikely. The same is true of space settlement. There may well be unforeseen developments with the same effect. We could reasonably assign a subjective probability of five or ten percent to the possibility that something will ensure our long-term survival.

Huh? Why five percent? Why not 0.5%? Why not 50%?

Thanks! Just my subjective judgement. I feel pretty confident that 0.5% would be too low. I'd be more open to the view that 5-10% isn't high enough. If the latter is true, that would strengthen my argument. I'd be interested to hear what other people think.

Thanks for writing this! I find it really striking how academic critics of longtermism (both Thorstad and Schwitzgebel spring to mind here) don't adequately consider model uncertainty. It's something I also tried to flag in my old post on 'X-risk agnosticism'.

Tarsney's epistemic challenge paper is so much better, precisely because he gets into higher-order uncertainty (over possible values of the crucial parameter "r", which includes the risk of extinction that persists in the far future despite our best efforts).

Thanks, Richard! I've just had a look at your post and see you've anticipated a number of the points I made here. I'm interested in the problem of model uncertainty, but most of the treatments of it I've found have been technical, which isn't much help to a maths illiterate like me. Some of the literature on moral uncertainty is relevant, and there’s an interesting treatment in the paper by Toby Ord, Rafaela Hillerbrand and Anders Sandberg here. But I’d be glad to learn of other philosophical treatments if you or others can recommend any.

I've raised related points here, and also here with a follow-up, about how exponential decay at a fixed rate is not a good model for estimating long-term survival probability.
