In October 2018, I developed a question series on Metaculus related to extinction events, spanning risks from nuclear war, bio-risk, climate change and geo-engineering, Artificial Intelligence, and nanotechnology failure modes. Since then, these questions have accrued over 3,000 predictions (ETA: as of today, the number is around 5,000).
A catastrophe is defined as an event that reduces the human population by at least 10% in any period of 5 years or less. (Near) extinction is defined as an event that reduces the human population by at least 10% within 5 years, and by at least 95% within 25 years.
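To make the definitions concrete, here's a minimal sketch in Python of how the two criteria could be checked against a yearly world-population series. The function names, the annual resolution, and the example trajectory are my own illustrative assumptions, not part of the question definitions.

```python
def is_catastrophe(pop, start):
    """True if the population falls by >= 10% within 5 years of `start`.

    `pop` maps year -> population; annual resolution is an assumption
    made for illustration, not part of the question definitions.
    """
    base = pop[start]
    return any(pop.get(start + k, base) <= 0.90 * base for k in range(1, 6))

def is_near_extinction(pop, start):
    """True if the population falls by >= 10% within 5 years of `start`
    AND by >= 95% within 25 years of `start`."""
    base = pop[start]
    crash_95 = any(pop.get(start + k, base) <= 0.05 * base for k in range(1, 26))
    return is_catastrophe(pop, start) and crash_95

# Hypothetical trajectory: a collapse of ~15%/year beginning after 2039.
pop = {2039 + k: 8e9 * 0.85**k for k in range(27)}
print(is_catastrophe(pop, 2039), is_near_extinction(pop, 2039))  # True True
```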
Here's a summary of the results as they stand today (September 24, 2023), ordered by risk of near extinction:
| Global catastrophic risk | Chance of catastrophe by 2100 | Chance of (near) extinction by 2100 |
|---|---|---|
| Artificial Intelligence | 6.16% | 3.39% |
| Other risks | 1.52% | 0.13% |
| Biotechnology or bioengineered pathogens | 1.52% | 0.07% |
| Nuclear war | 2.86% | 0.06% |
| Nanotechnology | 0.02% | 0.01% |
| Climate change or geo-engineering | 0.00% | 0.00% |
| Natural pandemics | 0.62% | N/A |
These predictions are generated by aggregating forecasters' individual predictions based on their track records. Specifically, the predictions are weighted by a function of each forecaster's level of 'skill', where 'skill' is estimated from relative performance across a large number (typically many hundreds) of resolved forecasts.
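The precise weighting function isn't reproduced here, but the general shape of such an aggregation is easy to sketch. Below is a minimal illustration assuming a softmax over track-record scores; Metaculus's actual skill estimation and weighting differ in the details.

```python
import numpy as np

def skill_weighted_pool(probs, skill, temperature=1.0):
    """Aggregate binary-event forecasts, weighting by track record.

    probs: each forecaster's probability for the event.
    skill: each forecaster's score on past resolved questions (higher = better).
    The softmax weighting and the simple weighted mean are illustrative
    assumptions; the real aggregation is more involved.
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.exp(np.asarray(skill, dtype=float) / temperature)
    weights /= weights.sum()
    return float(weights @ probs)

# Three hypothetical forecasters; the strongest track record dominates.
print(skill_weighted_pool([0.02, 0.05, 0.10], skill=[1.5, 0.2, -0.3]))  # ~0.035
```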
If we assume that these events are independent, the predictions suggest a ~17% chance of catastrophe, and a ~1.9% chance of (near) extinction by the end of the century. Admittedly, independence is likely to be an inappropriate assumption, since, for example, some catastrophes could exacerbate other global catastrophic risks.[1]
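For clarity, the independence calculation is just one minus the product of the per-risk "survival" probabilities:

$$P(\text{at least one event by 2100}) = 1 - \prod_i \left(1 - p_i\right)$$

where $p_i$ is the aggregated probability for risk $i$.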
Interestingly, the predictions indicate that although nuclear war and bioengineered pathogens are also among the most likely causes of a major catastrophe, an AI failure mode is by far the biggest source of extinction-level risk: on the current numbers, it is more than ten times as likely to cause near extinction as all other risks combined (3.39% vs. 0.27%).
Links to all the questions on which these predictions are based may be found here.
For reference, these were the estimates when I first posted this (19 Jun 2022):
| Global catastrophic risk | Chance of catastrophe by 2100 | Chance of (near) extinction by 2100 |
|---|---|---|
| Artificial Intelligence | 3.06% | 1.56% |
| Other risks | 1.36% | 0.11% |
| Biotechnology or bioengineered pathogens | 2.21% | 0.07% |
| Nuclear war | 1.87% | 0.06% |
| Nanotechnology | 0.17% | 0.06% |
| Climate change or geo-engineering | 0.51% | 0.01% |
| Natural pandemics | 0.51% | N/A |
Late to the thread, but one further thing I'd note is that it's entirely possible for multiple different global catastrophe scenarios to occur by 2100. E.g., a global catastrophe in 2030 due to nuclear conflict and another in 2060 due to bioengineering. From a skim, I think the relevant Metaculus questions are about "by 2100" rather than "the first global catastrophe by 2100", so they're not mutually exclusive.
So if the individual questions summed to 14% and the total question also stood at 14% (which Christian's answer suggests isn't the case, though I haven't checked), that wouldn't necessarily mean a ~0% chance of catastrophe from something else. At most it would be weak evidence of that; conversely, if the total question's forecast were twice as high as the sum of the individual questions, that would be evidence in favour of some other catastrophe being likely.
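To make this concrete, here's a small Monte Carlo sketch (my own illustration with made-up per-risk probabilities, not the actual Metaculus numbers). It shows that the chance of at least one catastrophe sits a bit below the sum of the individual probabilities when the risks are independent, and lower still when they're positively correlated, e.g. via a shared "fragile century" factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
p = np.array([0.06, 0.03, 0.015])  # hypothetical per-risk chances by 2100

# Independent risks: each catastrophe occurs on its own coin flip.
indep = rng.random((n, 3)) < p
print("sum of marginals:   ", p.sum())                   # 0.105
print("P(>=1), independent:", indep.any(axis=1).mean())  # ~0.102

# Positively correlated risks: with prob 0.1 the century is "fragile"
# and every risk is 3x as likely; probabilities are rescaled so each
# risk's marginal chance stays the same as in the independent case.
fragile = rng.random(n) < 0.1
p_hi = 3 * p
p_lo = (p - 0.1 * p_hi) / 0.9
p_eff = np.where(fragile[:, None], p_hi, p_lo)
corr = rng.random((n, 3)) < p_eff
print("P(>=1), correlated: ", corr.any(axis=1).mean())   # ~0.100
```

The gaps are small in this toy example, but the direction matters when interpreting a "total" question whose forecast differs from the sum of the individual questions.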