Hi Ryan,
Thanks for the comment!
Regarding “extinction”:
Just to be clear, the primary outcome we looked at (after considering various definitions and getting agreement from some key ‘concerned’ people) was “existential catastrophe,” defined as either extinction or “unrecoverable collapse,” with the latter defined as “(a) a global GDP of less than $1 trillion annually in 2022 dollars for at least a million years (continuously), beginning before 2100; or (b) a human population remaining below 1 million for at least a million years (continuously), beginning before 2100.”
However, we also sanity-checked our findings (see p. 14) by asking about the probability that more than 60% of humans would die within a 5-year period before 2100. The median concerned participant forecasted 32%, and the median skeptic forecasted 1%. So skeptics considered this outcome much more likely than existential catastrophe (median of 1% vs. 0.12%), but a very large gap between the groups remained. Nor did focusing on this alternative outcome appear to make a major difference to crux rankings in the small amount of data we collected on it. So, for the most part, we focus on the “existential catastrophe” outcome and expect that most of the key points in the debate would still hold for somewhat less extreme outcomes (with the exception of the debate about how difficult it is to kill literally everyone, though that point remains relevant to those who do argue for high probabilities of literal extinction).
We also had a section of the report ("Survey on long-term AI outcomes") where we asked both groups to consider other severe negative outcomes such as major decreases in human well-being (median <4/10 on an "Average Life Evaluation" scale) and 50% population declines.
Do you have alternative “extremely bad” outcomes that you wish had been considered more?
Regarding “displacement” (footnote 10 on p. 6 for full definition):
We added this question in part because some participants and early readers wanted to explore debates about “AI takeover,” since some say that is the key negative outcome they are worried about rather than large-scale death or civilizational collapse. However, we found this difficult to operationalize and agree that our question is highly imperfect; we welcome better proposals. In particular, as you note, our operationalization allows for positive ‘displacement’ outcomes where humans choose to defer to AI advisors and is ambiguous in the ‘AI merges with humans’ case.
Your articulations of extremely advanced AI capabilities and energy use seem useful to ask about also, but do not directly get at the “takeover” question as we understood it.
Nevertheless, our existing ‘displacement’ question at least points to some major difference in world models between the groups, which is interesting even if the net welfare effect of the outcome is difficult to pin down. A median year for ‘displacement’ (as currently defined) of 2045 for the concerned group vs. 2450 for the skeptics is a big gap that illustrates major differences in how the groups expect the future to play out. This helped to inspire the elaboration on skeptics’ views on AI risk in the “What long-term outcomes from AI do skeptics expect?” section.
Finally, I want to acknowledge that one of the top questions we wish we had asked relates to superintelligent-like AI capabilities. We hope to dig more into this in follow-up studies and will consider the definitions you offered.
Thanks again for taking the time to consider this and propose operationalizations that would be useful to you!
(Below written by Peter in collaboration with Josh.)
It sounds like I have a somewhat different view of Knightian uncertainty, which is fine—I’m not sure that it substantially affects what we’re trying to accomplish. I’ll simply say that, to the extent that Knight saw uncertainty as signifying the absence of “statistics of past experience,” nuclear war strikes me as pretty close to a definitional example. I think we make the forecasting challenge easier by breaking the problem into pieces, moving us from uncertainty closer to risk. That’s one reason I wanted to add conventional conflict between NATO and Russia as an explicit condition: NATO has a long history of confronting Russia and has, by and large, managed to avoid direct combat.
By contrast, the extremely limited history of nuclear war does not enable us to validate any particular model of the risk. I fear that the assumptions behind the models you cite may not work out well in practice and would like to see how they perform in a variety of as-similar-as-possible real world forecasts. That said, I am open to these being useful ways to model the risk. Are you aware of attempts to validate these types of methods as applied to forecasting rare events?
On the ignorance prior:
I agree that not all complex, debatable issues imply probabilities close to 50-50. However, your forecast will be sensitive to how you define the universe of "possible outcomes" that you see as roughly equally likely from an ignorance prior. Why not define the possible outcomes as: one-off accident, containment on one battlefield in Ukraine, containment in one region in Ukraine, containment in Ukraine, containment in Ukraine and immediately surrounding countries, etc.? Defining the ignorance prior universe in this way could stack the deck in favor of containment and lead to a very low probability of large-scale nuclear war. How can we adjudicate what a naive, unbiased description of the universe of outcomes would be?
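To illustrate the partition-sensitivity point numerically (a toy sketch; the outcome labels follow the list above, but the framing is only an illustration): a uniform ignorance prior assigns 1/n to each outcome in the partition, so the prior probability of large-scale nuclear war depends entirely on how finely the “contained” outcomes are carved up.

```python
from fractions import Fraction

def ignorance_prior(outcomes):
    """Uniform ignorance prior: each outcome in the partition gets 1/n."""
    n = len(outcomes)
    return {o: Fraction(1, n) for o in outcomes}

# Coarse partition: escalation vs. containment.
coarse = ignorance_prior(["large-scale nuclear war", "contained"])

# Finer partition that splits containment into several sub-outcomes.
fine = ignorance_prior([
    "one-off accident",
    "contained to one battlefield in Ukraine",
    "contained to one region of Ukraine",
    "contained to Ukraine",
    "contained to Ukraine and surrounding countries",
    "large-scale nuclear war",
])

print(coarse["large-scale nuclear war"])  # 1/2
print(fine["large-scale nuclear war"])    # 1/6
```

Splitting containment further drives the prior on large-scale war toward zero, which is exactly the deck-stacking worry: the “naive” prior is an artifact of how the outcome space is described.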
As I noted, my view of the landscape is different: it seems to me that there is a strong chance of uncontrollable escalation if there is direct nuclear war between Russia and NATO. I agree that neither side wants to fight a nuclear war—if they did, we’d have had one already!—but neither side wants its weapons destroyed on the ground either. That creates a strong incentive to launch first, especially if one believes the other side is preparing to attack.

In fact, even absent that condition, launching first is rational if you believe it is possible to “win” a nuclear war, in which case you want to pursue a damage-limitation strategy. If you believe there is a meaningful difference between 50 million dead and 100 million dead, then it makes sense to reduce casualties by (a) taking out as many of the enemy’s weapons as possible; (b) employing missile defenses to reduce the impact of whatever retaliatory strike the enemy manages; and (c) building up civil defenses (fallout shelters etc.) so that more people survive whatever warheads get through (a) and (b).

In a sense “the logic of nuclear war” is oxymoronic, because a prisoner’s dilemma-type dynamic governs the situation: even though cooperation (no war) is the best outcome, both sides are driven to defect (war). By taking actions that seem to be in our self-interest, we ensure what we might euphemistically call a suboptimal outcome. When I talk about “strategic stability,” I am referring to a dynamic where the incentives to launch first or to launch on warning have been reduced, such that choosing cooperation makes more sense. New START (and START before it) attempts to boost strategic stability by establishing nuclear parity (at least with respect to strategic weapons). But its influence has been undercut by other developments that are de-stabilizing.
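The prisoner’s-dilemma dynamic described above can be made concrete with a toy payoff matrix (the numbers are illustrative assumptions, not from the original discussion): defecting (launching first) is each side’s best response to whatever the other side does, even though mutual cooperation beats mutual defection.

```python
# Toy prisoner's-dilemma payoffs; the specific numbers are made up for
# illustration. Each side chooses "cooperate" (don't launch) or
# "defect" (launch first). Payoffs are (row player, column player);
# higher is better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # no war: best joint outcome
    ("cooperate", "defect"):    (0, 4),  # absorb a first strike
    ("defect",    "cooperate"): (4, 0),  # successful damage limitation
    ("defect",    "defect"):    (1, 1),  # mutual war: bad for both
}

def best_response(opponent_move):
    """Row player's payoff-maximizing move given the opponent's move."""
    return max(["cooperate", "defect"],
               key=lambda m: payoffs[(m, opponent_move)][0])

# Defect dominates: it is the best response to either opponent move...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...even though (cooperate, cooperate) beats (defect, defect) for both sides.
```

This is why, under these payoffs, both sides end up at the mutually worse (defect, defect) outcome; “strategic stability” measures can be read as attempts to change the payoffs so that cooperation becomes the best response.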
Thank you again for the thoughtful comments, and I’m happy to engage further if that would be clarifying or helpful to future forecasting efforts.
Thanks for the reply and the thoughtful analysis, Misha and Nuño, and please accept our apologies for the delayed response. The below was written by Peter in collaboration with Josh.
First, regarding the Rodriguez estimate: I take your point about using the geometric mean rather than the arithmetic mean, and that would move my estimate of the probability of nuclear war down a bit — thanks for pointing that out. To be honest, I had not dug into the details of the Rodriguez estimate; I was simply attempting to remove your downward adjustment for "new de-escalation methods," since I was not convinced by that point. To give a better independent estimate, I'd need to dig into the original analysis and do some further thinking of my own. I'm curious: how much of an adjustment were you making based on the "new de-escalation methods" point?
Regarding some of the other points:
Peter says: No, I live in Washington, DC, a few blocks from the White House, and I’m not suggesting evacuation at the moment because I think conventional conflict would precede nuclear conflict. But if we start trading bullets with Russian forces, the odds of nuclear weapons use go up sharply. And, yes, I do believe risk is higher in Europe than in the United States. But for the moment, I’d happily attend a conference in London.
Thanks, Ryan, this is great. These are the kinds of details we are hoping for in order to inform future operationalizations of “AI takeover” and “existential catastrophe” questions.
For context: We initially wanted to keep our definition of “existential catastrophe” closer to Ord’s definition, but after a few interviews with experts and back-and-forths we struggled to get satisfying resolution criteria for the “unrecoverable dystopia” and (especially) “destruction of humanity’s longterm potential” aspects of the definition. Our ‘concerned’ advisors thought the “extinction” and “unrecoverable collapse” parts would cover enough of the relevant issues and, as we saw in the forecasts we’ve been discussing, it seems like it captured a lot of the risk for the ‘concerned’ participants in this sample. But, we’d like to figure out better operationalizations of “AI takeover” or related “existential catastrophes” for future projects, and this is helpful on that front.
Broadly, it seems like the key aspect to carefully operationalize here is “AI control of resources and power.” Your suggestion here seems like it’s going in a helpful direction:
We’ll keep reflecting on this, and may reach out to you when we write “takeover”-related questions for our future projects and get into the more detailed resolution criteria phase.
Thanks for taking the time to offer your detailed thoughts on the outcomes you’d most like to see forecasted.