This is a linkpost for https://bit.ly/NukeTechDevs

This is a blog post, not a research report, meaning it was produced relatively quickly and does not meet Rethink Priorities' typical standards of substantiveness and careful checking for accuracy.

Summary   

This post is a shallow exploration of some technological developments that might occur and might increase risks from nuclear weapons - especially existential risk or other risks to the long-term future. This is one of many questions relevant to how much to prioritize nuclear risk relative to other issues, what risks and interventions to prioritize within the nuclear risk area, and how that should change in future. But note that, due to time constraints, this post isn’t comprehensive and was less thoroughly researched and reviewed than we’d like. 

For each potential development, we provide some very quick, rough guesses about how much and in what ways the development would affect the odds and consequences of nuclear conflict (“Importance”), the likelihood of the development in the coming decade or decades (“Likelihood/Closeness”), and how much and in what ways thoughtful altruistic actors could influence whether and how the technology is developed and used (“Steerability”). 

These tentative bottom line beliefs are summarized in the table below:

| Category | Technological Development | Importance | Likelihood / Closeness | Steerability |
|---|---|---|---|---|
| Bomb types and production methods | Radiological weapons | Medium | Medium/High | Medium/Low |
| | Pure fusion bombs | Medium | Medium/Low | Medium |
| | High-altitude electromagnetic pulse (HEMP) | Medium | Medium/Low | Medium/Low |
| | Neutron bombs | Low | Medium/Low | Medium |
| Methods for production and design | Atomically precise manufacturing (APM) | High | Low | Medium |
| | AI-assisted production/design | Medium/High | Medium/Low | Medium |
| | Other developments in methods for production/design | ? | ? | ? |
| Delivery systems | Hypersonic missiles/glide vehicles | Medium/Low | Medium/High | Medium/Low |
| | More accurate nuclear weapons | Medium | Medium | Medium/Low |
| | Long-range conventional strike capabilities | Medium/Low | Medium/High | Medium/Low |
| Detection and defense | Better detection of nuclear warhead platforms, launchers, and/or delivery vehicles | Medium/High | Medium/High | Medium/Low |
| | Missile defense systems | Medium | Medium | Medium/Low |
| AI and cyber | Advances in AI capabilities | Medium/High | Medium/High | Medium |
| | Cyberattack (or defense) capabilities | Medium/High | Medium/High | Medium |
| | Advances in autonomous weapons | Medium | Medium/High | Medium |
| | More integration of AI with NC3 systems | Medium | Medium | Medium |
| Non-nuclear warmaking advances | Anti-satellite weapons (ASAT) | Medium/Low | Medium | Medium/Low |
| | “Space planes” and other (non-ASAT) space capabilities | Medium/Low | Medium | Medium/Low |


Note that:

  • Each “potential technological development” is really more like a somewhat wide area in which a variety of different types and levels of development could occur, which makes the ratings in the above table less meaningful and more ambiguous.
  • “Importance” is here assessed conditional on the development occurring, so will overstate the importance of thinking about or trying to steer unlikely developments (see the rough sketch after this list). 
  • In some cases (e.g., “More accurate nuclear weapons”), the “Importance” score accounts for potential risk-reducing effects as well. 
  • “Likelihood/Closeness” inelegantly collapses together two different things, which makes our ratings of developments on that criterion less meaningful. E.g., one development could be moderately likely to occur quite soon and moderately likely to never occur, while another is very likely to occur in 15-25 years but not before then.
  • Some of the topics this post discusses involve or are adjacent to information hazards (especially attention hazards), as is the case with much other discussion of technological developments that might occur and might increase risks. We ask that readers remain mindful of that when discussing or writing about this topic.
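
As a rough illustration of the point about conditional importance (this is our own informal framing, not a model from the post): for prioritization, the quantity that ultimately matters is closer to an expected value that discounts the conditional “Importance” rating by how likely the development is to occur at all.

```latex
% Illustrative sketch only (our framing); assumes amsmath/amssymb for \mathbb, \text, \underbrace.
% The table's "Importance" is conditional on the development occurring, so the
% prioritization-relevant quantity roughly discounts it by the probability of occurrence:
\[
  \underbrace{\mathbb{E}[\text{impact}]}_{\text{unconditional}}
  \;\approx\;
  P(\text{development occurs}) \times
  \underbrace{\mathbb{E}[\text{impact} \mid \text{development occurs}]}_{\text{``Importance'' as rated in the table}}
\]
```

For example, atomically precise manufacturing is rated “High” importance but “Low” likelihood in the table above, so its importance rating alone overstates how much attention it warrants relative to likelier developments.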

Epistemic status & directions for further research

In 2021, Michael did some initial research for this post and wrote an outline and rough notes. But he pivoted away from nuclear risk research before having time to properly research and draft this. We (Michael and Will) finished a rough version of this post in 2022, since that seemed better than it never being published at all, but then didn’t get around to publishing till 2023. As such, this is just a very incomplete starting point, may contain errors, and may be outdated. It could be quite valuable for someone to spend more time: 

  • learning about other possible technological developments worth paying attention to,
  • doing more thorough and careful research on the developments we discuss,
  • thinking more about their implications for how much to prioritize nuclear risk reduction and what interventions and policies to pursue, and/or
  • talking to and getting feedback from various experts.

See also Research project idea: Technological developments that could increase risks from nuclear weapons.

How to engage with this post

The full post can be found here. It’s ~23,000 words, much of which consists of extensive quotes without added commentary from us. Some quotes are relevant to multiple sections and hence are repeated. But each section or subsection should make sense by itself, so readers should feel free to read only the sections that interest them, to skim, and to skip repeated quotes and “Additional notes” sections. You can navigate to sections using the links in the summary table above.

Scope of this post

This post is focused on what potential technological developments could increase risks from nuclear weapons. As such, this post is not necessarily claiming that these technologies will be net harmful overall, nor that nuclear risk will increase in future overall; both of those claims are plausibly true and plausibly false, and we don’t assess them here. Here are some further notes on what this post is vs isn’t focusing on and claiming:

  • We’re not claiming any of these developments are guaranteed - in fact, we think several are unlikely or would only happen a long time from now.
  • We don’t address things that could increase risk from nuclear weapons but that aren’t technological developments.
  • We mostly focus on possible new technological developments, setting aside proliferation of existing technologies or changes in how those technologies are deployed.
    • However, sometimes these lines are blurry, and we did end up discussing some things that may be more like deployment than development (e.g., integration of AI with NC3).
  • We don’t focus on discussing ways these technological developments might also decrease nuclear risks, whether their net effect might be a decrease in nuclear risk (even if they could also have important risk-increasing effects), or other technological developments that could decrease nuclear risks.
    • In a few places we do touch on those points, but we didn’t set out to do so, and thus there’s far more that could be usefully said than what we’ve said in this post. 
  • We had hoped to discuss what could and should be done to influence whether and how these technological developments occur and are used, and what other implications these potential developments might have for what risks and interventions to prioritize in the nuclear risk space. But ultimately we ran out of time, and hence this post only contains extremely preliminary and patchy discussion of those questions. 

Acknowledgements

Michael’s work on this post was supported by Rethink Priorities (though he ended up pivoting to other topics before having time to get this up to RP's usual standards). Will helped with the research and editing in a personal capacity. We’re grateful to Ben Snodin, Damon Binder, Fin Moorhouse, and Jeffrey Ladish for feedback on an earlier draft. Mistakes are our/Michael’s own.

If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.

Comments



An in-our-view interesting tangential point: It might decently often be the case that a technological development initially increases risk but later increases it by a smaller margin, or even reduces risk overall. 

  • One reason this can happen is that developments may be especially risky in the period before states or other actors have had time to adjust their strategies, doctrine, procedures, etc. in light of the development.
  • Another possible reason is that a technology may be riskiest in the period when it is just useful enough to be deployed but not yet very reliable.
  • Geist and Lohn (2018) suggest this might happen, for the above two reasons, with respect to AI developments and nuclear risk:
    • “Workshop participants agreed that the riskiest periods will occur immediately after AI enables a new capability, such as tracking and targeting or decision support about escalation. During this break-in period, errors and misunderstandings are relatively likely. With time and increased technological progress, those risks would be expected to diminish. If the main enabling capabilities are developed during peacetime, then it may be reasonable to expect progress to continue beyond the point at which they could be initially fielded, allowing time for them to increase in reliability or for their limitations to become well understood. Eventually, the AI system would develop capabilities that, while fallible, would be less error-prone than their human alternatives and therefore be stabilizing in the long term”

Just remembered that Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration was written and published after I initially drafted this, so Will's and my post doesn't draw on or reference it, but it's of course relevant too.

This is a good post. Thank you for sharing. I disagree somewhat with your framework, because I think it is extremely important to differentiate factors that increase the likelihood of armed conflicts between nuclear powers from factors that increase the risk of nuclear escalation given a conventional conflict. I think you've over-focused on the latter, and that drivers of the former are fairly important.

For example, your analysis of UAVs and UUVs doesn't consider a risk I find highly salient: mutual misunderstanding of escalatory strength. That is, if the US shoots down an uncrewed Chinese intelligence balloon over US airspace, the escalatory action was China sending the balloon at all. If the US had shot down a crewed Chinese stealth fighter, reactions would have been very different. This holds even if the capabilities of the fighter and the balloon were identical.

Now, if the sole impact of UAVs is that there's a new step on the escalation ladder, that would probably be slightly beneficial. But if there's a step on the escalation ladder and Chinese and American political leadership disagree on where that step is, the potential for a situation to turn into a shooting conflict that takes lives increases substantially.

A similar point about escalation uncertainty applies to cybersecurity capabilities: militaries across the globe have taken some steps towards defining how they think about cyberattacks. I believe the most explicit statement on the topic comes from the French, but there are also advantages to strategic ambiguity, and genuine uncertainty about how publics in both authoritarian and democratic states would react to a cyberattack that, say, impaired the power grid.

Cybersecurity has the additional problem that, in the view of some experts, it creates incentives towards more provocative action, with a bias favoring attackers under some circumstances.

As always, I do not speak for my employer or the US government.
