
This series of posts outlines how I think about catastrophic biological risks. My goal here is to share my worldview in a straightforward and compressed form rather than to persuade a skeptical audience, though I do share some of my reasoning. Part I described how severe a biological catastrophe could be and what that implies for how to prioritize response. Part II (this post) describes the sources of risk and what that implies for prevention efforts.

How should we prioritize prevention efforts like strengthening the Biological Weapons Convention, lab biosafety, or DNA synthesis screening guidelines? That question hinges on how much risk comes from different sources. This post outlines how I think about prevention priorities and the share of risk coming from each source.

TLDR: for catastrophes that kill a substantial fraction of humans on Earth, deliberate misuse is much more likely than well-intentioned accidents. If we first set aside AI as an actor, a first-cut risk breakdown might be something like:

  • 1-10% risk coming from well-intentioned scientists accidentally creating something terrible
  • 5-25% risk coming from state bioweapons programs
  • 65-95% risk coming from non-state bad actors (here we’re assuming these bad actors are humans, likely with substantial help from AI in future)

Adding AIs as potential bad actors in their own right complicates this picture; they are best treated as a separate category, one that might end up dominating all three of the categories above. Unlike the previous post, which could draw on physical first principles, my risk breakdown here relies more on vibes and heuristics and is far more uncertain.

States vs. non-states vs. civilian accidents

Generally speaking, we can break down ‘actors’ that could cause a catastrophic biological event into three categories:

  • Civilian accidents: accidents caused by well-intentioned scientists
  • States: state-backed biological weapons programs. This category can be further divided into intentional deployment vs. accidental/unauthorized deployment
  • Non-states: terrorists and other non-state bad actors

My best guess for the risk breakdown between these three categories is something like:

  • ~5% risk coming from civilian accidents
  • ~15% risk coming from states
  • ~80% risk coming from non-states

I wouldn’t take these numbers too literally.
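Since I keep emphasizing how rough these numbers are, here is a minimal Python sketch (purely illustrative, not a real model) that draws each category's share uniformly from the TLDR ranges above and renormalizes, just to show how the point estimates fall out of wide ranges:

```python
import random

# Purely illustrative: the rough subjective ranges from the TLDR above,
# as (low, high) bounds on each actor category's share of total risk.
RANGES = {
    "civilian accidents": (0.01, 0.10),
    "state programs": (0.05, 0.25),
    "non-state actors": (0.65, 0.95),
}

def sample_breakdown(rng):
    """Draw one share per category, then normalize so they sum to 1."""
    raw = {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

rng = random.Random(0)
samples = [sample_breakdown(rng) for _ in range(10_000)]
for k in RANGES:
    mean = sum(s[k] for s in samples) / len(samples)
    print(f"{k}: ~{mean:.0%} of risk on average")
```

Running this lands close to the ~5/~15/~80 split above; the point is only that those point estimates summarize wide ranges, not that they are precise.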

Civilian accidents

Well-intentioned scientists are unlikely to accidentally create doomsday weapons, since successfully creating one would (I claim) require a substantial degree of malicious intent and fine-tuning.

While some well-intentioned scientists do work on things like gain-of-function research, that work is typically done on pathogens that could not greatly exceed the damage of a natural pandemic, since the aim is to emulate what a natural pathogen might do rather than to optimize many harmful traits at once.

One notable exception is that a number of well-intentioned scientists were, as a long-term goal, aiming to create mirror bacteria. Out of all of the well-intentioned science projects I’ve heard proposed as candidates for potential catastrophes, mirror bacteria is the only one that actually seems like it could cause a catastrophe without any additional malicious intent. I personally believe that mirror bacteria is unique in this respect, and that recent efforts have dramatically reduced the chances that a well-intentioned scientist goes all the way to creating it. Still, mirror bacteria is an existence proof that there could be threats in this category, so I give the ‘well-intentioned accident’ category ~5% risk weight out of humility about other unknowns.

Notably, well-intentioned scientists are likely to create technologies that could enable catastrophic misuse, even if the creations themselves are harmless. For example, designing better viral vectors for gene therapy could yield more general insights into how to make viruses evade the immune system.

State weapon programs

State weapons programs have historically been more interested in non-transmissible weapons that would not cause a global catastrophe, and one might imagine that a country wouldn’t want to deploy a transmissible weapon with ‘blowback’ risk to its own population. However, I don’t find this line of argument especially reassuring, since some countries have in fact pursued biological weapons that were highly transmissible and incurable (e.g. the Soviet program’s development of genetically engineered smallpox).

There is also a big difference between a state weapons program developing something catastrophic and intentionally deploying it. Even if a weapons program were to develop something catastrophic, it is unlikely they would intentionally deploy it; I would put the probability on par with intentional nuclear weapons use. More concerning are accidental releases, since weapons programs historically have had lots of accidents (more than 80% of my risk from state weapons programs comes from unauthorized/accidental release[1]).
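To make that arithmetic explicit (using my loose subjective numbers from above, not precise estimates), a ~15% state share with >80% of it coming from accidents works out to roughly:

```python
state_share = 0.15          # ~15% of total catastrophic biorisk from states (rough guess)
accidental_fraction = 0.80  # >80% of that from unauthorized/accidental release

print(f"state accidental release: ~{state_share * accidental_fraction:.0%} of total risk")
print(f"intentional state deployment: ~{state_share * (1 - accidental_fraction):.0%} of total risk")
```

So on these numbers, accidental state release carries roughly four times the weight of intentional state deployment.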

Bioterrorism

Historically, bioterrorism has been extremely rare and, fortunately, wimpy.[2] My view is that this has mostly been a function of technological limitations. The majority of the risk sits here because there are in fact large numbers of people and groups that would be motivated to cause catastrophe if the technology were widely available, and one can imagine AI dramatically increasing the accessibility of biological weapons in the future.

Implications for prevention priorities

If this risk breakdown by actor is correct, it suggests that we should prioritize interventions that make it more difficult for relatively poorly resourced bad actors to get access to the possible biological weapons of the future. That points to things like DNA synthesis screening, better AIxBio guardrails, and lobbying for governments to spend more resources on intelligence and disruption efforts. It’s a nice coincidence that these interventions targeted at non-state actors might also be among the most tractable, especially compared to efforts targeted at state weapons programs (e.g. setting up verification for the Biological Weapons Convention).

Another implication relates to how to trade off risk between civilian accidents and malicious misuse. Sometimes I’ll hear project proposals along the lines of ‘we should do lab work to explore dangerous thing X and publish it, so that scientists won’t accidentally do X.’ In general, this information hazard tradeoff isn’t worth it, because the well-intentioned research rarely carries severe accident risk in the first place (the one big exception being mirror bacteria).


  1. The strongest counterargument is that intentional deployment of catastrophic biological weapons would likely correlate with intentional deployment of nuclear weapons, and perhaps existential risk is highest in worlds where all of these catastrophes happen at once. ↩︎
  2. Some notable bioterrorism examples include spraying salmonella on a salad bar and using the attenuated vaccine strain of anthrax. ↩︎
