
Our mission is to protect humanity against biological catastrophes, including those that could lead to human extinction or cause similarly bad outcomes. This series of posts outlines how I think about these most extreme types of risks. My goal here is to share my worldview in a straightforward and compressed form rather than trying to persuade a skeptical audience, although I do share some of my reasoning.

Part I describes my views on the sources of risk and what they imply for how to prioritize responses to a biological catastrophe. Part II describes what those same sources of risk imply for prevention efforts.

TLDR: We can solve the vast majority of existential biological risk by ubiquitously deploying simple and cheap countermeasures like PPE, air filters, UVC, etc. My takes are:

  • >99% of existential risk is from engineered threats, <1% from natural threats
  • >95% of existential risk is from attacks targeting humans, <5% from attacks targeting agriculture or the environment
  • The space of possible human-targeted attacks is basically infinite, so predicting cures or vaccines in advance is hopeless…
  • …but the space of physical pathways into a human body is finite and small, which means…
  • …we can solve potentially >95% of biological existential risk by ubiquitously deploying simple and cheap countermeasures that physically block and protect this finite number of pathways.

Note that there are plenty of catastrophes that wouldn’t qualify as ‘existential’ but would still be horrific by any normal measure. For these non-existential catastrophes, I would put more probability on natural pandemics and a bit more on agricultural threats. For any biological catastrophe that kills more than 100 million people, though, my risk breakdown would actually look fairly similar to the ones I give for existential risk, albeit with less confidence.

Direct vs. indirect existential risk

Some biological catastrophes could directly cause human extinction. Mirror bacteria is one possible example: rather than a pandemic limited to human-to-human transmission, a worst-case scenario might have it spreading and persisting ubiquitously in the soil or dust—causing lethal infections in anyone who goes outside unprotected.

Other biological catastrophes could indirectly cause an existential catastrophe. Examples here might be a virus that kills a large enough fraction of humanity that rebuilding industrial civilization becomes impossible,[1] or scenarios where AI uses biological weapons as part of a takeover attempt.

My current take is that most of the existential biological risk likely flows from ‘indirect’ pathways, although it’s hard to be confident. As I’ve learned more over the years I’ve updated towards believing that biological threats, even in exotic cases, are unlikely to cause direct human extinction. Conversely, some of the indirect scenarios are harder to rule out, especially those involving AI takeover, multipolar AI scenarios, and/or AI bargaining/threats.

Natural vs. engineered

Naturally-arising diseases are extremely unlikely to cause human extinction. Natural pathogens optimize for evolutionary fitness, not for killing everybody or collapsing civilization. The fact that humanity has already existed for hundreds of thousands of years also allows us to derive a firm upper bound on natural extinction risk, suggesting the probability should be something less than 1 in 1 million (e.g. an old paper I wrote on this here).[2] I also think similar arguments apply to non-extinction-level catastrophic risk, although to a lesser extent.
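The shape of this survival-based argument can be sketched with a quick back-of-the-envelope calculation. The numbers below are illustrative round figures, not taken from the cited paper:

```python
# Illustrative survival-based bound on natural extinction risk.
# (Rough assumptions, not figures from the cited paper.)
YEARS_SURVIVED = 200_000  # approximate age of Homo sapiens

def max_annual_risk(survival_prob_floor: float) -> float:
    """Largest constant annual extinction risk p consistent with
    (1 - p) ** YEARS_SURVIVED >= survival_prob_floor."""
    return 1 - survival_prob_floor ** (1 / YEARS_SURVIVED)

# Even granting only a 10% chance that a species like ours survives
# this long, the implied annual risk is bounded at roughly 1 in 87,000:
p = max_annual_risk(0.10)
print(f"annual natural extinction risk bound: ~1 in {1 / p:,.0f}")
```

The point is structural: a long track record of survival mechanically caps how large a constant natural risk could plausibly be, no matter which specific pathogens we imagine.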

I generally assume >99% of biological risks that could threaten human extinction come from technology rather than nature. Mirror bacteria provides one concrete example of the type of extreme risk that could be enabled by advances in technology. For less extreme catastrophes below 100 million deaths, the ratio is not as heavily skewed; for example, naturally occurring flu pandemics could easily kill 1-10 million people (indeed, seasonal flu may kill something like half a million people every year worldwide[3]).

Targeting human bodies vs. agriculture vs. environment

We can classify all biological risk based on what it targets:

  • Attacks that target human bodies
  • Attacks that target agriculture
  • Attacks that target the habitability of Earth

Understanding the relative risks between these categories is critical for prioritization: one could imagine investing in things like respiratory protection and vaccine platforms only to later realize that the most dangerous risks were coming from environmental catastrophe.

Examples of attacks that target human bodies would include anything that directly infects and kills people, e.g. pandemic viruses or mirror bacteria, but could also include more exotic things like a gene drive that causes infertility.[4] The category encompasses anything where the bad thing happens within a human body, such that if a human can avoid physical contact with the pathogen/construct, they are safe (one basis for why we’re bullish on physical defenses).

Examples of attacks that target agriculture could include something like a wheat rust that infected all major crops, or something like mirror bacteria that infected most plants.[5] I don’t think we can entirely rule out agricultural threats, but there are a number of reasons to view them as substantially less concerning. My colleague Adin Richards describes in depth one possible countermeasure out of several in a 132-page report on resilience to agricultural attacks, noting that we can already produce enough food using natural gas to keep hundreds of millions of people alive in an emergency—even in an unrealistic worst-case scenario where all of the crops disappeared instantly.[6] There are also more mundane arguments that reduce the relative risk of these agricultural scenarios (e.g. agricultural pathogens tend to spread slowly, we have a lot of diversity in crops overall, we can engineer new crops to stay alive but we can’t simply engineer new humans).

Examples of attacks that target the habitability of Earth would be anything that made the planet uninhabitable for humans directly, for example something that stopped photosynthesis enough to threaten our oxygen supply. While these outcomes sound terrifying in the abstract, all attacks of this form can be bounded from physical first principles, and running the concrete numbers provides reassuring results. For example, even if all photosynthesis magically stopped tomorrow, we’d still have over 1,000 years[7] worth of oxygen left—to say nothing of the fact that such an attack would be essentially impossible in the first place. Overall I put less than 1% of the risk on this type of attack.
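The arithmetic behind the oxygen claim is easy to check. A minimal sketch, using order-of-magnitude assumptions for atmospheric oxygen mass and global respiration (these are my rough figures, not numbers from the post):

```python
# Rough sanity check of the oxygen-reserve claim. All figures are
# order-of-magnitude assumptions, not precise measurements.
O2_ATMOSPHERE_KG = 1.2e18            # total oxygen in Earth's atmosphere
O2_RESPIRATION_KG_PER_YEAR = 4.5e14  # rough global O2 use by all respiration

# If photosynthesis stopped but consumption stayed unchanged:
years_at_current_use = O2_ATMOSPHERE_KG / O2_RESPIRATION_KG_PER_YEAR
print(f"~{years_at_current_use:,.0f} years of oxygen at current consumption")

# If only humans kept breathing (as footnote 7 suggests), the reserve
# stretches far further: ~8 billion people x ~0.9 kg O2 per day.
o2_humans_kg_per_year = 8e9 * 0.9 * 365
years_humans_only = O2_ATMOSPHERE_KG / o2_humans_kg_per_year
```

Even with all of today’s respiration continuing, the reserve lasts thousands of years; with most of the biosphere’s consumption gone, it lasts orders of magnitude longer.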

I believe these three categories are exhaustive.[8] Any biological threat must target at least one of these three categories to pose an existential threat. Notably there could be threats that target multiple categories, for example mirror bacteria that infects both humans and crops.

Overall my take is that the vast majority of biological risk comes from pathogens that directly infect and kill humans. Specifically I’d say that >95% of the risk comes from attacks that target humans (both direct and indirect risk), with less than 5% coming from attacks that target agriculture, and less than 1% from attacks that target the environment. This dramatically simplifies the problem of biological threats and the strategy for dealing with them: although the category of human-targeted threats still encompasses a near-infinite number of possibilities, all of them must physically enter and infect human bodies, and that is a powerful constraint we can exploit.

The space of human-targeted biological threats is vast and unpredictable

As I discussed in the 4-pillars post, one way of tackling the space of human-targeted biological threats would be to enumerate the types of pathogens that might be threatening (e.g. viruses, bacteria, fungi, etc.), then enumerate the subtypes (e.g. adenoviruses, coronaviruses, paramyxoviruses, etc.), then analyze the degree of risk posed by each subtype, and then develop specialized medical countermeasures in a prioritized way.

I believe this is a fundamentally doomed approach against future deliberate threats. Categorizations here tend to be inherently fragile—for example mirror bacteria would have likely been left off of a list like this. Similarly, even for things on the list, all it takes is one breakthrough or concerted effort to break assumptions around how dangerous it might be or the effectiveness of medical countermeasures.

There are numerous examples of such assumptions being broken that I won’t describe publicly, but here are some weaponization attempts that are already public and commonly cited from the Soviet biological weapons program:

  • Modifying bacteria to cause an autoimmune response, killing the victim with something akin to multiple sclerosis
  • Developing a strain of plague resistant to over 15 kinds of antibiotics
  • Attempting to develop hybrids of smallpox and Ebola

There are simply too many ways that a creative person could go about killing somebody with a biological weapon, and there are plenty of more exotic ways of causing harm as well.[9] Indeed, the space of possible diseases is vast, and even if we were able to solve some large fraction of them in advance, a determined adversary could simply switch to alternative attacks.[10]

In other words, aside from some very rare exceptions (e.g. mirror bacteria), we think that attempting to predict and counter specific biological threats in a piecemeal way is a bad strategy. Even things like broad-spectrum antivirals or pan-coronavirus vaccines might be hopelessly narrow or weak against a determined adversary.

Can we affordably block ~all human-targeted biological risk? (and thus ~all catastrophic bio risk?)

For biological threats that infect humans, we can think about two modes of transmission:

  1. Human-to-human transmission (e.g. COVID, smallpox)
  2. Environment-to-human transmission (e.g. mirror bacteria, zoonotic diseases, anthrax spores)

Causing outright human extinction might require a substantial amount of environment-to-human transmission—otherwise people can rely on basic social distancing for survival. However, attacks based on human-to-human transmission are likely much easier to conduct and could still be exceptionally destructive (especially if they were able to spread without people noticing and/or were part of a more general AI takeover attempt). Ultimately we need to solve both types of risk.

For either type of transmission, we can enumerate every pathway by which a pathogen could enter a human body, e.g. ingestion, inhalation, skin breaks, etc, and analyze what it would take to ensure a pathogen couldn’t physically enter. This enumeration simplifies the problem—turning a near-infinite set of possible biological risks into a small list of physically determined entryways that we can physically defend.
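The enumeration described above can be sketched as a small lookup table. The specific pathway/defense pairings below are illustrative examples of my own, not a complete analysis:

```python
# Illustrative mapping from physical entry pathways into the human body
# to physical defenses that block them (examples only, not exhaustive).
ENTRY_PATHWAYS = {
    "inhalation": "respirators (PAPRs, elastomerics), air filtration, UVC",
    "ingestion": "sterilized or sealed food and water",
    "skin_breaks": "gloves, wound coverings, surface disinfection",
    "mucous_membranes": "eye protection, face coverage",
}

def is_blocked(pathway: str, defended_pathways: set) -> bool:
    """A threat is physically blocked if its entry pathway is defended."""
    return pathway in defended_pathways

# Any pathogen, however novel, must use at least one of these pathways,
# so defending the whole (small) set covers the (vast) threat space.
fully_protected = all(is_blocked(p, set(ENTRY_PATHWAYS)) for p in ENTRY_PATHWAYS)
```

The point of the sketch is that the keys of the table are fixed by human physiology, so the defensive burden scales with the number of pathways, not with the number of possible pathogens.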

After doing this analysis, we believe the vast majority of catastrophic risk relies on respiratory transmission because this transmission pathway is the hardest to block. There are good first-principles reasons for this—it’s relatively easy to sterilize or protect alternative pathways like water, food, surfaces, and skin. It also matches what we see empirically: respiratory diseases still cause pandemics in wealthy countries, whereas public health measures have eliminated or drastically reduced other transmission pathways like water-borne disease.

Given this, there is basically a simple two-step strategy that ought to work against the majority of biological threats that target humans:

  1. Detect the threat
    1. Projects like SecureBio’s CASPER use metagenomic sequencing to detect even unknown threats, and these efforts should be scaled further.
  2. Block the transmission
    1. For human-to-human threats, we think a PAPR or well-fitted elastomeric respirator for each person we’re trying to protect would dramatically cut off this pathway (and plausibly other types of improvised PPE or regular disposable N95s could be sufficient as well).
    2. For environment-to-human threats, things are much harder (people need to eventually take their masks off to eat and sleep), but some combination of air filters, fans, bleach, glycols, and other cheap materials is likely to be effective. We have an internal project focused on this and are likely to spin that out into a new organization soon.

Implicitly there is a third step in the plan, in which the emergency situation of people wearing PPE would give way to more permanent solutions like specific medical countermeasures and vaccines, more permanent buildings that have transmission suppression technologies, etc., allowing society to return to normal.[11] Although this is a critically important step, we’ve deprioritized it compared to the first two steps since they are non-negotiable prerequisites that need to be done in advance, whereas during a crisis we will have more resources and information that could be focused on developing countermeasures against the specific threat that is unfolding. I write about this more in the 4-pillars blog post.

A surprisingly hopeful conclusion

Back in 2020 Carl Shulman outlined a similar ‘win condition’ against biological threats that has driven a lot of my thinking since then. The basic idea was that after AGI had created industrial abundance, we would be able to use that incredible wealth and technology to quickly create the equivalent of BSL-4 cities if needed, so that humanity could be resilient even to mirror-bacteria style threats. Although this meant that there was a promising ‘endpoint’ to shoot for, it didn’t seem plausible that humanity could afford the sort of universal physical defenses that would be required until after an AGI-enabled industrial explosion.

The most positive update I’ve had over the past 18 months is that we might not need to wait for this industrial takeoff to foreclose a lot of this risk.[12] Previously I assumed that survival might require sci-fi levels of protection against things like mirror bacteria, akin to spacesuits and airlocks. After running more numbers, we now think that existing equipment like PAPRs, elastomeric respirators, and good air filtration seems sufficient to protect an individual or household, even against something as bad as mirror bacteria. The question now is whether we can protect enough individuals to protect society as a whole, and it seems like scaling these interventions to protect tens of millions of people and essential workers might be feasible even with today’s philanthropic budgets.


  1. I don’t put a huge amount of weight on scenarios where civilization fails to recover, for reasons similar to those that Luisa outlines here ↩︎
  2. Note that in the case of disease, this picture is complicated by modern population density, interconnectedness, and other factors that could increase the emergence and spread of pathogens (an argument made by David Manheim here and Seth Baum here). Many endemic diseases are in fact surprisingly modern given the requirement of large population densities (nice book about that here). Still even with these factors, claiming that there is more than a 0.01% chance of human extinction due to natural pandemics this century would require modern conditions to raise this risk by over 100-fold over historical baseline, which seems highly unlikely. Germ theory, large populations, basic sanitation, and medical countermeasures have likely reduced this risk far more than the disadvantages of interconnectedness have increased it. ↩︎
  3. Estimate from CDC/WHO ↩︎
  4. I picked one of the silliest examples. In general I’m not super worried about infertility attacks, or gene drives operating within humans (given how slow the generation time is). ↩︎
  5. An attack that turned crops poisonous would be considered a human-targeted attack if the threat was people dying directly, as opposed to generally causing food shortages. In practice I expect this would be detected pretty quickly and not be the type of thing that would directly kill a large fraction of the human population. ↩︎
  6. Attacks that target both humans and agriculture are substantially more scary, since this argument relies on productive people that are staying alive, building natural gas fermenters, distributing food, etc. What this suggests though is that the main thing to protect is the human lives, and as long as we can keep the humans safe, we should be able to get people fed. ↩︎
  7. We would actually have millions of years left, since all the animals that rely on photosynthesis-based food chains would die off, dramatically reducing oxygen consumption. ↩︎
  8. A fourth category for strict completeness would be anti-material attacks, like some biological organism that ate plastic. This would only be an existential threat if it turned out that we needed plastic (or whatever else the targeted material was) for a bright and flourishing future, and it was somehow difficult to sterilize or protect that material, or produce alternative materials that were less vulnerable. ↩︎
  9. One could imagine trying to cause infertility or hallucinations with a pathogen instead of outright death, although I’m not actually concerned about these more exotic types of attacks. ↩︎
  10. This is why I’m most bullish on medical countermeasure strategies that rely more on being able to rapidly create new countermeasures in response to a threat, rather than attempting to anticipate a threat ahead of time. ↩︎
  11. Or at least hopefully close to normal—in something like a mirror bacteria situation a lot of the Earth’s ecosystems would be irrecoverably damaged. ↩︎
  12. An intelligence explosion will likely enable the design and deployment of biological weapons before it enables widespread industrial automation. This means there is a window of vulnerability during a takeoff where we may have the means to kill ourselves without the widespread physical protection needed to protect ourselves, so the fact that we could potentially get defenses up before the early stages of an intelligence explosion is good news. ↩︎
