Recently, I have heard an increasing number of prominent voices mention the risk of AI helping to create catastrophic bioweapons. This has been mentioned in the context of AI safety, not in the context of biosecurity. So it seems, anecdotally, that people see a significant portion of AI risk as the risk of an AI somehow being instrumental in causing a civilization-threatening pandemic. That said, I have failed to find even a cursory exploration of how much of AI risk is this risk of AI being instrumental in creating catastrophic bioweapons. Does anyone know of any attempts to quantify the "overlap" between AI and bio? Or could someone please try to do so?
One reason to quantify this overlap: if bio+AI is used as a primary, or at least prominent, example to the public, it seems useful to have some analysis underpinning such statements. Even showing that bio+AI is actually just a small portion of AI risk might be helpful, so that whoever uses this example can also mention that it is just one of many ways AI could end up contributing to harming us. Or, if it is indeed a large portion of AI risk, this could be stated with a bit more clarity.
Other reasons to have such an analysis might be:
- Assisting grantmakers in allocating funds. For example, if there is a large overlap, grantmakers currently investing in AI safety might also want to fund biosecurity interventions likely to help in an "AI-assisted pandemic"
- Helping talent decide which problem to work on. It might, for example, be that policy experts worried about AI safety also want to focus on legislation and policy around biosecurity
- Perhaps fostering more cooperation between AI safety and biosecurity professionals
- Perhaps a quantification here could help both AI safety experts and biosecurity professionals know what types of scenarios to prepare for. For example, it could put more emphasis in AI safety work on preventing AI from becoming too capable in biology (e.g. by removing such training material).
- Probably other reasons I have not had time to think about
Relatedly, and I would be very careful in drawing conclusions from this, I just went through the Metaculus predictions for the Ragnarök question series and found that these add up to 132%. Perhaps this indicates overlaps between the categories, or perhaps it is just an effect of different forecasters answering the different questions (there seems to be large variation in how many people have forecasted on each question).

Let us assume for the sake of argument that the "extra" 32% very roughly represents overlap between the different categories. Then, with very little understanding of the topic, I might guess that perhaps half of the 27% biorisk would also resolve as an AI-caused catastrophe, so roughly 15%. Those 15 percentage points would count towards both the bio risk and the AI risk, which would reduce the 32% excess (132% - 100%) to about 32% - 15% ≈ 17%. Perhaps the remaining ~17% is overlap between AI and nuclear, and possibly other categories. However, this would mean almost half the AI risk is biorisk. This seems suspiciously high, but at least it could explain why so many prominent voices use the example of AI + bio when talking about how AI can go wrong. Moreover, if all of the extra 32% is indeed overlap with AI, it means there is almost no "pure" AI catastrophe risk, which also seems suspicious. Anyway, these are the only numbers I have come across that at least point towards some kind of overlap between AI and bio.
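For concreteness, here is that back-of-envelope calculation as a small Python sketch. The 132% and 27% figures are the ones quoted above; the AI_RISK value is a placeholder I made up purely for illustration (the actual Ragnarök AI forecast should be substituted), and the 50% bio/AI overlap is just my guess from the previous paragraph.

```python
# Back-of-envelope overlap sketch using the figures quoted above.
# AI_RISK is a made-up placeholder, not a Metaculus number.

TOTAL_FORECASTS = 1.32  # sum of all Ragnarök category forecasts (from the post)
BIO_RISK = 0.27         # bio catastrophe forecast (from the post)
AI_RISK = 0.30          # placeholder for the AI catastrophe forecast

excess = TOTAL_FORECASTS - 1.00   # 0.32, read (very loosely) as total overlap
bio_ai_overlap = 0.15             # guess: about half of BIO_RISK (0.5 * 0.27 = 0.135)
remaining_overlap = excess - bio_ai_overlap  # ~0.17 left for AI+nuclear and others
bio_share_of_ai = bio_ai_overlap / AI_RISK   # ~0.5 with the placeholder AI forecast

print(f"assumed bio/AI overlap: {bio_ai_overlap:.0%}")
print(f"remaining excess:       {remaining_overlap:.0%}")
print(f"bio share of AI risk:   {bio_share_of_ai:.0%}")
```

The point is only to make the double-counting assumption explicit, not to claim these numbers mean much.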
Thanks for any pointers or thoughts on this!
I strongly agree with this, contra titotal. To explain why, I'll note that there are several disjunctive ways this risk plays out.
First, near-human AGI systems or narrow AI could be misused by sophisticated actors to enhance their ability to create bioweapons. This might increase that risk significantly, but there are few such actors, and lots of security safeguards. Bio is hard, and near-human-level AI isn't a magic bullet for making it easy. Narrow AI that accelerates the ability to create bioweapons also accelerates a lot of defensive technologies, and it seems very, very implausible that something an order of magnitude worse than natural diseases would be found. That's not low risk, but it's not anything like half the total risk.
Second, misuse or misalignment of human-level AI systems creating Bostromian speed superintelligences or collective superintelligences creates huge risks, but these aren't specific to biological catastrophes, and the biological routes don't seem dominant; humanity is vulnerable in so many ways that patching one route seems irrelevant. And third, this is true to a far greater extent for misaligned ASI.
In the near term, misuse via bio doesn't pose existential risks, because synthetic bio is fundamentally harder than people seem to assume. Making a bioweapon is very hard, making one significantly worse than what previous natural diseases and bioweapons were capable of is even harder, and the critical path isn't addressed by most of the capabilities that the narrow AI I expect to be possible before AGI could plausibly provide.
After that, I think the risk from powerful systems is disjunctive, and any of a large number of different things could allow a malign act...