In July 2024, a flawed software update from CrowdStrike cascaded through interconnected systems worldwide, grounding flights, disrupting hospitals, and paralysing financial institutions. At the time I thought the incident offered a glimpse of what AI-related crises might look like: rapid, cross-border, and deeply entangled with critical infrastructure. Yet as policymakers and researchers continue to debate AI governance frameworks, I can't help but feel that the overwhelming focus remains on prevention—building guardrails, establishing red lines, and designing evaluation protocols to ensure dangerous capabilities never emerge. This is, undoubtedly, important work. But what seems to be lacking from too many AI governance discussions is a serious plan for when prevention fails.
Failing to plan for failure would constitute a serious vulnerability in our collective approach to AI risk. The history of complex technological systems suggests that prevention, however robust, is never sufficient. Nuclear power plants have safety protocols measured in redundancies, yet Chernobyl and Fukushima still occurred. Financial markets operate under extensive regulatory oversight, yet the 2008 crisis tore through supposedly fire-walled institutions. The question is not whether AI incidents will occur; early evidence suggests they already are occurring, from coding assistants deleting databases without instruction to chatbots allegedly contributing to instances of self-harm - and nothing about the collective trajectory of the advanced AI labs, or their capacity for self-restraint (think Grok's latest image-generation debacle), suggests we will avoid major safety failures. The question will soon become whether we possess the institutional capacity to respond effectively when these failures do occur.
The Prevention-Response Asymmetry
In 'The Precipice', Ord argues that there is an asymmetry between our efforts to prevent existential risks and their potential impact.[1] He's right to highlight the need to do more to prevent catastrophic outcomes, but in current AI governance efforts the implementation of this sentiment seems focused on stopping incidents from occurring in the first place. Substantial intellectual and institutional resources flow toward anticipating risks and establishing pre-deployment safeguards (from compute limits to pre-deployment red-teaming), while comparatively little attention is paid to post-incident response mechanisms. The EU AI Act, NIST's AI Risk Management Framework, and the Hiroshima AI Process all represent important steps toward prevention, but none provides a coherent framework for coordinating responses to AI incidents that cross national borders or affect multiple sectors simultaneously.
Preventing a high-impact incident in the first place is certainly the optimum outcome...but events seem to be moving ever faster. Even as AI chiefs highlight the ever-looming risks AI may pose, they continue to produce increasingly advanced models. While I hope we can chart a near-perfect, incident-free course to AGI, I'm increasingly sceptical that this is possible.
Given this, I find myself particularly compelled by the position of Jess Whittlestone and Tommy Shaffer of the Centre for Long Term Resilience, which accepts that 'AI models could be widely proliferated within a few years' and that we should be 'undertaking a wide range of measures to anticipate, prevent, prepare and respond to this scenario.'[2] Without this more balanced, comprehensive approach, incidents (when they inevitably do occur) could cause avoidable catastrophic harms.
Yet once one accepts the need to better prepare our response to future AI incidents, the practical challenge that takes shape is not merely technical but fundamentally institutional: who decides that an AI incident has become an international emergency? Who speaks to the public when false messages flood their feeds? Who keeps channels open between governments if normal communication lines are compromised?
What Emergency Response Coordination Teaches Us
I've spent the past five years working in emergency humanitarian response, a field which has, albeit in very different contexts, grappled with this question of how to pre-position systems able to operate under pressure and epistemic uncertainty. Humanitarian coordination mechanisms, pandemic preparedness frameworks, and disaster response protocols all offer instructive lessons for how to prepare for, and reduce the impact of, a high-impact event ahead of time. Key features of these systems include pre-established legal frameworks and oversight, multilateral policy coordination, standardised incident reporting, monitoring and early warning, operational protocols, trusted communication systems, and recovery and accountability mechanisms.
Take for example the humanitarian cluster system, developed after the inadequate international response to the 2004 Indian Ocean tsunami. This model designates lead agencies for specific sectors—shelter, health, telecommunications—creating clear accountability and coordination mechanisms that activate during emergencies. Each cluster maintains pre-positioned capacity, standardised operating procedures, and established relationships with governments and other responders. The system's strength lies not in preventing crises but in ensuring that when they occur, response mechanisms exist that do not need to be invented in the moment of chaos.
AI governance currently lacks equivalent structures. There is no designated international body responsible for coordinating responses to AI incidents, no standardised protocols for incident reporting across jurisdictions, and no pre-positioned technical capacity to diagnose and contain AI-related harms. My hope is that we won't wait for our own AI equivalent of the 2004 tsunami - a high-impact event whose harm is multiplied by a failure to prepare - before we begin to put these systems in place.
I'm hugely reassured that the Future Society and others have begun identifying this gap, calling for an AI emergency playbook that borrows tools from existing crisis response frameworks and adapts them to AI's unique characteristics. More voices need to join this area of work, and quickly, because as the proliferation and adoption of advanced AI systems expands, our time horizon for preparation shrinks.
This is perhaps the greatest challenge we face in our collective efforts to avoid AI x-risk - AI is moving fast...much faster than the pace at which humans are used to adapting. If we were to successfully build and implement such a playbook, it would need to account for AI's capacity to operate at speeds exceeding human response times, the difficulty of attributing AI-driven incidents, and the deep integration of AI systems across critical infrastructure.
The Speed and Attribution Problems
AI crises present challenges that differ in important respects from the traditional emergencies from which we can draw valuable lessons. First is the question of speed. Agentic AI systems can act with limited or no real-time human intervention, meaning AI-driven failures or attacks could escalate to crisis levels far faster than previously experienced in other domains. A traditional industrial accident unfolds over hours or days; an AI system malfunction or coordinated misuse could propagate across interconnected networks in minutes. This speed differential demands new approaches to detection and response that can match the pace of AI-driven events.
We also face a substantial attribution problem. In many cases, the first signs of an AI emergency would likely resemble a generic outage or security failure. We have seen in recent turbulent times how clouded the information space can become as society works to understand the nature and cause of events amid competing narratives (take, for example, the contested portrayals of the recent Iranian uprising). In the case of a major AI emergency, it is likely that only later, if at all, would it become clear that AI systems had played a material role. This diagnostic uncertainty complicates response efforts: different types of incidents—operational safety failures, malicious use, cascading technical failures—may require fundamentally different response protocols. The Future Society's advocacy for an AI playbook usefully distinguishes between operational safety and reliability incidents, security and privacy incidents, and malicious use incidents, each requiring distinct institutional responses.
Perhaps most concerning for those considering the potential scale of an AI emergency is the problem of interdependence. High market concentration among providers of frontier AI models and essential cloud infrastructure creates potent single points of failure. Flaws or outages in one dominant system could trigger simultaneous disruptions across many sectors—precisely the scenario the CrowdStrike incident foreshadowed. Response mechanisms must therefore account not only for AI-specific risks but also for their interaction with an already fragile global system characterised by climate volatility, geopolitical tension, and supply chain vulnerabilities.
Building Crisis Preparedness Capacity
What would meaningful AI crisis preparedness look like in practice? Several elements seem essential. First, governments should designate national AI emergency contact points—officials with the authority and expertise to coordinate responses to AI incidents, reachable around the clock. This mirrors established practice in the humanitarian space, where 24/7 emergency teams with designated points of contact facilitate rapid international communication during crises.
Second, emergency powers should be reviewed to determine whether they adequately cover AI infrastructure. Legal authority to intervene firmly in a crisis (e.g. mandated system shutdowns) may not exist or may be fragmented across agencies with competing mandates. The time to identify and address these gaps is before a crisis occurs, not during one...although the desire to develop sufficient powers must always, particularly now, be balanced against the risk of authoritarian capture.
Third, incident reporting mechanisms require standardisation and enforcement. The OECD's AI Incidents Monitor represents a promising start, but voluntary reporting will likely prove insufficient when incidents involve reputational or legal risks for developers. Effective reporting requires both incentives for disclosure and protections for those who report—the kind of whistleblower frameworks that civil society organisations have identified as crucial for AI governance.
Fourth, response protocols should be developed and exercised through regular drills and simulations. Emergency responders in other domains routinely test their systems under realistic conditions; AI governance should adopt similar practices. Such exercises would identify gaps in coordination, build relationships among responders, and create institutional memory that proves invaluable during actual crises.
Finally, international coordination mechanisms need strengthening. The United Nations offers a natural anchor for AI emergency preparedness, providing wider inclusion than alliance-based frameworks and adding legitimacy to extraordinary measures that might be required during crises...though I recognise that its legitimacy may be waning in the current climate. Having seen the role that neutral intermediary institutions are able to play in maintaining critical emergency channels and activities across conflict lines in Sudan, I believe that some form of multilateral approach is critical. Whether it's the UN or a still-to-be-created institution, there is a real need for a trusted, multilateral body to coordinate AI emergency response across major powers.
Concluding Thoughts
The measure of AI governance will ultimately be how we respond on our worst day. Prevention remains essential—we should continue investing in safety research, evaluation protocols, and responsible deployment practices. But prevention alone is insufficient. Maybe I'm wrong in my belief that some form of major AI emergency will take place in the next 20 years, but the history of complex systems suggests that substantial failures of some form are inevitable; the question for me is whether we possess the institutional capacity to contain and recover from them.
Currently, the world has no coherent plan for an AI emergency. Building one requires learning from domains that have confronted analogous challenges—humanitarian response, pandemic preparedness, nuclear safety—while adapting those lessons to AI's unique characteristics. The institutional infrastructure for effective crisis response cannot be built overnight; it must be developed, tested, and refined over time. The greatest responsibility of those involved in AI governance is to ensure sufficient action is taken to prevent catastrophic outcomes resulting from this incredible technological development. We may not be able to firewall ourselves from high-impact AI events - but if we are to live up to this responsibility, we'd better be prepared for them.
