Introduced to the Senate this past June, the Global Catastrophic Risk Management Act details a Federal plan for addressing existential threats. Global catastrophic risk is defined here as “the risk of events or incidents consequential enough to significantly harm, set back, or destroy human civilization at the global scale.” Here is a brief summary of what the bill entails.


Within 90 days of the bill being enacted, the President establishes an Interagency Committee on Global Catastrophic Risk. The Committee consists of several members of the Executive Branch, and is co-chaired by a senior representative of the President and the Deputy Administrator of the Federal Emergency Management Agency for Resilience.


The purpose of the Committee is to submit a report on global catastrophic risk to Congress within a year of enactment. The report includes a comprehensive list of potential threats over the next 30 years, technical and lay descriptions of each threat, cumulative and individual likelihoods according to expert estimates, and expert-informed analyses of the most likely threats. It also includes a review of the effectiveness of early warning systems, a forecast of whether and why these risks are likely to increase or decrease over the next 30 years, and an explanation of any limiting factors in the assessment. Finally, the report includes proposals for improving assessment and recommendations for legislative action.


Within 180 days of this report being submitted to Congress, the President submits a follow-up report on the ability of the government to maintain function in the event of a global catastrophe. This will assess the government's plans for maintaining essential functions during catastrophes and for appointing successors after an official's death, among other needs. The report will include a budget proposal and recommendations for legislative action if appropriate.


In addition to these two reports, the President and the Committee will also develop a strategy for providing for the basic needs of Americans in the event of a global catastrophe. This strategy assumes that multiple levels of critical infrastructure are incapacitated, the military is preoccupied with an armed or cyber conflict, and State and local governments are unable to provide for these needs alone. The strategy also works to enhance individual resilience by improving awareness and domestic supply chains, and includes efforts to seek humanitarian aid from international allies.


Within 90 days of this strategy being issued, the President produces a plan to enact it. This includes specific actions the President will take to ensure that the government is capable of implementing the strategy, that the public is educated on these matters, that strategic objectives are met, and that foreign adversaries are not able to undermine the plan. Within one year of this, the Department of Homeland Security leads a national exercise in executing the plan, involving State, local, and Tribal governments, information sharing centers, and owners of critical infrastructure. Within one year of the exercise, the President submits a review of the exercise to Congress.

 

The act is sponsored by Senator Rob Portman of Ohio, and is cosponsored by Senators Gary Peters of Michigan, Maggie Hassan of New Hampshire, and John Cornyn of Texas. Its current status is “ordered reported,” meaning the committee reviewing the bill has sent it to the full Senate for consideration.


Comments



The GCRMA was included in the final National Defense Authorization Act for FY2023, which became law in December 2022. The text is altered a little from the draft version, but can be read here: https://www.congress.gov/117/bills/hr7776/BILLS-117hr7776enr.pdf#page=1290  I have blogged about it here: https://adaptresearchwriting.com/2023/02/05/us-takes-action-to-avert-human-existential-catastrophe-the-global-catastrophic-risk-management-act-2022/ Not sure why there isn't much discussion about it. It seems like something every country could replicate, and then the Chairs of each nation's risk assessment committee could meet to coordinate. 

Thanks for the update! Also surprised I haven't seen more discussion.

Thanks. Great post, btw. May I translate a part of it? And why don't you post it here on the EA Forum?

Yes, feel free to translate whatever you like. And ahh, I'm a bit selective about what I post on here. It's just the way I've decided to curate things. I don't mind people linking to it though. 

That's awesome! Thanks for the update!

Does anyone have any idea of how politically feasible it is for the bill to pass?

It's hard to judge whether this bill will go anywhere (I hope it does!); it seems to have gotten very little press coverage.

If we can't get a strong bipartisan consensus on reducing GCRs, then our governance system is broken.

From what I can tell, support is bipartisan but small. It was sponsored by a Republican, Senator Rob Portman from Ohio, and is cosponsored by two Democrats and one other Republican.
