Introduction
In contemporary American society, issues become politicized almost by default. Topics like public health, which were historically politically neutral, are now the subject of intense partisan animosity.
Fortunately, AI safety has largely evaded this trend so far. While there are vocal movements on each side of the issue, the major political parties do not have clear stances on it.
If AI safety becomes a partisan issue, there is potential for very bad outcomes. In the worst case, if one party becomes completely opposed to AI safety measures, there is little long-term hope of averting dangers from autonomous models or from models that enable bad actors. Thus, preventing AI safety from becoming politicized is an urgent priority.
In this post, I’ll begin by exploring how likely politicization is to result in disaster. I’ll then suggest some tentative measures to prevent it, drawing lessons from issues in the past that avoided substantial politicization and relevant findings from the political communication literature.
(Note: I am not an expert on politicization. I employed LLMs to identify and summarize relevant research and to craft recommendations.)
Main Takeaways
Here are the suggestions that seem to be the most important or underexplored:
- Aim for a neutral relationship with the AI ethics community, neither functioning as part of the same movement nor appearing opposed to it.
- Create a confidential incident database for the AI labs.
- Host deliberative expert forums.
- Get additional advice from experts on politicization.
How Serious Would Politicization Be?
The danger in politicization is that opposition to AI safety becomes part of one party’s ideology. However, how extreme would we expect this opposition to be, and to what extent is it likely to be implemented in legislation?
A relevant framework is Baumgartner & Jones’ punctuated-equilibrium model (from Agendas and Instability in American Politics). The basic idea is that an issue can be in one of two states:
- Closed Policy Subsystem: A small set of actors (agencies, committees, interest groups) dominates the issue, defining it in technocratic, non-controversial terms. In this phase, policy changes are small and incremental.
- Macropolitical Arena: The issue becomes a subject of partisan debate. Sudden, dramatic policy changes occur based on ideological justifications.
Some things that can move an issue into the macropolitical arena:
- Dramatic events (crises, scandals, disasters)
- Media reframing with a new moral or symbolic narrative
- Social movements or advocacy coalitions
We can see how this pattern played out for gun control. Prior to the 1980s, gun control was framed primarily as a crime prevention and public safety issue discussed by policymakers and experts. It was only after an NRA convention in 1977, the so-called “Revolt at Cincinnati”, in which the NRA shifted its focus from hunting, conservation, and marksmanship to defending the right to bear arms, that the issue became a deeply partisan, identity-driven political debate. This shift eventually produced more extreme laws on each side of the issue, such as permitless carry (with Vermont’s earlier law as an exception), stand-your-ground laws, gun sanctuary laws, red-flag laws, and expanded background checks.
Analogously, if AI safety becomes highly politicized, there is potential for extreme changes in policy, and in particular for disastrous weakening of AI safety regulations.
Returning to the gun control example, we should also note that in both parties there is widespread support for certain regulations, e.g., preventing people under domestic violence restraining orders from purchasing guns. Similarly, if AI safety becomes politicized, even the side generally opposed to it would almost certainly support certain AI safety measures.
Opinions vary on what measures would be required to survive the dawn of AGI; the more measures you think are necessary, the more likely it is that politicization would be a disaster.
Preventing Politicization
What can we do to make politicization less likely? I’ll offer some tentative suggestions. There are two sources I’ll be drawing from:
- Past issues that could have potentially become politicized but did not or largely did not, like ozone layer protection, nuclear arms control (as a domestic issue), technical safety regulation (aviation, nuclear power, civil engineering), and early UN charters
- Findings from the political communication literature
I’ll write in the second person, speaking to the EA community.
Framing and Presentation
Use Clear, Universal Language
Present the issue in straightforward, universal terms, appealing to shared values like concern for future generations. Insofar as you can, frame the issue as a technical problem; the success of ozone layer protection demonstrates that the “clear-cut moral issue” and “technical problem” frames can coexist and support each other.
Avoid Association with Existing Culture-War Issues
The media constantly attempts to fit new issues into existing rhetorical frames. In this case, frames to be wary of include “socialism vs. capitalism” (should we regulate tech companies or allow them to proceed unhindered?) and “Silicon Valley vs. Washington.”
The field of AI ethics (e.g., preventing model biases) is left-coded, and association with this field seems to me like one of the most plausible roads to near-term politicization. I think the best path forward is to walk a fine line: neither becoming part of the AI ethics movement nor appearing opposed to it or in competition with it for attention.
Encourage a Wide Range of Voices to Agree on a Framing
We want to prevent any group from becoming the symbolic “owner” of AI safety. To achieve this, we can:
- Encourage the release of cross-partisan expert statements and bipartisan policy convenings or advisory panels.
- Facilitate diverse reporting; when research centers or governments release AI safety-related information, provide early access/interviews to journalists from different political leanings.
Shared Reality
One factor that prevented politicization in the case of the ozone hole was clear, immediate evidence that there was a problem. How can we foster a similar state of affairs for AI safety?
Objective Measurements
Standardized safety evals (like HELM, METR’s evaluations, and ARC tests) provide objective data that can inform discussions.
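To make “objective data” concrete, here is a minimal Python sketch of how scores from standardized evals could be pooled into a simple cross-model summary that all parties to a discussion can point to. The model names, benchmark names, and scores below are entirely made up for illustration; real numbers would come from published eval reports.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class EvalResult:
    model: str       # which model was evaluated
    benchmark: str   # e.g., a HELM scenario or a METR task suite
    score: float     # normalized to [0, 1]; higher = better on the safety metric


# Hypothetical results for illustration only.
results = [
    EvalResult("model-a", "refusal-robustness", 0.82),
    EvalResult("model-a", "autonomy-tasks", 0.31),
    EvalResult("model-b", "refusal-robustness", 0.74),
    EvalResult("model-b", "autonomy-tasks", 0.45),
]


def summarize(results: list[EvalResult]) -> dict[str, float]:
    """Average each model's normalized scores across benchmarks."""
    models = sorted({r.model for r in results})
    return {m: mean(r.score for r in results if r.model == m) for m in models}


for model, avg in summarize(results).items():
    print(f"{model}: mean normalized safety score {avg:.2f}")
```

Even a summary this simple gives journalists and policymakers a shared, checkable reference point rather than competing anecdotes.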
Disseminating Information Among Labs
In the context of preventing politicization, sharing safety information between labs is valuable because AI safety experts who are consulted will have a better sense of the big picture, and so there’s less opportunity for different labs to support different partisan spins.
To accomplish this, one approach would be to create a shared incident database similar to the Aviation Safety Reporting System (ASRS). (One such database exists: https://incidentdatabase.ai/, but for this purpose, it would be useful to have a confidential database with incidents reported by the labs themselves.)
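As a rough illustration of what a single entry in such a confidential database might look like, here is a Python sketch of a possible record format, loosely inspired by ASRS-style voluntary reports. The field names and the example entry are hypothetical, not a proposed standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class IncidentReport:
    """One confidential incident record, loosely modeled on ASRS-style
    voluntary reports. Field names are illustrative only."""
    reported_on: date
    reporting_lab: str        # kept confidential / pseudonymized by the database operator
    model_family: str         # coarse description rather than a specific checkpoint
    category: str             # e.g., "jailbreak", "deceptive behavior", "misuse attempt"
    severity: int             # e.g., 1 (minor) to 5 (serious harm)
    description: str          # free-text narrative, scrubbed of identifying details
    mitigations: Optional[str] = None       # what the lab did in response
    tags: list[str] = field(default_factory=list)


# A made-up example entry:
example = IncidentReport(
    reported_on=date(2025, 1, 15),
    reporting_lab="lab-redacted",
    model_family="frontier-chat-model",
    category="jailbreak",
    severity=2,
    description="User elicited restricted content via a role-play prompt.",
    mitigations="Refusal training data updated.",
    tags=["prompt-injection"],
)
```

The key design choice, as with ASRS, is that confidentiality and pseudonymization make labs willing to report at all, while the aggregated picture keeps consulted experts working from the same facts.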
Disseminating Information Among Politicians
If politicians are informed of clear-cut evidence of the current state of affairs, it’s harder for partisan spins to arise. Some concrete suggestions:
- Hold annual bipartisan briefings.
- Pass legislation that requires companies to report safety incidents to the government.
Disseminating Information Among the Public
Finally, how can we directly inform the public about the current state of affairs?
- Convey in simple terms existing cases where AI has exhibited misalignment.
- Although the ability of current models to enable bad actors is also a serious issue that would ideally be communicated to the public, doing so risks alerting those actors to the possibility, so it should probably be avoided.
- Develop independent organizations that provide updates on AI safety that can be picked up by mainstream news sources, e.g., https://safe.ai/newsletter
Efforts by Insiders
When solutions and standards can be crafted by experts before an issue enters the political arena, there is a better chance of avoiding politicization. In the cases of nuclear and other technical safety regulation, this was accomplished through expert communities; in the case of ozone layer protection, the industry cooperated in crafting a solution.
Most of the leading AI labs have safety teams, but there are further steps we can take to involve the industry in working towards positive outcomes:
Deliberative Expert Forums
Deliberative expert forums gather experts and often lay citizens to discuss a complex issue before it becomes the subject of partisan debate. These have several positive outcomes:
- Shaping the framing of the problem
- Providing legitimacy for subsequent policy
- Incubating networks of trusted experts who later move into policy roles
At least one of these has already occurred: https://deliberativecitizenship.org/deliberative-forum-on-artificial-intelligence/. More research would be useful to determine how much of an impact these are likely to have.
Self-Regulation
We can convene the different AI labs to try to establish shared standards. This has occurred in several cases, e.g., the UK AI Safety Summit in 2023, the AI Seoul Summit in 2024, and the Paris AI Action Summit in early 2025, with varying levels of success in developing safety frameworks and risk-threshold agreements.
Conclusion
As I mentioned at the beginning of this post, I am not an expert on politicization, and all of these suggestions are tentative. I think there’s much more potential for productive work in this area, and I’m hopeful that the EA community can draw on experts in political communication and politicization to skillfully navigate this domain.
