Our community is not prepared for an AI crash. We're good at tracking new capability developments, but less so at tracking company financials. Currently, both OpenAI and Anthropic are losing $5 billion+ a year, while under threat of losing users to cheap LLMs.
A crash will weaken the labs. Funding-deprived and distracted, execs will struggle to counter coordinated efforts to restrict their reckless actions. Journalists will turn on tech darlings. Optimism will give way to mass outrage over all the wasted money and reckless harms.
You may not think a crash is likely. But if it happens, we can turn the tide.
Preparing for a crash is our best bet.[1] But our community is poorly positioned to respond. Core people positioned themselves inside institutions to advise on how AI might be made 'safe', under the assumption that models will rapidly become generally useful.
After a crash, this no longer works, for at least four reasons:
- The 'inside game' approach is already failing. To give examples: OpenAI ended its superalignment team, and Anthropic is releasing agents. The US is demolishing the AI Safety Institute, and its UK counterpart was renamed the AI Security Institute. The AI Safety Summit is now called the AI Action Summit. Need we go on?
- In the economic trough, skepticism of AI will reach its peak. People will dismiss and ridicule us for talking about risks of powerful AI. I'd say that promoting the “powerful AI” framing to an audience that contains power-hungry entrepreneurs and politicians never was a winning strategy. But it sure was believable when ChatGPT took off. Once OpenAI loses more money than it can recoup through VC rounds and its new compute provider goes bankrupt, the message just falls flat.
- Even if we change our messaging, it won't be enough to reach broad-based public agreement. To create lasting institutional reforms (that powerful tech lobbies cannot undermine), various civic groups that often oppose each other need to reach consensus. Unfortunately, AI Safety is rather insular, and lacks experienced bridgebuilders and facilitators who can listen to the concerns of different communities, and support coordinated action between them.
- To overhaul institutions that are failing us, more confrontational tactics like civil disobedience may be needed. Such actions are often seen as radical in their time (e.g. as civil rights marches were). The AI Safety community lacks the training and mindset to lead such actions, and may not even want to associate itself with people taking such actions. Conversely, many of the people taking such actions may not want to associate with AI Safety. The reasons are various: safety researchers and funders collaborated with the labs, while neglecting already harmed communities, and ignoring the value of religious worldviews.
As things stand, we’ll get caught flat-footed.
One way to prepare is to fund a counter-movement outside of AI Safety. I'm assisting experienced organisers in making plans. I hope to share details before a crash happens.[2]
[1]
Preparing for a warning shot is another option. This is dicey though, given that: (1) we don't know when or how it will happen; (2) a convincing enough warning shot implies that models are already gaining the capacity for huge impacts, making it even harder to prepare for the changed world that results; (3) in a world with such resourceful AI, the industry could still garner political and financial backing to continue developing supposedly safer versions; and (4) we should not rely on rational action following a (near-)catastrophe, given that even tech with little upside has continued to be developed after being traced back to possibly having caused a catastrophe (e.g. virus gain-of-function research).
Overall, I'd prefer not to wait until the point where lots of people might die before trying to restrict AI corporations. I think campaigning in an early period of industry weakness is a better moment than campaigning once the industry gains AI with autonomous capabilities. Maybe I'm missing other options (please share), but this is why I think preparing for a market crash is our best bet.
[2]
We're starting to see signs that investments cannot swell much further. E.g. OpenAI's latest VC round is led by a firm of questionable repute that must itself borrow money to invest at a staggering valuation of $300 billion. Also, OpenAI buys compute from CoreWeave, a debt-ridden company that recently had a disappointing IPO. I think we're in the late stage of the bubble, which is most likely to pop by 2027.
Thanks for the link to your thoughts on why you think a crash is likely. I think you underestimate the likelihood of the US government propping up AI companies. Just because they didn't invest money in the Stargate expansion doesn't mean they aren't reserving the option to do so later if necessary. It seems clear that Elon Musk is personally very invested in AI. Even aside from his personal involvement, the fact that China/DeepSeek is in the mix points towards even a normal government offering strong support to American companies in this race.
If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.
Thanks for reading, and for sharing your thoughts.
I disagree, but I want to be open to changing my mind if we see, e.g., the US military ramping up contracts, or the US government propping up AI companies with funding at the level of, say, the $280 billion CHIPS Act.
Just a heads up that this was posted on April Fool's day, but it seems like a serious post. You might want to add a quick disclaimer at the top for today :)
Haha, I was thinking about that. The timing was unfortunate.