[Edit: I've updated this post on October 24 in response to some feedback]
NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff.
Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that’s because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff.
This is called politics and it’s powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people love rural England. CPRE campaigns in the 1940s helped shape England’s planning system. As a result, permission to build houses is only granted when it’s in the “public interest”; in practice it is given infrequently and often with onerous conditions.[1]
The AI pause folks could learn from their success. Instead of campaigning for a total halt to AI development, they could push for strict regulations that aim to ensure new AI systems won’t harm people (or birds and stuff).
This approach has two advantages. First, it’s more politically palatable than a heavy-handed pause. And second, it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.
I think NIMBYs happen to be wrong about the cost-benefit calculation of strict regulation, but AI safety people are right about theirs: advanced AI systems pose grave threats, and we don’t know how to mitigate them.
So why not ask governments for an equivalent system for new AI models? Require companies to prove to planners that their models are safe. Ask for:
- Independent safety audits
- Ethics reviews
- Economic analyses
- Public reports on risk analysis and mitigation measures
- Compensation mechanisms for people whose livelihoods are disrupted by automation
- And a bunch of other measures that could plausibly limit the risks from AI
In practice, these requirements might be hard to meet. But given the potential harms and the meaningful chance that something goes wrong, they should be. If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove it’s safe, it shouldn’t release it.
This is not about pausing AI.
I don’t know anybody who thinks AI systems have zero upside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.
But they’d like companies to prove their systems are safe before they release them into the world, or even train them at all. To prove that they’re not going to cause harm by, for example, hurting people, disrupting democratic institutions, or wresting control of important sociopolitical decisions from human hands.
Who can argue with that?
[Edit: Peter McIntyre has pointed out that Ezra Klein made a version of this argument on the 80K podcast. So I've been scooped - but at least I'm in good company!]
[1] “Joshua Carson, head of policy at the consultancy Blackstock, said: ‘The notion of developers “sitting on planning permissions” has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built.’” (Kollewe 2021)
Stephen - this would all sound reasonable if the actual goal of 'Pause AI' (which I strongly support) were just to shape policy and regulation.
But, IMHO, that's not the actual goal. Policy and regulation are weak, slow, noisy, often ineffective ways to de-fang dangerous technologies.
From my perspective, some key implicit benefits of Pause AI are (1) raising public awareness of the extinction risks from AI, and (2) promoting public stigmatization of the AI industry, to undermine its funding, talent pool, status, prestige, and public support. As I argued here, moral stigmatization can be much stronger, faster, and more effective than formal regulation.
(Note that a lot of Pause AI organizers and supporters might disagree with me about these benefits, and might argue that Pause AI really is all about getting better formal regulation. That's fine, and I respect their view. But what I see on social media platforms such as Twitter/X is that Pause AI is succeeding much better at the consciousness-raising and moral-stigmatizing than it is at policy updating -- which in my opinion is actually a good thing.)
I think the only actually feasible way to slow down the AI arms race is through global moral stigmatization of the AI industry. No amount of regulation will do it. No amount of clever policy analysis will do it. No amount of 'technical AI alignment work' will help.
The EA movement needs to be crystal clear about the extinction risks the AI industry is imposing on humanity, without our consent. The time for playing nice with the AI industry is over. We need to call them out as evil, and help the public understand why their reckless hubris could end us all.