[Edit: I've updated this post on October 24 in response to some feedback]

NIMBYs don’t call themselves NIMBYs. They call themselves affordable housing advocates or community representatives or environmental campaigners. They’re usually not against building houses. They just want to make sure that those houses are affordable, attractive to existing residents, and don’t destroy habitat for birds and stuff.

Who can argue with that? If, ultimately, those demands stop houses from being built entirely, well, that’s because developers couldn’t find a way to build them without hurting poor people, local communities, or birds and stuff.

This is called politics and it’s powerful. The most effective anti-housebuilding organisation in the UK doesn’t call itself Pause Housebuilding. It calls itself the Campaign to Protect Rural England, because English people love rural England. CPRE campaigns in the 1940s helped shape England’s planning system. As a result, permission to build houses is only granted when it’s in the “public interest”; in practice it is given infrequently and often with onerous conditions.[1]

Oh, you want to build houses? Why do you hate sheep and trees so much?

The AI pause folks could learn from their success. Instead of campaigning for a total halt to AI development, they could push for strict regulations that aim to ensure new AI systems won’t harm people (or birds and stuff).

This approach has two advantages. First, it’s more politically palatable than a heavy-handed pause. And second, it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.

I think NIMBYs happen to be wrong about the cost-benefit calculation of strong regulation, but AI safety people are right about theirs: advanced AI systems pose grave threats, and we don’t know how to mitigate them.

Maybe ask governments for an equivalent system for new AI models: require companies to prove to regulators, much as developers must prove to planners, that their models are safe. Ask for the following (a toy sketch of such a release gate appears after the list):

  • Independent safety audits
  • Ethics reviews
  • Economic analyses
  • Public reports on risk analysis and mitigation measures
  • Compensation mechanisms for people whose livelihoods are disrupted by automation
  • And a bunch of other measures that plausibly limit the risks from AI
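As a purely illustrative sketch, and not a description of any existing or proposed regulatory regime, the logic of such a gate is simply that release stays blocked until every requirement is demonstrably met. All names below are hypothetical:

```python
from dataclasses import dataclass, fields

@dataclass
class ReleaseChecklist:
    """Hypothetical pre-release requirements, mirroring the list above."""
    independent_safety_audit_passed: bool
    ethics_review_passed: bool
    economic_analysis_published: bool
    risk_report_published: bool
    compensation_mechanism_in_place: bool

def may_release(checklist: ReleaseChecklist) -> bool:
    """Burden of proof on the developer: every requirement must be satisfied."""
    return all(getattr(checklist, f.name) for f in fields(checklist))

# One unmet requirement is enough to block release.
print(may_release(ReleaseChecklist(True, True, True, True, False)))  # False
```

The point of the sketch is only that the default answer is "no": approval is the conjunction of all the requirements, not a trade-off among them.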

In practice, these requirements might be hard to meet. But, considering the potential harms and the meaningful chance that something goes wrong, they should be. If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove it’s safe, they shouldn’t release it.

This is not about pausing AI.

I don’t know anybody who thinks AI systems have zero upside. In fact, the same people worried about the risks are often excited about the potential for advanced AI systems to solve thorny coordination problems, liberate billions from mindless toil, achieve wonderful breakthroughs in medicine, and generally advance human flourishing.

But they’d like companies to prove their systems are safe before they release them into the world, or even train them at all. To prove that they’re not going to cause harm by, for example, hurting people, disrupting democratic institutions, or wresting control of important sociopolitical decisions from human hands.

Who can argue with that?

[Edit: Peter McIntyre has pointed out that Ezra Klein made a version of this argument on the 80K podcast. So I've been scooped - but at least I'm in good company!]

  1. ^

    “Joshua Carson, head of policy at the consultancy Blackstock, said: ‘The notion of developers “sitting on planning permissions” has been taken out of context. It takes a considerable length of time to agree the provision of new infrastructure on strategic sites for housing and extensive negotiation with councils to discharge planning conditions before homes can be built.’” (Kollewe 2021)

Comments

It seems that the successful opposition to previous technologies was indeed explicitly against that technology, so I'm not sure the softening of the message you suggest is actually a good idea. @charlieh943's recent case study into GM crops highlighted some of this (https://forum.effectivealtruism.org/posts/6jxrzk99eEjsBxoMA/go-mobilize-lessons-from-gm-protests-for-pausing-ai - he suggests emphasising the injustice of the technology might be good); anti-SRM activists have been explicitly against SRM (https://www.saamicouncil.net/news-archive/support-the-indigenous-voices-call-on-harvard-to-shut-down-the-scopex-project), anti-nuclear activists are explicitly against nuclear energy, and so on. Essentially, I'm just unconvinced that 'it's bad politics' is necessarily supported by the case studies most relevant to AI.

Nonetheless, I think there are useful points here about what concrete demands could look like, who useful allies could be, and what more diversified tactics could look like. Certainly, a call for a moratorium is not necessarily the only thing that could be useful in pushing towards a pause. Also, I think you make a point that a 'pause' might not be the best message for people to rally behind, although I reject the opposition. I think, in a similar way to @charlieh943, that emphasising injustice may be one good message that can be rallied around. I also think a more general 'this technology is dangerous and the companies making it are dangerous' may be a useful rallying message, which I have argued for in the past: https://forum.effectivealtruism.org/posts/Q4rg6vwbtPxXW6ECj/we-are-fighting-a-shared-battle-a-call-for-a-different

Gideon - nice comment. I agree that it's quite tricky to identify specific phrases, messages, narratives, or policies that most people would rally around. 

A big challenge is that in our hyper-partisan, polarized social media world, even apparently neutral-sounding concepts such as 'injustice' or 'freedom' get coded as left, or right, respectively. 

So, the more generic message 'this technology is dangerous', or 'this tech could hurt our kids', might have broader appeal. (Although, even a mention of kids might get coded as leaning conservative, given the family values thing.)

This feels pretty connected to Evan’s RSPs are pauses done right

Except that RSPs don't concern themselves with long-term economic, social, and political implications. The ethos of AGI labs is to assume, for the most part, that these things will sort themselves out, and that they only need to check technical and immediate implications, i.e., do "evals".

The public should push for "long-term evals", or even mandatory innovation in political and economic systems coupled with the progress in AI models.

The current form of capitalism is simply unprepared for autonomous agents, no amount of RLHF and "evals" will fix this.

I agree that "regulation" may be easier to advocate for than a "pause". Our results (see below) point in that direction, though the difference is perhaps not as big as one might imagine (and less stark than in the "NIMBY" case), and I would expect it to depend on the details of the regulation and the pause and how they are presented.

  1. Pause on AI Research. Support for a pause on AI research outstrips opposition. We estimate that 51% of the population would support, 25% would oppose, 20% remain neutral, and 4% don’t know (compared to 58-61% support and 19-23% opposition across different framings in YouGov’s polls). Hence, support is robust across different framings and surveys. The slightly lower level of support in our survey may be explained by our somewhat more neutral framing.
  2. Should AI be regulated (akin to the FDA)? Many more people think AI should be regulated than think it should not be. We estimate that 70% believe Yes, 21% believe No, and 9% don’t know. (A quick net-support calculation from these point estimates is sketched below.)
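A rough calculation, assuming only the point estimates quoted above, of the net-support margin (support minus opposition) implied by each result:

```python
# Net support = % support - % oppose, using the point estimates quoted above.
pause = {"support": 51, "oppose": 25, "neutral": 20, "dont_know": 4}
regulation = {"support": 70, "oppose": 21, "dont_know": 9}

print("Pause net support:", pause["support"] - pause["oppose"], "points")                 # 26
print("Regulation net support:", regulation["support"] - regulation["oppose"], "points")  # 49
```

On these point estimates, regulation's margin (49 points) is clearly larger than the pause's (26 points), though both are positive.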

Hi Stephen, thank you for this piece.

I wonder how relevant this case study is: housing doesn't have significant geopolitical drivers, and construction companies are much less powerful than AI firms. Pushing the Overton window towards onerous housing restrictions strikes me as significantly more tractable than shifting it towards a global moratorium on AI development, as PauseAI people want. A less tractable issue might require more radical messaging.

If we look at cases which I think are closer analogues for AI protests (e.g. climate change etc.), protests often used maximalist rhetoric (e.g. Extinction Rebellion calling for a net-zero target of 2025 in the UK) which brought more moderate policies (e.g. 2050 net-zero target) into the mainstream. 

In short, I don't think we should generalise from one issue (NIMBYs), which is different in many ways from AI, to what might look like good politics for AI safety people. 

Great explanation, but I’m not convinced that this works.

Creating a lot of bureaucratic hassle is a great way to ensure AGI isn’t built by the most responsible companies, but either by actors who don’t care about a law or who are in countries where the regulation hasn’t passed.

You can say exactly the same about Pause AI.

I guess ideally we have both: PauseAI and those simply pushing for strict regulations. PauseAI, being at the extreme, functions as an outrider that can push the Overton Window.

James - just to recalibrate -- I don't see 'Pause AI' as the extreme.  Given the very high level of public concern about AI risk, plenty of normal people would see 'Stop AI by any means necessary' as quite reasonable, whereas some might view it as slightly too extreme. But an indefinite pause seems like a very moderate, centrist position, all things considered.

Oh I agree. But in this post I think PauseAI was couched as being at the extreme end? 

[edit: confused about the downvotes this has got instead of the disagree-votes]

I like that EA/AI Safety has come round to recognising that the development of AGI is inherently political, and I think posts like this are a good part of this trend.

I also like that this post is written in a clear, non-jargony way. Not every forum post has to be super technical!

A bit concerned that spreading knowledge of the Town and Country Planning Act (1947) might be a social infohazard :P

The second half of the article seems to me like it's edging towards the narcissism of small differences. It seems that this is more about how to frame messaging or what specific policy choice is right. It's at least in the 'mistake theory' bucket of politics, but I wouldn't be surprised if some PauseAI advocates (or perhaps other anti-Pause groups) are beginning to think that AI might become more of a 'conflict theory' zone.

My main disagreement is around "The AI pause folks could learn from this approach." I really think the field of AI Safety/AI Governance has a lot to learn from the AI pause folks. For example, Holly Elmore is putting skin in the game, and honestly acting more credibly from my point of view than someone like Dario Amodei. People at frontier labs might have been keeping their x-risk estimates quiet a few years ago, but I don't like the fact that we know that Sam and Dario both have non-trivial estimates of doom (in relatively short timeframes, I'd wager) and didn't mention this to the US Senate under oath. The very simple "if you think it really could kill everyone, don't build it" is going to steamroll a lot of arguments from x-risk-concerned labs, imho.

JWS -- I appreciate your point that when Sam Altman and Dario Amodei gave testimony under oath to the US Senate, and they failed to honestly reveal their estimates of the likelihood that AGI/ASI could kill everyone, that was arguably one of the most egregious acts of perjury (by omission of crucial information) in US history.

It's a major reason why I simply don't trust them on the AI safety issue. 

A bit concerned that spreading knowledge of the Town and Country Planning Act (1947) might be a social infohazard :P

I think you might have been right in the 1950s, but by now the cat is firmly out of the bag on this one.

Nice post, Stephen!

From my perspective, talking about "a pause" can still be helpful because I think we should be aiming to use a significant fraction of the 1 billion years we have of habitable Earth to do AI safety research (even just 0.1 % would be 1 million years). I also tend to agree with David Thorstad that extinction risks are greatly exaggerated, and that they can be mitigated without advanced AI, such that there is no rush to develop it. Of course, I simultaneously agree advanced AI is crucial for a flourishing longterm future! One can reasonably argue a long pause like the one I am suggesting is utterly intractable, but I am not so confident. I have barely thought about these matters, but I liked the post Muddling Along Is More Likely Than Dystopia:

Summary: There are historical precedents where bans or crushing regulations stop the progress of technology in one industry, while progress in the rest of society continues. This is a plausible future for AI.

Vasco - A pause of a million years to do AI safety research, before developing ASI, sounds like lunacy at first glance -- except I think that actually, it's totally reasonable on the cosmic time scale you mentioned.

Development of bad ASI could hurt not just humanity, but intelligent life throughout the galaxy and the local cluster. This imposes a very heavy moral duty to get AI right. 

If other intelligent aliens could vote on how long our AI Pause should be, they might very well vote for an extremely long, very risk-averse pause. And I think it's worth trying to incorporate their likely preferences into whatever decisions we make.

Stephen - this all sounds reasonable, if the actual goal of 'Pause AI' (which I strongly support) was just to shape policy and regulation. 

But, IMHO, that's not the actual goal. Policy and regulation are weak, slow, noisy, often ineffective ways to de-fang dangerous technologies. 

From my perspective, some key implicit benefits of Pause AI are (1) raising public awareness about the extinction risks from AI, and (2) promoting public stigmatization of the AI industry, to undermine its funding, talent pool, status, prestige, and public support. As I argued here, moral stigmatization can be much stronger, faster, and more effective than formal regulation.

(Note that a lot of Pause AI organizers and supporters might disagree with me about these benefits, and might argue that Pause AI really is all about getting better formal regulation. That's fine, and I respect their view. But what I see on social media platforms such as Twitter/X is that Pause AI is succeeding much better at the consciousness-raising and moral-stigmatizing than it is at policy updating -- which in my opinion is actually a good thing.)

I think the only actually feasible way to slow down the AI arms race is through global moral stigmatization of the AI industry. No amount of regulation will do it. No amount of clever policy analysis will do it. No amount of 'technical AI alignment work' will help. 

The EA movement needs to be crystal clear about the extinction risks the AI industry is imposing on humanity, without our consent. The time for playing nice with the AI industry is over. We need to call them out as evil, and help the public understand why their reckless hubris could end us all.

Politics is really important, so thank you for recognizing that and adding to discussion about Pause.

But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than simply policy advocacy. Which, I don't know if I agree with, but it's an argument that you can make. But then you shift toward advocating for regulation rather than a pause. Which is also just policy advocacy, right? And I don't understand why you'd expect it to have better politics than a pause. Your point about needing companies to prove they are safe is pretty much the same point that Holly Elmore has been making, and I don't know why it applies better to regulation than a pause.

If I could give this post 20 upvotes I would. 

Being relatively new to the EA community, this for me is the single biggest area of opportunity to make the community more impactful.

Communication within the EA community (and within the AI Safety community) is wonderful, clear, crisp, logical, calm, proportional. If only the rest of the world could communicate like that, how many problems we'd solve.

But unfortunately, many, maybe even most people, react to ideas emotionally, their gut reaction outweighing or even preventing any calm, logical analysis. 

And it feels like a lot of people see EAs as "cold and calculating" because of the way we communicate - with numbers and facts and rationale.

There is a whole science of communication (in which I'm far from an expert) which looks at how to make your message stick, how to use storytelling to build on humans' natural desire to hear stories, how to use emotion-laden words and images instead of numbers, and so on. 

For example: thousands of articles were written about the tragic and perilous way migrants would try to cross the Mediterranean to get to Europe. We all knew the facts. But few people acted. Then one photograph of a small boy who washed up dead on the beach almost single-handedly engaged millions of people to realise that this was inhumane, that we can't let this go on. (in the end, it's still going on). The photo was horrible and tragic, but was one of thousands of similar tragedies - yet this photo did more than all the numbers. 

We could ask ourselves what kind of images might represent the dangers of AI in a similar emotional way. In 2001: A Space Odyssey, Stanley Kubrick achieved something like this. He captured the human experience of utter impotence to do anything against a very powerful AI. It was just one person, but we empathised with that person, just like we empathised with the tragic boy or with his parents and family.

What you're describing is how others have used this form of communication - very likely fine-tuned in focus groups - to find out how to make their message as impactful as possible, as emotional as possible. 

EAs need to learn how to do this more. We need to separate the calm, logical discussion about what is the best course of action from the challenge of making our communication effective in bringing that about. There are some groups who do this quite well, but we are still amateurs compared to the (often bad guys) pushing alternative viewpoints using sophisticated psychology and analysis to fine-tune their messaging.

(full disclosure: this is part of what I'm studying for the project I'm doing for the BlueDot AI Safety course)

Sadly I couldn't respond to this post two weeks ago but here I go.

First of all, I'm not sure I understand your position, but I think you believe that if we push for other types of regulation, then either:

  • that would be enough to keep us safe from dangerous AI, or
  • we'll be able to slow down AI development enough to develop measures to keep us safe from dangerous AI

I'm confused between the two because you write

Advanced AI systems pose grave threats and we don’t know how to mitigate them.

That I understand as you believing we don't know those measures right now, but you also write

If a company developing an unprecedentedly large AI model with surprising capabilities can’t prove it’s safe, they shouldn’t release it.

And if we agree there's no way to prove it, then you're pretty much talking about a pause.

 

If your point is the first one, I would disagree with it, and I think even OpenAI would, given that they say we don't yet know how to align a superintelligence.

If your point is the second one, then my problem is that I don't think it would give us close to the same amount of time as a pause. Also, it could make most people believe that risks from AI, including x-risks, are now safeguarded against, and we could lose support because of that. And all of that would lead to more money in the industry, which could in turn lead to regulatory capture.

 

All of that is also related to

it’s closer to what those of us concerned about AI safety ideally want: not an end to progress, but progress that is safe and advances human flourishing.

Which I'm not sure is true. Of course, this depends a lot on how much you think current work on alignment is close to being enough to make us safe. Are we running parallel enough to the precipice that we'll be able to steer away in time and reach a utopia? Or are we heading towards it, and will we have some brief progress before falling? Would that be closer to the ideal? Anyway, the ideal is the enemy of the good, or the truth, or something.

 

Lastly, having argued why a pause would be a lot better than other regulations, I'll give you that it would of course be harder to get, i.e. less "politically palatable", which is arguably the main point of the post. But I don't know by how many orders of magnitude. With a pause you win over people who think safer AI isn't enough, or that it's just marketing from the biggest companies and nations. And also, talking about marketing, I think pausing AI is a slogan that can draw a lot more attention, which I think is good given that most people seem to want regulation.
