PauseAI largely seeks to emulate existing social movements (like the climate justice movement), but it takes an essentially cargo-cult approach to how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so actions imitating the climate justice movement are extremely premature. Blockading an AI company's office while talking about existential risk from artificial general intelligence won't convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.
Due to this, many in PauseAI are trying to do coalition politics, bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists). But the space of possible AI policies is high-dimensional, so any such coalition, done with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that might even prove entirely counter-productive (by further entrenching centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).
¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta
Hi Marcus, I'm in the mood for a bit of debate, so I'm going to take a stab at responding to all four of your points :)
LMK what you think!
1. This is an argument against a pause policy, not against the PauseAI org or a pause movement. I think discerning funders need to see the differences, especially if you are thinking on the margin.
2. "Pausing AI development for any meaningful amount of time is incredibly unlikely to occur." < I think anything other than AGI in less than 10 years is unlikely to occur, but that isn't a good argument to not work on Safety. Scale a... (read more)
The provided source doesn't show PauseAI-affiliated people calling Sam Altman and Dario Amodei evil.
The fact that you can't say more is part of the problem. There needs to be an open global discussion of an AGI Moratorium at the highest levels of policymaking, government, society and industry.
I agree with many of the things other people have already mentioned. However, I want to add one additional argument against PauseAI, which I believe is quite important and worth emphasizing clearly:
In general, hastening technological progress tends to be a good thing. For example, if a cure for cancer were to arrive in 5 years instead of 15 years, that would be very good. The earlier arrival of the cure would save many lives and prevent a lot of suffering for people who would otherwise endure unnecessary pain or death during those additional 10 years. The difference in timing matters because every year of delay means avoidable harm continues to occur.
I believe this same principle applies to AI, as I expect its main effects will likely be overwhelmingly positive. AI seems likely to accelerate economic growth, accelerate technological progress, and significantly improve health and well-being for billions of people. These outcomes are all very desirable, and I would strongly prefer for them to arrive sooner rather than later. Delaying these benefits unnecessarily means forgoing better lives, better health, and better opportunities for many people in the interim.
Of course, there are exceptions to this principle, as it’s not always the case that hastening technology is beneficial. Sometimes it is indeed wiser to delay the deployment of a new technology if the delay would substantially increase its safety or reduce risks. I’m not dogmatic about hastening technology and I recognize there are legitimate trade-offs here. However, in the case of AI, I am simply not convinced that delaying its development and deployment is justified on current margins.
To make this concrete, let’s say that delaying AI development by 5 years would reduce existential risk by only 0.001 percentage points. I would not support such a trade-off. From the perspective of any moral framework that incorporates even a slight discounting of future consumption and well-being, such a delay would be highly undesirable. There are pragmatic reasons to include time discounting in a moral framework: the future is inherently uncertain, and the farther out we try to forecast, the less predictable and reliable our expectations about the future become. If we can bring about something very good sooner, without significant costs, we should almost always do so rather than being indifferent to when it happens.
However, if the situation were different—if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger. In such a scenario, I would seriously consider supporting PauseAI and might even advocate for it loudly. That said, I find this kind of large reduction in existential risk from a delay in AI development to be implausible, partly for the reasons others in this thread have already outlined.
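To make the structure of this trade-off explicit, here is a minimal back-of-the-envelope sketch; the discount rate and background risk numbers are illustrative assumptions of mine, not figures from the comment above.

$$
\underbrace{(1 - p + \Delta p)\,(1-\delta)^{5}\,V}_{\text{delay 5 years}} \;>\; \underbrace{(1 - p)\,V}_{\text{no delay}}
\;\Longleftrightarrow\;
\Delta p \;>\; (1-p)\left[(1-\delta)^{-5} - 1\right] \;\approx\; 5\,\delta\,(1-p),
$$

where $V$ is the value of a good post-AGI future, $p$ the existential risk without a pause, $\Delta p$ the risk reduction bought by a 5-year delay, and $\delta$ a small annual discount rate. With, say, $\delta = 1\%$ and $p = 20\%$, the delay breaks even only if it buys roughly a 4-percentage-point reduction in risk: 0.001 points falls far short of that bar, while 10 points clears it comfortably.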
This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many, many orders of magnitude worse than delaying progress, as it affects many, many orders of magnitude more (potential) people.
if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger
This is the crux. I think it would reduce existential risk by at least 10% (probably a lot more). And 5 years would just be a start - obviously any Pause should (and in practice will) only be lifted conditionally. I take it your AGI timelines are relatively short? And I don't think your reasons for expecting the default outcome from AGI to be good are sound (as you even allude to yourself).
I do in fact believe that delaying AI by 5 years would reduce existential risk by something like 10 percentage points.
Probably this thread isn't the best place to hash it out, however.
I wrote some criticism in this comment. Mainly, I argue that
(1) A pause could be undesirable. A pause could be net-negative in expectation (with high variance depending on implementation specifics), and PauseAI should take this concern more seriously.
(2) Fighting doesn't necessarily bring you closer to winning. PauseAI's approach *could* be counterproductive even for the aim of achieving a pause, whether or not a pause is desirable. From my comment:
Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism of your fighting style seriously should be paramount.
What is the ultimate counterfactual here? I'd argue it's extinction from AGI/ASI in the next 5-10 years with high probability. Better to fight this and lose than just roll over and die.
To be clear - I'm open to more scouting being done concurrently (and open to changing my mind), but imo none of these answers are convincing or reassuring.
What PauseAI wants to ban or "pause" seems fairly weakly defined and not necessarily tied to any actual threat level. Their stated goals focus on banning the scaling of LLM architectures with known limitations that make 'takeover' scenarios unlikely (limited context windows, lack of recursive self-updating independently of training, dependence on massive datacentres to run) and known problems (inscrutability and an obvious lack of consistent "alignment") that remain problems with smaller models if you try to use them for anything sensitive. It's not clear what "more powerful than GPT-4" actually means. Nor is it clear what level of understanding would justify un-pausing, or how that understanding would be obtained without any models to study.
Banning LLMs above a certain scale might even have the perverse effect of encouraging companies to optimize performance, or to reinvent learning in other, riskier ways. It might set back our ability to understand extremely powerful LLMs when someone develops them outside a US/EU legislative framework anyway. Or it might prevent positive AI developments that could save thousands of lives (or, from the point of view of a longtermist who believes existential risk is currently nonzero from non-AI factors as well, but might drop to zero in future because of friendly AI, perhaps 10^31 lives!)
Beyond that, I think that as an effective giving target, PauseAI suffers from the same shortcomings most lobbying outfits do (influencing government and public opinion in a direction opposed to economic growth is hard, it's unclear what results a marginal dollar donation achieves, and the other side has far more dollars and connections to ramp up activity in an equal and opposite direction if they feel their business interests are threatened), so there's no reason to believe they're effective even if one agrees their goal is well-defined and correct.
You could also question the motivations of some of the people arguing for AI pauses (hi Elon, we see the LLM you launched shortly after signing the letter saying that LLMs that were ahead of yours were dangerous and should be banned...) although I don't think this applies to the PauseAI organization specifically.
>PauseAI suffers from the same shortcomings most lobbying outfits do...
I'm confused about this section: yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).
This doesn't feel like a criticism of PauseAI. Limited tractability is easily outweighed by a very high potential impact.
They don't have anyone with experience driving the ship, in a space where experience and relationships in DC are extremely important. They are meeting with offices, yes, but it's not clear that they are meeting with the right offices or the right staffers. They are likely not cost-effective: in ROI terms, the money could probably be better spent on two highly competent, experienced, plugged-in people rather than a bunch of junior people.
Hi! Interesting comment. To what extent does this also describe most charities spinning out of Ambitious Impact's incubation program?
Another org in the same space, made up of highly competent, experienced, and plugged-in people, would certainly be welcome, and could plausibly be more effective.
I plan on donating to PauseAI, but I've put considerable thought into reasons not to donate.
I gave some arguments against slowing AI development (plus why I disagree with them) in this section of my recent post, so I won't repeat those.
I understand that this topic gets people excited, but commenters are conflating a Pause policy, a Pause movement, and the organisation called PauseAI.
Commenters are also confusing 'should we give PauseAI more money?' with 'would it be good if we paused frontier models tomorrow?'
I've never seen a topic in EA get a subsection of the community so out of sorts. It makes me extremely suspicious.
Commenters are also confusing 'should we give PauseAI more money?' with 'would it be good if we paused frontier models tomorrow?'
I think it is reasonable to hold that we should only give PauseAI more money if, as necessary conditions, (1) we think that pausing AI is desirable and (2) PauseAI's methods are relatively likely to achieve that outcome, conditional on having the resources to do so. Many of the comments highlight that neither assumption is clear to many forum participants. In fact, I think it is reasonable to stress disagreement with (2) in particular.
I strongly agree. Almost all of the criticism in this thread seems to start from assumptions about AI that are very far from those held by PauseAI. This thread really needs to be split up to factor that out.
As an example: If you don't think shrimp can suffer, then that's a strong argument against the Shrimp Welfare Project. However, that criticism doesn't belong in the same thread as a discussion about whether the organization is effective, because the two subjects are so different.
PauseAI seems to not be very good at what they are trying to do. For example, this abysmal press release, which makes PauseAI sound like tinfoil-wearing nutjobs, and which I already complained about in the comments here.
I think they've been coasting for a while on the novelty of what they're doing, which helps obscure that only a dozen or so people are actually showing up to these protests, making them an empty threat. This is unlikely to change as long as the protests are focused on the highly speculative threat of AI x-risk, which people do not viscerally feel as a threat and which, compared to something like climate change, does not carry authoritative scientific backing. People might say they're concerned about AI on surveys, but they aren't going to actually take to the streets unless they think it's meaningfully and imminently going to harm them.
In today's climate, the only way to build a respectably sized protest movement is to put x-risk on the back burner and focus on attacking AI more broadly: there are a lot of people who are pissed at gen-AI in general, like people mad about data plagiarism, job loss, and enshittification. They are making some steps towards this, but I think there's a feeling that doing so would align them politically with the left and make enemies among AI companies. They should either embrace this or give up on protesting entirely.
Marcus says:
But a pause gets no additional benefit whereas most other regulation gets additional benefit (like model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.)
Matrice says:
Due to this, many in PauseAI are trying to do coalition politics bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists).
These seem to be hinting at an important crux. On the one hand, I can see that cooperating with people who have other concerns about AI could water down the content of one's advocacy.
On the other hand, might it be easier to get a broader coalition behind a pause, or some other form of regulation that many others in an AI-concerned coalition would view as a win? At least at a cursory level, many of the alternatives Marcus mentioned sound like things that wouldn't interest other members of a broader coalition, only people focused on x-risk.
Whether x-risk focused advocates alone can achieve enough policy wins against the power of Big AI (and corporations interested in harnessing it) is unclear to me. If other members of the AI-concerned coalition have significantly more influence than the x-risk group -- such that a coalition-based strategy would excessively "risk focusing on policies and AI systems that have little to do with existential risk" -- then it is unclear to me whether the x-risk group had enough influence to go it alone either. In that case, would they have been better off with the coalition even if most of the coalition's work only generically slowed down AI rather than bringing specific x-risk reductions?
My understanding is that most successful political/social movements employ a fairly wide range of strategies -- from elite lobbying to grassroots work, from narrow focus on the movement's core objectives to building coalitions with those who may have common opponents or somewhat associated concerns. Ultimately, elites care about staying in power, and most countries important to AI do have elections. AI advocates are not wrong that imposing a bunch of regulations of any sort will slow down AI, make it harder for AI to save someone like me from cancer 25-35 years down the road, and otherwise impose some real costs. There has to be enough popular support for paying those costs.
So my starting point would be an "all of the above" strategy, rather than giving up on coalition building without first making a concerted effort. Maybe PauseAI the org, or pause advocacy the idea, isn't the best way to go about coalition building or building broad-based public support. But I'm not seeing much public discussion of better ways?
Hi Matrice! I find this comment interesting. Considering the public are in favour of slowing down AI, what evidence points you to the below conclusion?
“Blockading an AI company's office while talking about existential risk from artificial general intelligence won't convince any bystander; it will just make you look like a doomsayer caricature.”
Also, what evidence do you have for the comment below? For example, I met the leader of the voice actors' association in Australia and we agreed on many topics, including the need for an AISI. In fact, I'd argue you've got something important wrong here: talking to policymakers about existential risk rather than catastrophic risks can be counterproductive, because there aren't many useful policies for preventing the former (besides pausing).
“the space of possible AI policies is high-dimensional, so any such coalition, done with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk”
There is enough of a scientific consensus that extinction risk from AGI is real and significant. Timelines are arguably much shorter for AGI than for climate change, so the movement needs to be ramped up in months to years, not years to decades.
I'd say more like late 20th century (late 1980s?) in terms of scientific consensus, and mid-21st century (2040s?) in terms of how close global catastrophe is.
Re the broad coalition - the focus is on pausing AI, which will help...
1% (again, conservative[1]) is not a Pascal's Mugging. 1%(+) catastrophic (not extinction) risk is plausible for climate change, and a lot is being done there (arguably, enough that we are on track to avert catastrophe if action[2] keeps scaling).
It's anything but flippant[3]. And x-risk isn't from LLMs alone: "System 2" architecture and embodiment, two other essential ingredients, are well on track too. I'm happy to bear any reputational costs in the event we live through this. It's unfortunate, but if there is no extinction, then of course people will say we were wrong. But there might well only be no extinction because of our actions![4]
I actually think it's more like 50%, and can argue this case if you think it's a crux.
Including removing CO₂ from the atmosphere and/or deflecting solar radiation.
Please read the PauseAI website.
Or maybe we will just luck out [footnote 10 on linked post].
You don't have to go as far back as the mid-19th century to find a time before scientific consensus about global warming. You only need to go back to 1990 or so.