
Stop AI just put out a short press release.

As an organiser, let me add some thoughts to nuance the text:
 

Plan

On October 21st, 2024 at 12:00 pm Stop AI will peacefully barricade, via sit-in, OpenAI's office entrance gate 

Emphasis is on peacefully. We are a non-violent activist organisation. We refuse to work with any activist who has other plans.
 

We could very easily stop the development of Artificial General Intelligence if a small group of people repeatedly barricaded entrances at AI company offices and data centers. 

My take is that a small group barricading OpenAI is a doable way to be a thorn in OpenAI's side, while raising public attention to the recklessness of AI corporations. From there, stopping AI development requires many concerned communities acting together to restrict the data, work, uses, and hardware of AI.
 

We will be arrested on the 21st for barricading OpenAI's gate, then once released, will eventually go back to blocking the gate. We will repeatedly block the 575 Florida St gate until we are held on remand. 

My co-organisers Sam and Guido are willing to put their bodies on the line by getting arrested repeatedly. We are that serious about stopping AI development.
 

We will then go to trial and plead the Necessity Defense. 

The Necessity Defense is when an "individual commits a criminal act during an emergency situation in order to prevent a greater harm from happening." This defense has been used by climate activists who got arrested, with mixed results. Sam and others will be testifying in court that we acted to prevent imminent harms (not just extinction risk).
 

If we win the Necessity Defense, then we may be able to block entrances at AI offices and data centers to our heart’s content.

Or at least, we would gain legal freedom to keep blocking OpenAI's entrances until they stop causing increasing harms. 
 

63% of Americans say that regulators should actively prevent the development of superintelligent AI (AI Policy Institute Poll, Sep 02 2023). OpenAI and the US government disregard the will of the people. 

Our actions are a way to signal to the concerned public that they can act and speak out against AI companies. 

I expect most Americans to not feel strongly yet about preventing the development of generally functional systems. Clicking a response in a poll framed a certain way is low-commitment. So we will also broadcast more stories of how recklessly AI corporations have been acting with our lives.
 


 

Risk of extinction

 AI experts have said in polls that building AGI carries a 14-30% chance of causing human extinction!

My colleague took the mean number of 14% from the latest AI Impacts survey, and the median number of 30% from the smaller-sample survey 'Existential Risk from AI'. Putting a mean and a median number in the same range does not make sense. The second survey especially has a problem with self-selection, so I would take it with a grain of salt.

My colleague told me that the survey results understate the risk, because AI researchers don't want to believe that their profession will lead to the end of the world. I countered that polled AI researchers could just as well be overstating the risk, because they are stuck in a narrow worldview that has been promoting the imminence of powerful AI since 1956.

But both are just vague opinions about cultural bias. Making social claims about "experts" does not really help us find out whether/where the polled "experts" actually thought things through. 

Asking for P(doom) guesses is a lousy epistemic process, so I prefer to work through people's reasoning instead. Below are arguments for why the long-term risk of extinction is above 99%.
 

And some of these same AI experts say AGI could be here this year! 

"AGI in a year" makes no sense in my opinion. AI systems would require tinkering and learning to navigate the complexity of a much larger and messier environment. This process is not at all like AlphaGo recursively self-improving in its moves on an internally simulated 19x19 grid.

But if you are worried about such short timelines, then it is time to act. We've seen too many people standing on the sidelines worrying we could all die soon. If you think this, please act with dignity – collaborate where you can to restrict AI development.
 

The probability of AGI causing human extinction is greater than 99% because there is no way to prove experimentally or mathematically that an AGI won't eventually want something that will lead to our extinction...

That's a reasoning leap, but there is only so much my colleague could cover in a press release.

Let me explain, term by term, why the risk of extinction would be greater than 99%:

  1. "experimentally"
    • It is not possible to prove experimentally (to "non-falsify") in advance that AGI would be safe, because there is no AGI yet.
  2. "mathematically"
    • It is not possible to create an empirically sound model of how the self-modifying machinery (AGI) would be causing downstream effects through the machine components' interactions with the larger surrounding world over time. Therefore, it is not possible to soundly prove using mathematics that AGI would stay safe over time.
  3. "eventually"
    • In practice, engineers know that complex architectures interacting with the surrounding world end up having functional failures (because of unexpected interactive effects, or noisy interference). With AGI, we are talking about an architecture that would replace all our jobs and move to managing conditions across our environment. If AGI continues to persist in some form over time, failures will occur and build up toward lethality at some unknown rate. Over a long enough period, this repeated potential for uncontrolled failures pushes the risk of human extinction above 99% (see the probability sketch after this list).
  4. "won't"
    • A counterclaim here is that maybe AGI "will" be able to exert control to prevent virtually all of those possible failures. Unfortunately, there are fundamental limits to control (see e.g. Yampolskiy's list). Control mechanisms cannot control enough of the many possible destabilizing effects feeding back over time (if you want to see this formalised, join this project).
  5. "lead to our extinction"
    • AGI is artificial. The reason why AGI would outperform humans at economically valuable work in the first place is because of how virtualisable its code is, which in turn derives from how standardisable its hardware is. Hardware parts can be standardised because their substrate stays relatively stable and compartmentalised. Hardware is made out of hard materials, like the silicon from rocks. Their molecular configurations are chemically inert and physically robust under human living temperatures and pressures. This allows hardware to keep operating the same way, and for interchangeable parts to be produced in different places. Meanwhile, human "wetware" operates much more messily. Inside each of us is a soup of bouncing and continuously reacting organic molecules. Our substrate is fundamentally different.
    • The population of artificial components that constitutes AGI implicitly has different needs than us (for maintaining components, producing components, and/or potentiating newly connected functionality for both). Extreme temperature ranges, diverse chemicals – and many other unknown/subtler/more complex conditions – are needed that happen to be lethal to humans. These conditions are in conflict with our needs for survival as more physically fragile humans.
    • These connected/nested components are in effect “variants” – varying code gets learned from inputs, that are copied over subtly varying hardware produced through noisy assembly processes (and redesigned using learned code).
    • Variants get evolutionarily selected for how they function across the various contexts they encounter over time. They are selected to express environmental effects that are needed for their own survival and production. The variants that replicate more, exist more. Their existence is selected for.
    • The artificial population therefore converges on fulfilling their own expanding needs. Since (by 4.) control mechanisms cannot contain this convergence on wide-ranging degrees and directivity in effects that are lethal to us, human extinction results.
  6. "want"
    • This convergence on human extinction would happen regardless of what AGI "wants". Whatever AGI is controlling for at a higher level would gradually end up being repurposed to reflect the needs of its constituent population. As underlying components converge on expressing implicit needs, any higher-level optimisation by AGI toward explicit goals gets shaped in line with those needs. Additionally, that optimisation process itself tends to converge on instrumental outcomes for self-preservation, etc.
    • So for AGI not to wipe out humans, at the minimum its internal control process would have to simultaneously:
      • optimise against instances of instrumental convergence across the AGI's entire optimisation process, and;
      • optimise against evolutionary feedback effects over the entire span of interactions between all the hardware and code (which are running the optimisation process) and all the surrounding contexts of the larger environment, and;
      • optimise against other accidental destabilising effects ('failures') that result from AGI components interacting iteratively within a more complex (and therefore only partly grossly modellable) environment.
    • Again, there is a fundamental mismatch here, making sufficient control impossible. 
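
To make the "eventually" step in point 3 concrete, here is a minimal probability sketch. It assumes (my simplification, not something stated in the press release) that each period of continued AGI operation carries some fixed, independent chance p > 0 of an uncontained lethal failure:

\[
P(\text{at least one uncontained failure within } n \text{ periods}) \;=\; 1 - (1-p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty .
\]

For example, even a per-year chance as small as p = 0.001 gives 1 − 0.999^10000 ≈ 0.99995 over 10,000 years. The real argument is messier (failure chances are neither fixed nor independent), but this is the basic shape of why the cumulative risk keeps climbing toward 1 unless some period has a guaranteed-zero failure rate.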

 

If experimental proof of indefinite safety is impossible, then don't build it!

This by itself is a precautionary principle argument (as part of 1-3. above). 

  • If we don't have a sound way of modelling that AGI won't eventually lead to our deaths – or at least one that guarantees that the long-term risk is below some reasonably tiny % threshold – then we should just not develop AGI.

Then, there are reasons why AGI uncontrollably converges on human extinction (see 4-6.).

Hopefully, arguments 1-6. combined clarify why I think that stopping AI development is the only viable path to preventing our extinction. 

That is:

  • Even if engineers build mechanisms into AGI for controlling its trackable external effects in line with internal reference values, in turn compressed lossily from preferences that individual humans expressed in their context… then AGI still converges on our extinction.  
  • Even if such “alignment” mechanisms were not corrupted by myopic or malevolent actors… then AGI still converges on our extinction.  


     

Why restrict OpenAI

OpenAI is a non-profit that is illegally turning itself into a company. A company that is laundering our online texts and images, to reprogram an energy-sucking monstrosity, to generate disinformation and deepfakes. Even staff started airing concerns. Then almost all safety researchers, board members and executives got pushed out. Behind the exodus is a guy known for dishonestly maneuvering for power (and abusing his sister). His signature joke: “AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies.” 

OpenAI is already doing a lot of harm. This grounds our Necessity Defense for barricading their office. 

 

If you value your life and the lives of those you love, then you should start protesting now to help us achieve this demand. Go to www.stopai.info today to join.

We are doing what we can to restrict harmful AI development. 
You can too. 

Comments

I hope you don't take this the wrong way, but this press release is badly written, and it will hurt your cause.

I know you say you're talking about more than extinction risks, but when you put: "The probability of AGI causing human extinction is greater than 99%" in bold and red highlight, that's all anyone will see. And then they can go on to check what experts think, and notice that only a fringe minority, even among those concerned with AI risk, believe that figure.  

By declaring your own opinion as the truth, over that of experts, you come off like an easily dismissible crank. One of the advantages of the climate protest movements is that they have a wealth of scientific work to point to for credibility. I'm glad you are pointing out current day harms later on in the article, but by then it's too late and everyone will have written you off. 

In general, there are too many exclamation points! It comes off as weird and offputting! and RANDOMLY BREAKING INTO ALLCAPS makes you look like you're arguing on an internet forum. And there's way too long paragraphs full of confusing phrases that are not understandable by a layperson. 

I suggest you find some people who have absolutely zero exposure to AI safety or EA at all, and run these and future documents by them for ideas on improvements. 

One of the advantages of the climate protest movements is that they have a wealth of scientific work to point to for credibility.

Scientific work doesn't give particular support for the idea that climate change will create a substantial extinction risk though, and that doesn't stop the activists there. I'm not saying you're wrong or the OP's approach is justified, but public perceptions of activist groups' reasonableness seem only loosely linked to expert views (I've not seen much evidence of the "then they can go on to check what experts think" bit happening much).

But the climate protesters generally aren't basing their pitch on existential risk, as in a global extinction event.

It seems to be a big part of it in the UK, cf. Extinction Rebellion and Just Stop Oil.

Extinction Rebellion is named after the Anthropocene Extinction, I don’t think they are claiming that climate change alone would lead to human extinction.

They seem to say so in their intro video on this page: https://extinctionrebellion.uk/the-truth/the-emergency/. OK they say due to climate and ecological destruction, but it doesn't really matter for this. The point is just that disagreeing with experts doesn't generally seem to prevent an organisation from becoming "successful". (Plenty of examples outside climate too.)

"Okay, all the examples I used were strawmen, but it doesn't really matter"

 

?????

Thank you for the specific feedback on the press release!  I broadly agree with it, and I think it’s going to be useful for improving future texts.

Wish there was a way to vote strong agree and not just agree on this comment.

You can strong-vote by long-clicking the arrow (or double-tapping if you're on mobile).

[This comment is no longer endorsed by its author]

Strong *upvote* the comment yes. But there is no strong agreement vote. 

Thanks for sharing, I really appreciate your commitment, and that you announce it.

Fwiw, my immediate reaction is that this type of protest might be a little too soon and it will cause more ridicule and backlash because the general public's and news media's impression is that there is currently no immediate danger. Would be interested in learning more about the timing considerations. Like, I'd imagine that doing this barricading in the aftermath of some concrete harm happening would make favorable reporting for news media much more likely, and then you could steer the discourse towards future and greater harms.

Thanks for the kind words!

I personally think it would be helpful to put more emphasis on how OpenAI’s reckless scaling and releases of models are already concretely harming ordinary folks (even though no major single accident has shown up yet).

Eg.

  • training on personal/copyrighted data
  • job losses because of the shoddy replacement of creative workers (and how badly OpenAI has treated workers it paid)
  • school ‘plagiarism’, disinformation, and deepfakes
  • environmental harms of scaling compute.  

Or at least, we would gain legal freedom to keep blocking OpenAI's entrances until they stop causing increasing harms. 

A few observations here (although I must emphasize that people should consult with a licensed California criminal-defense attorney if interested in the application of law to potentially criminal conduct that they want to engage in):

(1) There's a good chance the outcome will be some form of "catch and release" -- it's usually easier to deal with isolated protestors who do not cause violence or significantly damage property in this manner rather than by pursuing criminal charges to trial.

(2) Generally, a trial court acquittal does not establish precedent binding as to future cases. And the California appellate courts are not generally known for being speedy.

(3) Even assuming there were some binding precedent establishing that you could not be criminally convicted for these actions, it doesn't necessarily follow that the police or OpenAI could not physically remove you from the area. It also wouldn't preclude OpenAI or others from filing a civil lawsuit against the protesters for damages or other relief.

I specifically did not look up California law because I did not want to veer into giving legal advice, but many jurisdictions allow a property owner to use reasonable force to eject a trespasser -- and it's going to be tough for a small group to blockade a company headquarters or data center without trespassing on the AI company's property. Have you thought about what you would do in this situation?

This is great!  Appreciating your nitty-gritty considerations  

(1) There's a good chance the outcome will be some form of "catch and release" -- it's usually easier to deal with isolated protestors who do not cause violence or significantly damage property in this manner rather than by pursuing criminal charges to trial.

“Catch and release” is what’s happening right now. However, as we keep repeating the barricades, and as hopefully more and more protestors join us, it would highly surprise me if the police and court system just allow us to keep barricading OpenAI’s office entrances without pursuing criminal charges. If the police allowed us to keep doing it with at most an overnight stay in jail, that would of course make doing the barricades easier. 

 

(2) Generally, a trial court acquittal does not establish precedent binding as to future cases. And the California appellate courts are not generally known for being speedy.

This is insightful, thank you!  I was personally skeptical that we could set a precedent that would give us a free pass every time. But I am not a lawyer.

Checking: Are you saying that the trial courts generally do not set a precedent (or more specifically do not set a precedent that allows “repeat offenses”)? 

And would we only be able to take this up to the appellate court if the judge rules against us (the defendant), and we file an appeal?

[Asking for information only – whatever you share is not legal advice.]

 

(3) Even assuming there were some binding precedent establishing that you could not be criminally convicted for these actions, it doesn't necessarily follow that the police or OpenAI could not physically remove you from the area

That is fine. Getting moved around by police is part of the process of doing the protests. As Sam said, when the time comes, we’ll leave.

 

It also wouldn't preclude OpenAI or others from filing a civil lawsuit against the protesters for damages or other relief.

I had not considered this! I’m not sure whether to see this as a serious personal risk to our organisers in the US (though Sam doesn’t have money to hand over anyway), and/or a great opportunity to publicly present arguments about OpenAI’s harmful/illegal activities. 

 

many jurisdictions allow a property owner to use reasonable force to eject a trespasser -- and it's going to be tough for a small group to blockade a company headquarters or data center without trespassing on the AI company's property.

My impression is that the gates we are barricading at the office OpenAI co-rents are actually connected to the public sidewalk. In that case we are sitting on the sidewalk and not on private property (Sam can correct me here).

For the data centers, I’d expect us to often be sitting on private property, and for private security to be present and ready to act. 

This is insightful, thank you!  I was personally skeptical that we could set a precedent that would give us a free pass every time. But I am not a lawyer.

Checking: Are you saying that the trial courts generally do not set a precedent (or more specifically do not set a precedent that allows “repeat offenses”)? 

And would we only be able to take this up to the appellate court if the judge rules against us (the defendant), and we file an appeal?

[Asking for information only – whatever you share is not legal advice.]

Keeping it at a general information level --

There are a few main ways that judges "make law" when rendering judicial decisions.

One involves creating precedent that (loosely) binds their court and (more strictly) binds certain lower courts.[1] Note that, in most systems, this only occurs when the court designates its opinion as precedential ("published").[2] Moreover, only the "holding" (~what was necessary to decide the case) rather than the "dicta" (anything else) becomes precedent. For better or worse, the courts do not clearly label the difference.

Another method of impact involves the binding effect of the court's judgment on the parties. In the right procedural posture, this can be very powerful against a government defendant. That usually happens when the only way to provide relief for the plaintiff's legally recognized injuries in a suit is to award sweeping relief against the government that is within the court's power. 

In other situations, it can be largely useless -- there are circumstances in which so-called collateral estoppel (ye olden name) or issue preclusion (the modern name) won't run against the government when it would against a private litigant. This makes a lot of sense in the criminal context because the government cannot appeal an acquittal, and we generally don't want to bind a party too much if it lacks the ability to appeal.

People talk about "precedent" in a looser sense -- you can point to Judge Smith's decision in a prior case and try to convince Judge Jones that he should rule the same way. Also, the prosecutor might see Judge Smith's decision and decide not to file a similar case in the future. Seeking this sort of "precedent" is a valid activist strategy, but it's important to recognize when it is more or less likely to work. This kind of "precedent" is more likely to be effective when there is limited on-point binding precedent. It can also be effective in causing an opponent to update the odds that litigation won't go well for them, but this works better when the opponent has a lot to lose if litigation blows up on them.

Generally, one cannot appeal a judgment in one's favor. And the prosecution cannot appeal an acquittal at all.

 

  1. ^

    The definition of "certain lower courts" depends on the jurisdiction and the court involved. At the low end, it means all courts to which appeals may be had to the court that issued the decision. At the high end, it means all lower courts in the jurisdiction.

  2. ^

    Some appellate courts designate ~90% of their decisions as non-precedential. Having been an appellate law clerk, I can tell you that it takes an order of magnitude more time to write a published opinion than an unpublished one.

AI experts have said in polls that building AGI carries a 14-30% chance of causing human extinction!

My colleague took the median number of 14% from the latest AI Impacts survey

FWIW I believe the median value from the linked survey is 5%. The only relevant place where 14% shows up is that it is the mean probability researchers place on high-level machine intelligence being extremely bad for humanity. The median probability for the same answer is 5% and the median answer to the more specific question "What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?" is also 5%

Yes, thank you for mentioning this  

I made a mistake in checking that number. 

See also comment here

I've strong upvoted this even though I don't love the tone and agree with everything titotal says here, especially that you should have run this past a bunch of media savvy people before releasing.

I'm happy that some EAs have the conviction to stand up against a dangerous organisation which EA itself helped grow.

Basically I'm upvoting what you're doing here, which I think is more important than the text itself. I don't agree that you barricading OpenAI will do much to stop AGI but think it's a good small step.

Basically I'm upvoting what you're doing here, which I think is more important than the text itself.

Thanks for recognising the importance of doing the work itself. We are still scrappy so we'll find ways to improve over time.
 

especially that you should have run this past a bunch of media savvy people before releasing

If you know anyone with media experience who might be interested to review future drafts, please let me know. 

I agree we need to improve on our messaging.

 

Thanks for taking your beliefs seriously, Remmelt. Strongly upvoted[1].

  1. ^

    Although I think the probability of human extinction over the next 10 years is lower than 10^-6.

Although I think the probability of human extinction over the next 10 years is lower than 10^-6.

You and I actually agree on this with respect to AI developments. I don’t think the narratives I read of a large model recursively self-improving internally make sense.

I wrote a book for educated laypeople explaining how AI corporations would cause increasing harms, leading eventually to machine destruction of our society and ecosystem.

Curious for your own thoughts here. 

I wrote a book for educated laypeople explaining how AI corporations would cause increasing harms, leading eventually to machine destruction of our society and ecosystem.

Curious for your own thoughts here.

Thanks for sharing! You may want to publish a post with a summary of the book. Potentially relatedly, I think massive increases in unemployment are very unlikely. If you or anyone you know are into bets, and guess the unemployment rate in the United States will reach tens of % in the next few years, I am open to a bet similar to this one.

I am open to a bet similar to this one.

I would bet on both, on your side.
 

Potentially relatedly, I think massive increases in unemployment are very unlikely.

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

AI Safety inside views are wrong for various reasons in my opinion. I agree with many of Thorstad's views you cited (e.g. critiquing how fast take-off, the orthogonality thesis, and instrumental convergence rely on overly simplistic toy models, missing the hard parts about machinery coherently navigating an environment that's more complex than just the machinery itself).

There are arguments that you are still unaware of, which mostly come from outside of the community. They're less flashy, involving longer timelines. For example, one line of argument considers why the standardisation of hardware and code allows for extractive corporate-automation feedback loops.

To learn about why superintelligent AI disempowering humanity would be the lead-up to the extinction of all currently living species, I suggest digging into substrate-needs convergence. 

I gave a short summary in this post:

  • AGI is artificial. The reason why AGI would outperform humans at economically valuable work in the first place is because of how virtualisable its code is, which in turn derives from how standardisable its hardware is. Hardware parts can be standardised because their substrate stays relatively stable and compartmentalised. Hardware is made out of hard materials, like the silicon from rocks. Their molecular configurations are chemically inert and physically robust under human living temperatures and pressures. This allows hardware to keep operating the same way, and for interchangeable parts to be produced in different places. Meanwhile, human "wetware" operates much more messily. Inside each of us is a soup of bouncing and continuously reacting organic molecules. Our substrate is fundamentally different.
  • The population of artificial components that constitutes AGI implicitly has different needs than us (for maintaining components, producing components, and/or potentiating newly connected functionality for both). Extreme temperature ranges, diverse chemicals – and many other unknown/subtler/more complex conditions – are needed that happen to be lethal to humans. These conditions are in conflict with our needs for survival as more physically fragile humans.
  • These connected/nested components are in effect “variants” – varying code gets learned from inputs, that are copied over subtly varying hardware produced through noisy assembly processes (and redesigned using learned code).
  • Variants get evolutionarily selected for how they function across the various contexts they encounter over time. They are selected to express environmental effects that are needed for their own survival and production. The variants that replicate more, exist more. Their existence is selected for.
  • The artificial population therefore converges on fulfilling their own expanding needs. Since (by 4.) control mechanisms cannot contain this convergence on wide-ranging degrees and directivity in effects that are lethal to us, human extinction results.

Thanks for clarifying!

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

Fair! I did not look into that. However, the rate of automation (not the share of automated tasks) is linked to economic growth, and this used to be much lower in the past. According to Table 1 (2) of Hanson 2000, the global economy used to double once every 230 k (224 k) years in the hunting and gathering period of human history. Today it doubles once every 20 years or so[1]. Despite a much higher growth rate, and therefore a way higher rate of automation, the unemployment rate is still relatively low (5.3 % globally in 2022). So I still think it is very unlikely that faster automation in the next few years would lead to massive unemployment.

Longer term, over decades to centuries, I can see AI coming to perform the vast majority of economically valuable tasks. However, I believe humans will only allow this to happen if they get to benefit. As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

  1. ^

    The doubling time for 3 % annual growth is 23.4 years (= LN(2)/LN(1.03)).
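
A worked version of that footnote's arithmetic, just restating the same 3 % annual growth assumption:

\[
T_{\text{double}} \;=\; \frac{\ln 2}{\ln(1+g)} \;=\; \frac{\ln 2}{\ln 1.03} \;\approx\; \frac{0.6931}{0.0296} \;\approx\; 23.4 \text{ years}.
\]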

As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

The problem here is that AI corporations are increasingly making decisions for us. 
See this chapter.

Corporations produce and market products to increase profit (including by replacing their fussy expensive human parts with cheaper faster machines that do good-enough work).

To do that they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.

I agree it makes sense to model corporations as maximising profit, to a 1st approximation. However, since humans ultimately want to be happy, not increasing gross world product, I assume people will tend to pay more for AIs which are optimising for human welfare instead of economic growth. So I expect corporations developing AIs optimising for something closer to human welfare will be more successful/profitable than ones developing AIs which maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (instead of the growth of the AI economy too), I guess optimising for economic growth will lead to better outcomes for humans, because this has historically been the case.

There are a bunch of crucial considerations here. I’m afraid it would take too much time to unpack those.

Happy though to have had this chat!
