Today, the AI Extinction Statement was released by the Center for AI Safety: a one-sentence statement jointly signed by a historic coalition of AI experts, professors, and tech leaders.

Geoffrey Hinton and Yoshua Bengio have signed, as have the CEOs of the major AGI labs–Sam Altman, Demis Hassabis, and Dario Amodei–as well as executives from Microsoft and Google (but notably not Meta).

The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We hope this statement will bring AI x-risk further into the Overton window and open up discussion of AI's most severe risks. Given the growing number of experts and public figures who take risks from advanced AI seriously, we hope to improve epistemics by encouraging discussion and focusing public and international attention on this issue.


Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.

I'm really heartened by this, especially by some of the names on here I independently admired who haven't been very vocal about the issue yet, like David Chalmers, Bill McKibben, and Audrey Tang. I also like certain aspects of this letter better than the FLI one. Since it focuses specifically on relevant public figures, rapid verification is easier and people are less overwhelmed by sheer numbers. Since it centers on an extremely simple but extremely important statement, it's easier to get a broad coalition on board and for discourse about it to stay on topic. I liked the FLI one overall as well; I signed it myself and think it genuinely helped the discourse. But if nothing else, this seems like a valuable supplement.

Very cool!

I am surprised you did not mention climate since this is the one major risk where we are doing a good job (i.e. if we are paying as much attention to AI as to future pandemics and nuclear risk, this isn't very reassuring, as those are major risks that are not well addressed and are massively underresourced compared to their importance).

I, for one, think it is good that climate change was not mentioned. Not necessarily because there are no analogies and lessons to be drawn, but because the comparison can more easily be misinterpreted. I think the kinds of actions and risks involved are much more similar to bio and nuclear, in that there are far fewer actors and, at least for now, AI is much less integrated into day-to-day life. Moreover, in many scenarios the risk itself is of a more abrupt and binary nature (though of course not completely so), rather than a very long and gradual process. I'd be worried that comparing AI safety to climate change would be easily misinterpreted or dismissed on the basis of irrelevant claims.

At least in the US, I'd worry that comparisons to climate change will get you attacked by ideologues from both of the main political sides (vitriol from the left because they'll see it as evidence that you don't care enough about climate change, vitriol from the right because they'll see it as evidence that AI risk is as fake/political as climate change).

IMO it was tactically correct not to mention climate. The point of the letter is to get wide support, and I think many people would not be willing to put AI x-risk on par with climate.

Yeah, I can see that, though it is a strange world where we treat nuclear and pandemics as second-order risks.

climate since this is the one major risk where we are doing a good job

Perhaps (at least in the United States) we haven't been doing a very good job on the communication front for climate change: there are many social circles where climate change denial has been normalized, and the issue has become deeply politically polarized, with many politicians turning climate change from an empirical scientific problem into a political "us vs. them" problem.

since this is the one major risk where we are doing a good job

What about ozone layer depletion?

Not a current major risk, but one that also turned out to be trivially easy to solve with minimal societal resources (a technological substitute was already available when it was regulated, and regulation only needed to cover a couple of hundred factories in select countries), so it does not feel like it belongs in the class of major risks.

I disagree; I think major risks should be defined in terms of their potential impact sans intervention, rather than counting tractability against them.

Incidentally, there was some earlier speculation about what might have happened counterfactually if we had invented CFCs a century earlier, which you might find interesting.

I think we're talking past each other.

While I also disagree that we should ignore tractability for the purpose you indicate, the main point here is more that if we chose the ozone layer as an analogy, we would be suggesting the problem is trivially easy. That doesn't really help with solving the problem, and it already seems extremely likely that AI risk is much trickier than ozone layer depletion.

This is exciting!

Do you have any thoughts on how the community should be following up on this?

Made the front page of Hacker News. Here are the comments.

The most common pushback (and the first two comments, as of now) comes from people who think this is an attempt at regulatory capture by the AI labs, though there's a good deal of pushback against that framing and (I thought) some surprisingly high-quality discussion.

It seems relevant that most of the signatories are academics, to whom this criticism doesn't apply. @HaydnBelfield created a nice graphic here demonstrating this point.

I've also been making this point to people alleging financial interests. On the other hand, the tweet Haydn replied to does make another good point that applies to professors: diverting attention from societal risks that they're contributing to but could help solve, toward x-risk where they can mostly sign such statements and then go "🤷🏼‍♂️", shields them from having to change anything in practice.

In the vein of "another good point" made in public reactions to the statement, here's an article I read in The Telegraph:

"Big tech’s faux warnings should be taken with a pinch of salt, for incumbent players have a vested interest in barriers to entry. Oppressive levels of regulation make for some of the biggest. For large companies with dominant market positions, regulatory overkill is manageable; costly compliance comes with the territory. But for new entrants it can be a killer."

This seems obvious in hindsight as one factor at play, but I hadn't considered it before reading it here. It doesn't address Daniel's / Haydn's point though, of course.

https://www.telegraph.co.uk/business/2023/06/04/worry-climate-change-not-artificial-intelligence/

The most common pushback (and the first two comments, as of now) are from people who think this is an attempt at regulatory capture by the AI labs


This is also the case in the comments on this FT article (paywalled, I think), which I guess indicates how less techy people may be tending to see it.

Note that this was covered in the New York Times (paywalled) by Kevin Roose. I found it interesting to skim the comments. (Thanks for working on this, and sharing!) 

This is so awesome, thank you so much; I'm really glad this exists. The recent shift toward experts publicly worrying about AI x-risk has been a significant update for me, making me more hopeful that humanity avoids losing control to AI.

(but notably not Meta)

Wondering how much I should update from Meta and other big tech firms not being represented on the list. Did you reach out to the signing individuals via your networks, and maybe the network didn't reach some orgs as much? Are there company policies in place that prevent employees at some firms from signing the statement? And is there something specific about Meta that I can read up on (besides Yann LeCun's intransigence on Twitter :P)?

I'm not sure we can dismiss Yann LeCun's statements so easily, mostly because I do not understand how Meta works. How influential is he there? Does he set general policy around things like AI risk?

I feel there is this unhealthy dynamic where he acts as the figurehead of some kind of "anti-doomerism", and I'm under the impression that he and his Twitter crowd do not engage with the arguments of the debate at all. I'm pretty much looking at this from the outside, but LeCun's arguments seem to be far behind the current debate. If he drives Meta's AI safety policy, I'm honestly worried about that. Meta is hardly an insignificant player.

Huge appreciation to the CAIS team for the work put in here.

Great work guys, thanks for organising this!

I'm mildly surprised that Elon Musk hasn't signed, given that he did sign the FLI 6-month pause open letter and has been vocal about being worried about AI x-risk for years.

Probably the simplest explanation for this is that the organizers of this statement haven't been able to reach him, or he just hasn't had time yet (although he should have heard about it by now?). 

I reckon there's a pretty good chance he didn't sign because he wasn't asked, because he's a controversial figure.

Yea, that could be the case, although I assume having Elon Musk sign could have generated 2x the publicity. Most news outlets seem to jump on everything he does. 

Not sure what the tradeoff between attention and controversy is for such a statement. 

Most news outlets seem to jump on everything he does.

That's where my thoughts went too: maybe he and/or CAIS thought the statement would have a higher impact if reporting focused on other signatories. That Musk thinks AI is an x-risk seems fairly public knowledge anyway, so there's no big gain here.

Truly brilliant coalition-building by CAIS and collaborators. It is likely that the world has become a much safer place as a result. Congratulations!
