Major news outlets have published articles about the Future of Life Institute's Open Letter. Time Magazine published an opinion piece by Eliezer Yudkowsky. Lex Fridman featured EY on his podcast. Several US Members of Congress have spoken about the risks of AI. And a Fox News reporter asked what the US President is doing to combat AI x-risk at a White House Press Conference.
Starting an Open Thread to discuss this, and how best to capitalize on this sudden wave of attention.
Links:
WH Press Conference: https://twitter.com/therecount/status/1641526864626720774
Time Magazine: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
FLI Letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
I think being open to talking with as many people as you can about AGI x-risk is especially important now. This is a chance for it to become a mainstream political issue. Try to steer conversations about AI-induced job losses toward "the big one, the AI Apocalypse; that's the endgame we need to prevent".
I'll note that I've been using the term "AI Apocalypse" for a while now as a non-jargony way to refer to AGI x-risk when talking to friends and family outside of the EA/LW/x-risk community.