The article is here (note that the Washington Post is paywalled[1]). The headline[2] is "How elite schools like Stanford became fixated on the AI apocalypse," subtitled "A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project." It's by Nitasha Tiku.
Notes on the article:
- The article centers on how AI existential safety concerns became more of a discussion topic in some communities, especially on campuses. The main example is Stanford.
- It also talks about:
  - EA (including recent scandals)
  - Funding for work on alignment and AI safety field-building (particularly for university groups and fellowships)
  - Whether or not extinction/existential risk from AI is plausible in the near future (sort of in passing)
- It features comments from:
  - Paul Edwards, a Stanford University fellow who "spent decades studying nuclear war and climate change" and "considers himself 'an apocalypse guy'". He developed a freshman course on human extinction that focuses on pandemics, climate change, nuclear winter, and advanced AI. (He's also a faculty co-director of SERI.)
  - Gabriel Mukobi, a Stanford graduate who organized a campus AI safety group
  - And in brief:
    - Timnit Gebru (very briefly)
    - Steve Luby, an epidemiologist and professor of medicine and infectious disease who co-teaches the class on human extinction with Edwards and is the other faculty co-director of SERI (very briefly)
    - Open Philanthropy spokesperson Mike Levine (pretty briefly)
I expect that some folks on the Forum might have reactions to the article. I might share some in the comments later, but for now I just want to remind people of the Forum's norms of civility.
Seems "within tolerance". Like I guess I would nitpick some stuff, but does it seem egregiously unfair? No.
And in terms of tone, it's pretty supportive.
That's not my read? It starts by establishing Edwards as a trusted expert who pays attention to serious risks to humanity, and then contrasts this with students who are "focused on a purely hypothetical risk". Except the areas Edwards is concerned about ("autonomous weapons that target and kill without human intervention") are also "purely hypothetical", as is anything else wiping out humanity.
I read it as an attempt to present the facts accurately but with a tone that is maybe 40% along the continuum from "unsu...