The article is here (note that the Washington Post is paywalled[1]). The headline[2] is "How elite schools like Stanford became fixated on the AI apocalypse," subtitled "A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project." It's by Nitasha Tiku.
Notes on the article:
- The article centers on how AI existential safety concerns became more of a discussion topic in some communities, especially on campuses. The main example is Stanford.
- It also talks about:
  - EA (including recent scandals)
  - Funding for work on alignment and AI safety field-building (particularly for university groups and fellowships)
  - Whether or not extinction/existential risk from AI is plausible in the near future (sort of in passing)
- It features comments from:
  - Paul Edwards, a Stanford University fellow who "spent decades studying nuclear war and climate change" and "considers himself 'an apocalypse guy,'" and who developed a freshman course on human extinction that covers pandemics, climate change, nuclear winter, and advanced AI. (He's also a faculty co-director of SERI.)
  - Gabriel Mukobi, a Stanford graduate who organized a campus AI safety group
  - And in brief:
    - Timnit Gebru (very briefly)
    - Steve Luby, an epidemiologist and professor of medicine and infectious disease, who is Edwards's teaching partner for the class on human extinction and the other faculty co-director of SERI (very briefly)
    - Open Philanthropy spokesperson Mike Levine (pretty briefly)
I expect that some folks on the Forum might have reactions to the article — I might share some in the comments later, but I just want to remind people about the Forum norms of civility.
I work for Open Phil, which is discussed in the article. We spoke with Nitasha for this story, and we appreciate that she gave us the chance to engage on a number of points before it was published.
A few related thoughts we wanted to share:
We also want to express that we are very excited by the work of groups and organizers we’ve funded. We think that AI and other emerging technologies could threaten the lives of billions of people, and it’s encouraging to see students at universities around the world seriously engaging with ideas about AI safety (as well as other global catastrophic risks, like from a future pandemic). These are sorely neglected areas, and we hope that today’s undergraduates and graduate students will become tomorrow’s researchers, governance experts, and advocates for safer systems.
For a few examples of what students and academics in the article are working on, we recommend: