About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long-form and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we’re still deciding on what form her content creation will take]
We hope to bring our own personalities and perspectives to these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the coming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore the AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction, or in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the br
The "Modeling Transformative AI Risk" project which I assisted with has the intent of explaining this, and we have a fairly extensive but not fully comprehensive report on the conceptual models that we think are critical, online here. (A less edited and polished version is on the alignment forum here.)
I think you could probably read through the report itself in a week, going slowly and thinking through the issues, but doing so requires background in many of the questions discussed on Miles' YouTube channel and in the collection of AGI safety fundamentals resources that others have recommended. Assuming a fairly basic understanding of machine learning and optimization, which probably requires the equivalent of an undergraduate degree in a related field, the linked material on the AI safety questions you'd need to study, plus that report, should get you to a fairly good gears-level understanding. I'd expect that three months of research and reading for someone with a fairly strong undergraduate background, or closer to a year for someone starting from scratch, would be sufficient to build an overall gears-level model of the different aspects of the risk.
Given that, I'll note that contributing to solving the problems requires quite a bit more investment in skill-building; depending on what you plan to do to address the risks, this could be equivalent to an advanced degree in mathematics, machine learning, policy, or international relations.
Here's the most up-to-date version of the AGI Safety Fundamentals curriculum. Be sure to check out Richard Ngo's "AGI safety from first principles" report. There's also a "Further resources" section at the bottom linking to pages like "Lots of links" from AI Safety Support.