About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long-form and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we're still deciding on what form her content creation will take.]
We hope to bring our own personalities and perspectives to these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the coming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore the AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction or, in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the broader public to engage with these ideas.
If we're being precise, I would just avoid thinking in terms of X-risk, since "X-risk" vs "not-an-X-risk" imposes a binary where what we really care about is how much expected value we lose.
If we want a definition to help gesture at the kinds of things we mean when we talk about X-risk, several possibilities would be fine. I like something like "the destruction of a lot of expected value."
If we wanted to make this precise, which I don't think we should, we would need to go beyond "fraction of expected value" or "fraction of potential." Something could reduce our expectations to zero or negative without being an X-catastrophe, in particular if a previous X-catastrophe had already reduced our expectations to an insignificant positive value (note that the definitions MichaelA and Mauricio suggest are undesirable for this reason). Conversely, some things that should definitely be called X-catastrophes can destroy expectation without decreasing potential. A precise definition would need to look more like "expectations decreasing by at least a standard deviation." Again, I don't think this is useful, but any simpler alternative won't precisely describe what we mean.
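To make the failure mode concrete, here is a toy calculation with made-up numbers (the figures and units are purely illustrative, not part of the original argument):

```latex
% Toy illustration: why "fraction of expected value destroyed"
% misclassifies events that follow an earlier X-catastrophe.
\begin{align*}
E_0 &= 100  && \text{expected future value before any catastrophe (arbitrary units)} \\
E_1 &= 0.01 && \text{after a first X-catastrophe (99.99\% of expectation destroyed)} \\
E_2 &= 0    && \text{after a later, far smaller event} \\
\frac{E_1 - E_2}{E_1} &= 100\% && \text{fraction of remaining expectation the later event destroys} \\
E_1 - E_2 &= 0.01 && \text{absolute expectation it destroys (negligible)}
\end{align*}
```

Under a fraction-based definition, the later event counts as destroying 100% of remaining expected value, even though nearly all of the damage was done by the earlier catastrophe and the absolute loss is negligible.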
We might also need to appeal to some idealization of our expectations, such as the expectations of an imaginary smart and knowledgeable observer of human civilization, so that changes in our own knowledge that shift our expectations don't constitute X-catastrophes, but not an idealization so strong that our future is predictable and nothing affects our expectations...
It's best to just speak in terms of what we actually care about: not X-risks, but expected value.