About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles, and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long- and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we’re still deciding what form her content creation will take.]
We hope to bring our own personalities and perspectives to these issues, along with humor, earnestness, and nuance. We want to help people make sense of the world we’re in and think about what role they might play in the coming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore the AI Futures Project’s AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction, or, in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk,” for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the br
Hi Scott. I've had one paper published in philosophy, and I've had several others accepted to conferences. I'm certainly not as credentialed as Will, but I might be able to give some tips. My guess is that many of these are not particularly unique to philosophy.

First, it's always good to reference other relevant philosophical work. We all know what hedonistic utilitarianism is, but if you're going to write a paper about the implications of effective altruism for a hedonistic utilitarian, you should still clearly define the concept and cite major works on the topic.

Second, clear writing is always preferred over convoluted writing. Sometimes people think philosophers want to sound smart and intentionally use complicated language, but the reverse is true. Sure, philosophy sometimes does legitimately require an understanding of technical terms, but good philosophical writing aims to be as clear as possible.

Third, a good format to follow is abstract, introduction, argument, conclusion. Abstracts are extremely useful because they allow people to get the gist of your argument very quickly.

Fourth, it is often better to make a genuine contribution to a narrow problem than to not really contribute anything to a broad topic.

Finally, a good practice is probably to just read some published philosophy work. That is the best way to get an idea of the writing quality and organizational nature of publishable papers. I believe Will has some of his papers posted on his site. I've read some of his work, and I think it's a good example of clear writing. That's probably a good place to start.
Also, most CFPs request papers that have been prepared for blind review, so be sure to do that.
Thanks, Zack!