About the program
Hi! We’re Chana and Aric, from the new 80,000 Hours video program.
For over a decade, 80,000 Hours has been talking about the world’s most pressing problems in newsletters, articles and many extremely lengthy podcasts.
But today’s world calls for video, so we’ve started a video program[1], and we’re so excited to tell you about it!
80,000 Hours is launching AI in Context, a new YouTube channel hosted by Aric Floyd. Together with associated Instagram and TikTok accounts, the channel will aim to inform, entertain, and energize with a mix of long-form and short-form videos about the risks of transformative AI, and what people can do about them.
[Chana has also been experimenting with making short-form videos, which you can check out here; we're still deciding what form her content creation will take.]
We hope to bring our own personalities and perspectives to these issues, alongside humor, earnestness, and nuance. We want to help people make sense of the world we're in and think about what role they might play in the upcoming years of potentially rapid change.
Our first long-form video
For our first long-form video, we decided to explore the AI Futures Project's AI 2027 scenario (which has been widely discussed on the Forum). It combines quantitative forecasting and storytelling to depict a possible future that might include human extinction or, in a better outcome, “merely” an unprecedented concentration of power.
Why?
We wanted to start our new channel with a compelling story that viewers can sink their teeth into, and that a wide audience would have reason to watch, even if they don’t yet know who we are or trust our viewpoints. (We think a video about “Why AI might pose an existential risk”, for example, might depend more on pre-existing trust to succeed.)
We also saw this as an opportunity to tell the world about the ideas and people that have for years been anticipating the progress and dangers of AI (that’s many of you!), and invite the broader public into that conversation.
We wanted to make a video that conveyed:
- Superintelligence is plausible
- It might be coming soon
- It might shape much of how the coming years and decades play out
- And it’s not on track to go well
Whether viewers have encountered the AI 2027 report or not, we hope this video will give them a new appreciation for the story it tells, what experts think about it, and what the implications are for the world.
We also just think it’s an enjoyable, highly produced video that Forum readers will like watching (even if the material is kind of dark).
Watch the video here!
Strategy and future of the video program
Lots of people started thinking about AI when ChatGPT came out.
The people in our ecosystem, though, know that that was just one point in a broader trend.
We want to talk about that trajectory, catch people up, and talk about where things are going.
We also believe a lot of thoughtful, smart people have been lightly following the rise of AI progress but aren’t quite sure what they think about it yet. We want to suggest a framework that we think explains what’s happening, and what will happen, better than most of what else is out there (rather than, e.g., describing it all as hype, focusing exclusively on ethical issues that we think don’t encompass the whole story, or arguing we should develop AI as fast as possible).
We’re excited to make more videos that tell important stories and discuss relevant arguments. We’re also leaving room for talking more about relevant news, making more skits about appalling behavior, and creating more short explanations of useful concepts.
Watch this space!
Subscribing and sharing
Subscribe to AI in Context if you want to keep up with what we’re doing there, and share the AI 2027 video if you liked it.
AI 2027 seems to have been unusually successful at communicating AI safety ideas to the broader public and non-EAs, so if you’ve been looking for something to communicate your worries about AI, this might be a good choice.
If you like the video, and you want to help boost its reach, then ‘liking’ it on YouTube and leaving a comment (even a short one) really help it get seen by more people. Plus, we hope to see some useful discussion of the scenario in the comments.
Request for feedback
The program is new and we’re excited to get your input. If you see one of our videos and have thoughts on how it could be better or ideas for videos to make, we'd love to hear from you!
For the AI 2027 video in particular, we'd love it if you filled out this feedback form.
[1] This came after some initial experiments with making problem profile videos on bioweapons and AI risk, which you may have seen, and the podcast team expanding into video podcasts! All of those are separate from this video program.
I've a similar concern to Geoffrey's.
When I clicked on the video last week, there was a prominent link to careers, then jobs. At the time, 3 of the top 5 were at AGI companies (Anthropic, OpenAI, GDM). I eventually found the 'should you work at AGI labs?' link, but it was much less obvious. This is the funnel that tens of thousands of people will be following (assuming 1% of people watching the video consider a change of career).
80K has for a long time pushed safety & governance at AGI companies as a top career path. While some of the safety work may have been very safety-dominant, a lot of it has in practice helped companies ship product, and advanced capabilities in doing so (think RLHF etc. - see https://arxiv.org/pdf/2312.08039 for more discussion). This is inevitably more likely in a commercial setting than in, e.g., academia.
Policy and governance roles have done some good, but have in practice also contributed to misplaced trust in companies and greater support for e.g. self-governance than might otherwise have been the case. Anecdotally, I became more trusting of OpenAI after working with their researchers on Towards Trustworthy AI (https://arxiv.org/pdf/2004.07213), in light of them individually signing onto (and in some cases proposing) mechanisms such as whistleblower protections and other independent oversight. At the same time, unbeknownst to them, OpenAI leadership were building clauses into their contracts to strip them of their equity if they criticised the company on leaving. I expect the act by safety-focused academics like myself of coauthoring the report with OpenAI policy people will also have had the effect of increasing perceptions of OpenAI's trustworthiness.
By now, almost everyone concerned about safety seems to have left OpenAI, often citing concerns over the ethics, safety-committedness, and responsibility-committedness of leadership. This includes everyone on Towards Trustworthy AI, and I expect many of the people funneled there by 80K. (I get the impression from speaking to some of them that they feel they were being used as 'useful idiots'.) One of the people who left over concerns was Daniel Kokotajlo himself - who indeed had to give up 85% of his family's net worth (temporarily, I believe) in order to be critical of OpenAI.
Another consequence of this funnel is that it has contributed to the atrophy of the academic AI safety and governance pipeline, and to a loss of funder interest in supporting this part of the space ('isn't the most exciting work happening inside the companies anyway?'). The most ethically-motivated people, who might otherwise have taken the hit of academic salaries and precarity, had a green light to go for the companies. This has contributed to the atrophy of the independent critical role, and the government advisory role, that academia could have played in frontier AI governance.
There's a lot more worth reflecting on than is captured in the 'should you work at AI labs/companies' article. While I've focused on OpenAI to be concrete here, the underlying issues apply to some degree across frontier AI companies.
And there is a lot more that a properly reflective 80K could be doing here. E.g.:
Heck, you could even have a little survey to answer before accessing these high-risk roles, like when you're investing in a high-risk asset.
(this is all just off the top of my head, I'm sure there are better suggestions).
It's long been part of 80k's strategy to put people in high-consequence positions in the hope they can do good, and exert influence around them. It is a high-risk strategy with pretty big potential downsides. There have now been multiple instances in which this plan has been shown not to survive contact with the kind of highly agentic, skilled-at-wielding-power individuals who end up in CEO-and-similar positions (I can think of a couple of Sams, for instance). If 80k is going to be pointing a lot of in-expectation young and inexperienced people in these directions, it might benefit from being a little more reflective about how it does it.
I don't think it's impossible to do good from within companies, but I do expect you need to be skilful, sophisticated, and somewhat experienced. These are AGI companies. Their goal is to build AGI sooner than their competitors build AGI. Their leadership are extremely focused on this. Whether the role is in governance or safety, it's reasonable to expect that ultimately you, as an employee, will be expected to help them do that (certainly not hinder them).
I will say though that I really enjoyed this - and it definitely imparts the, ah, appropriate degree of scepticism I might want potential applicants to have ;)
https://www.youtube.com/shorts/_DxM15ZuvG4
Feels a bit unrelated to the topic at hand?
While I like it, stories often sensationalize these issues ("AI by 2027: we're all gonna die") without providing good actionable steps. It almost feels like the climate change messaging from environmentalists who say "We're all gonna die by 2030 because of climate change! Protest in the streets!"
I know stories are very effective in communicating the urgency of AGI, and the end of the video has some resources pointing viewers to 80k. Nonetheless, I feel some dread ("oh gosh, there's nothing I can do"), and that is likely compounded by YouTube's younger audience (for example, college students who will graduate after 2027).
Therefore, I suggest that later videos give actionable steps, or areas to work in, for people who want to reduce risks from AI. Not only will this relieve the doomerism, but it will also give people relevant advice they can actually act on.
It’s gone viral! This is just the third day since release, and it has already reached 178k views, and it looks like it’s still growing fast. This is very hard to pull off for a brand-new channel. Massive kudos :)
>800k views as of now (4 days in), very impressive!
>1.2 million views in 5 days! Incredible!
>1.7 million views in 7 days! B-A-N-A-N-A-S!
With that many views (800k as of now), it might be worth looking into starting non-English sister channels as well.
The YouTube science channel Kurzgesagt, for example, also has very large German and Spanish channels (2m subscribers each, ~10% of the English channel's 24m). We (aisafety.berlin) would be happy to help with German if you ever want to prioritize that, though Spanish, Hindi, and maybe Mandarin seem more important.
Materializing AI 2027 with board game pieces was such a simple yet powerful idea. Brilliantly executed.
Congrats to Phoebe, Aric, Chana and the rest of the team.
Looking forward to the upcoming videos.
I have spread it through most of my relevant WhatsApp channels.
congrats to the whole 80K team (Chana, Aric, Phoebe, Sam), keep shipping more!
These guys absolutely worked their butts off to make this video, and I think the results show it :') Thanks Chana, Aric, Phoebe, Sam, and everyone for making something I'm so so so excited for the world to see!!
Best video I've seen yet on the story of AI, amazing job! I hope this gets the reach it deserves and needs.
The presenter Aric is extremely talented and compelling, hope to see him again in more material.
Can you say more about this? I largely have the opposite intuition: that presenting a specific set of empirical predictions (indeed "a story") requires more – rather than less – trust in the presenter as compared to a more abstract model with its assumptions and alternative explanations explicitly stated.
Empirically, I think they were right.
Astonishingly good <3 <3 <3
Everything is so polished and well communicated: chef's kiss! Nice work, team!
Big fan of the vid. Cried a little toward the end (when Aric says that AI2027 made him want to have the talk with his family)
Great idea and even better execution on the Youtube vids. I can't wait for more!
P.S. If it's not too personal, I'm just curious: how is your first video (on AI 2027) so incredibly polished? Did you work with a more experienced videography team?
I think someone shared on Twitter that they worked with Phoebe Brooks who also created a video series for GWWC.
This is incredibly well done. My new go-to resource for explaining AI risk to people
This is a good video; thanks for sharing.
But I have to ask: why is 80k Hours still including job listings for AGI development companies that are imposing extinction risks on humanity?
I see dozens of jobs on the 80k Hours job board for positions at OpenAI, Anthropic, xAI, etc -- and not just in AI safety roles, but in capabilities development, lobbying, propaganda, etc. And even the 'AI safety jobs' seem to be there for safety-washing/PR purposes, with no real influence on slowing down AI capabilities development.
If 80k Hours wants to take a principled stand against reckless AGI development, then please don't advertise jobs where EAs are enticed by $300,000+ salaries to push AGI development.
Hi Geoffrey,
I'm curious to know which roles we've posted which you consider to be capabilities development -- our policy is to not post capabilities roles at the frontier companies. We do aim to post jobs that are meaningfully able to contribute to safety and aren’t just safety-washing (and our views are discussed much more in depth here). Of course, we're not infallible, so if people see particular jobs they think are safety in name only, we always appreciate that being raised.
My view is these roles are going to be filled regardless. Wouldn't you want someone who is safety-conscious in them?
The tagline for the job board is: "Handpicked to help you tackle the world's most pressing problems with your career." I think that gives the reader the impression that, at least by default, the listed jobs are expected to have positive impact on the world, that they are better off being done well/faithfully than being unfilled or filled by incompetent candidates, etc.
Based on what I take to be Geoffrey's position here, the best case that could be made for listing these positions would be: it could be impactful to fill a position one thinks is net harmful to prevent it from being filled by someone else in a way that causes even more net harm. But if that's the theory of impact, I think one has to be very, very clear with the would-be applicant on what the theory is. I question whether you can do that effectively on a public job board.
For example, if one thinks that working in prisons is a deplorable thing to do, I submit that it would be low integrity to encourage people to work as prison guards by painting that work in a positive light (e.g., handpicked careers to help you tackle the nation's most pressing social-justice problems).
[The broader question of whether we're better off with safety-conscious people in these kinds of roles has been discussed in prior posts at some length, so I haven't attempted to restate that prior conversation.]
A clarification: We would not post roles if we thought they were net harmful and were hoping that somebody would counterfactually do less harm. I think that would be too morally fraught to propose to a stranger.
Relatedly, we would not post a job where we thought that to have a positive impact, you'd have to do the job badly.
We might post roles if we thought the average entrant would make the world worse, but a job board user would make the world better (due to the EA context our applicants typically have!). No cases of this come to mind immediately though. We post our jobs because we consider them promising opportunities to have a positive impact in the world, and expect job board users to do even more good than the average person.
Conor -- yes, I understand that you're making judgment calls about what's likely to be net harmful versus helpful.
But your judgment calls seem to assume -- implicitly or explicitly -- that ASI alignment and control are possible, eventually, at least in principle.
Why do you assume that it's possible, at all, to achieve reliable long-term alignment of ASI agents? I see no serious reason to think that it is possible. And I've never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, possible.
And if ASI alignment isn't possible, then all AI 'safety research' at AI companies aiming to build ASI is, in fact, just safety-washing. And it all increases X risk by giving a false sense of security, and encouraging capabilities development.
So, IMHO, 80k Hours should re-assess what it's doing by posting these ads for jobs inside AI companies -- which are arguably the most dangerous organizations in human history.
Jason -- your reply cuts to the heart of the matter.
Is it ethical to try to do good by taking a job within an evil and reckless industry? To 'steer it' in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?
I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly by warning talented young people not to work inside it.
FWIW my impression of the EA community's position is that we need to build safe AI, not that we need to stop AI development altogether (although some may hold this view).
Stopping AI development altogether misses out on all the benefits from AI, which could genuinely be extensive and could include helping us with other very pressing problems (global health, animal welfare etc.).
I do think one can do a tremendous amount of good at OpenAI, and a tremendous amount of harm. I am in favor of roles at AI companies being on the 80,000 Hours job board so that the former is more likely.
JackM - these alleged 'tremendous' benefits are all hypothetical and speculative.
Whereas the likely X risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned by them.
This is why I think it's deeply unethical for 80k Hours to post jobs to work on ASI within AI companies.
I share your concern about x-risk from ASI, that's why I want safety-aligned people in these roles as opposed to people who aren't concerned about the risks.
There are genuine proposals on how to align ASI, so I think it's possible. I'm not sure what the chances are, but I think it's possible. I think the most promising proposals involve using advanced AI to assist with oversight, interpretability, and recursive alignment tasks—eventually building a feedback loop where aligned systems help align more powerful successors.
I don't agree that benefits are speculative, by the way. DeepMind has already won the Nobel Prize in Chemistry for their work on protein folding.
EDIT: 80,000 Hours also doesn't seem to promote all roles, only those which contribute to safety, which seems reasonable to me.
I was thoroughly impressed and found the video very approachable, especially as someone who had been intentionally avoiding AI 2027 content for mental health reasons. I am excited to see more videos, and Aric's demeanour and the video's style helped me get through the otherwise dark content.
I watched this at the weekend and see that you're now approaching 900k views!
You deserve it (and more). Absolutely incredible video - a perfect balance between confidence, humility, seriousness, and levity.
Can't wait to see what's coming next!