As the founder of The Frame, the premier fellowship for content creators in AI Safety, I'm happy to share my thoughts, experience, and research on this topic. I've spent significant time investigating what's missing, both through that research and through conversations with other creators and cohort members.
This fund genuinely excites me. I think this remains one of the most underrated and important areas of AI Safety.
I feel this way because media, film, and other content is usually people's first interaction with AI Safety (as with most topics), and it determines whether they eventually contribute to the field (and how quickly), or at the very least contribute to healthy discourse. It is clear to me that the marketing funnel for AI Safety currently looks like a diamond when it should be an upside-down triangle; a wider top of funnel consequently grows the whole funnel through downstream effects.
Even within comms and media, there is still a funnel with underreported gaps. A framework outlined by my friend Petr Lebedev, the communicator at Palisade Research, captures how audiences actually move through this topic:
- Meh: "AI is useless, all hype, I don't understand most of the terms." (This is the starting point for most people; simply informing them about what AI is, how it works at a basic level, and the general effects it can have makes for good content.)
- Wow: "AI is kind of impressive," based on things they see, like an AI-generated image, voice AI, or ChatGPT techniques. (A ChatGPT demo moment or simple use-case interactions make good pieces of content/awareness.)
- Wow, good: "AI can do transformative things. It is a powerful technology with transformative potential at every level." (AlphaGo, medical breakthroughs, and other higher-level capabilities.)
- Wow, bad: "AI can be weaponized; it can be misused by bad actors." (Mass cyberattacks, power concentration, AI-induced psychosis.)
- Wow, bad bad: "AI itself poses a danger. If it is intelligent enough to be misused, it may be intelligent enough to cause harm on its own." (Risk from power-seeking AI, existential risk.)
While I agree with most EAs and AI Safety folks that stage 5 might be the most important, it often loses people's attention. Many creators cover stages 1-3 and never move their audiences further. We need filmmakers and creators who can build trust with their audience, meet them where they are, and move them forward through this journey without leaving them feeling helpless.
The rarest and most valuable creators are those who can:
- Genuinely understand all these stages.
- Explain them accessibly.
- Connect it to meaningful actions people can take.
- Build and maintain relationships with the AI Safety community.
That being said, here are my thoughts on topics that are both underreported and underestimated:
- Accessible coverage of actual research papers and LessWrong-style debates translated for general audiences
- Power concentration and post-AGI economics- what does the world look like structurally?
- Ground-level stories from underrepresented communities already being affected by AI
- Honest coverage of what organizations and labs are actually doing right
- Angry, urgent content that conveys the real severity without doomism
- Visualizations of exponential AI progress- humans are genuinely bad at intuiting this
- Funny and shareable short-form content.
- Cross-pollination- AI safety angles embedded in content from creators whose primary topic is something else entirely
- Vlog-style content where AI safety is just the natural topic of conversation
- Black Mirror-style narrative fiction exploring realistic futures
- Content that takes people from fearful and helpless to informed and empowered (very actionable walkthroughs)
Additionally, instead of only thinking about underreported topics, I think we need more of the SAME existing content told by different voices. Comms and media (unlike research) are narrator-dependent: who tells the story matters as much as the story itself. People listen to, and trust, people they can relate to, which is also a large part of why trust has shifted from institutions to individuals. Currently, AI Safety is largely white men, concentrated in the Bay Area. We need more diverse voices. The Frame (https://framefellowship.com, the fellowship I run) is representative of how I'd like the space to look: we're 50% female and 50% male, with people of all races, ethnicities, and backgrounds, and we can already see the differences in the audiences they reach.
Hope this helps, and I hope we can collaborate on building this much-needed infrastructure together.