I run the premier fellowship that supports content creators in AI Safety through funding, production, and mentorship. Our fellows include Oscar- and BAFTA-featured filmmakers, former journalists, viral creators, and AI Safety researchers turned creators.
Our mentors include Rob Miles, AI in Context, Species, Doom Debates, and others. Our partners include Control AI, CivAI, Seismic, Manifund, Mox, and many other credible organizations in the field.
My goal is to build AI Safety media and video-first comms infrastructure, and to communicate important AI Safety topics in an understandable, accessible, and actionable way.
Connect me to AI Safety folks who might want me to:
Connect you with communicators, media, and creators; connect you with VCs for for-profit AI Safety ventures; or advise you on how to create maximal impact.
As the founder of the premier fellowship for content creators in AI Safety (The Frame), I'm happy to share my thoughts, experience, and research on this topic. I've spent significant time investigating what's missing in the space, both through dedicated research and through conversations with other creators and cohort members.
This fund genuinely excites me. I think this remains one of the most underrated and important areas of AI Safety.
I feel this way because media, films, and content are usually people's first interaction with AI Safety (as with most fields), and that first interaction determines whether they eventually contribute to the field (and how fast), or at the least contribute to healthy discourse. It is clear to me that the marketing funnel for AI Safety currently looks like a diamond when it should be an upside-down triangle; a larger top of funnel consequently grows the whole funnel through downstream effects.
Even within comms and media, I think there's a funnel with underreported issues. A framework outlined by my friend Petr Lebedev, the communicator at Palisade Research, captures how audiences actually move through this topic:
While I agree with most EAs and AI Safety folks that step 5 might be the most important, it often loses people's attention. There are a lot of creators who cover steps 1-3 and never move their audience further. We need filmmakers and creators who can build trust with their audience and guide them through this journey, meeting them where they are and moving them forward without leaving them feeling helpless.
The rarest and most valuable creators are those who can:
That being said, here are my thoughts on underreported and underestimated issues:
Additionally, beyond covering these underreported topics, I think we need more of the SAME existing content told by different voices. Comms and media (unlike research) are narrator-dependent: the person who tells the story matters nearly as much as the story itself. People listen to, and trust, people they can relate to. This is also a major reason trust has shifted from institutions to individuals. Currently, AI Safety voices are largely white men, concentrated in the Bay Area. We need more diverse voices. The Frame (https://framefellowship.com, the fellowship I run) is representative of how I'd like the space to look: we're 50% female and 50% male, with people of all races, ethnicities, and backgrounds, and we can see the differences in the audiences they reach.
Hope this helps, and hopefully we can collaborate on building this much-needed infrastructure together.