Adapted from a Manifund proposal I announced yesterday.

In the past two weeks, I have been posting daily AI-Safety-related clips on TikTok and YouTube, reaching more than 1M people.

Screenshot from my TikTok channel, taken a couple of days ago

I'm doing this because I believe short-form AI Safety content is currently neglected: most outreach efforts target long-form YouTube viewers, missing younger generations who get information from TikTok.

With 150M active TikTok users in the UK & US, this audience represents massive untouched potential for our talent pipeline (e.g., Alice Blair, who recently dropped out of MIT to work at the Center for AI Safety as a Technical Writer, is the kind of person we'd want to reach).

TikTok Analytics from Jul 14 to Aug 10 


On Manifund, people have been asking me what kinds of messages I wanted to broadcast and what outcomes I wanted to achieve with this. Here's my answer:

My goal is to promote content that is fully or partly about AI Safety:

  1. Fully AI Safety content: Tristan Harris (176k views) on Anthropic's blackmail results summarizes recent AI Safety research in a way that is accessible to most people. Daniel Kokotajlo (55k views) on fast takeoff scenarios introduces the concept of automated AI R&D and related AI governance issues. These show that AI Safety content can get high reach if the delivery or editing is good enough.
  2. Partly / Indirectly AI Safety content: Ilya Sutskever (156k views) on AI doing all human jobs, the need for honest superintelligence, and AI being the biggest issue of our time. Sam Altman (400k views) on sycophancy. These build general AI awareness that makes viewers more receptive to safety messages going forward.
  3. "AI is a big deal" content: Sam Altman (600k views) on ChatGPT logs not being private in the case of a lawsuit. These videos aren't directly about safety but establish that AI is becoming a major societal issue.

The overall strategy is to prioritize fully safety-focused content with the potential for high reach, then the partly or indirectly safety-related content that walks people through why AI could be a risk, and occasionally post content that is more generally about AI being a big deal, bringing even more people in.

And here is the accompanying diagram I made:

Although the diagram above makes it seem like calls to action and clicking on links are the "end goals", I believe that "Progressive Exposure" is actually more important.

Progressive exposure: Most people who ended up working in AI Safety needed multiple exposures from different sources before taking action. Even viewers who don't click anywhere are getting those crucial early exposures that add up over time.

And I'll go so far as to say that multiple exposures are actually needed to fully grok basic AI Risk arguments.

To give a personal example, when I first wanted to learn about AI 2027, I listened to Dwarkesh's interview of Daniel Kokotajlo & Scott Alexander to get an initial intuition for it. I then read the full post while listening to the audio version, and was able to grasp many more details and nuances. A bit later, I watched Drew's AI 2027 video, which made me feel the scenario through the animated timeline of events and visceral music. Finally, a month ago I watched 80k's video, which made things even more concrete through its board game elements. And when I started cutting clips from multiple Daniel Kokotajlo interviews, I internalized the core elements of the story even more (though I'm still missing a lot of the background research).

Essentially, what I'm trying to say is that as we try to onboard more talent into useful AI Safety work, we probably don't need them to click on a single link that leads them to take action or sign up for career coaching.

Instead, the algorithms will feed people more and more of that kind of content if they find it interesting, and they'll end up finding the relevant resources on their own if they're sufficiently curious and motivated.

Curated websites like aisafety.com or fellowships are there to shorten the time it takes to transition from "learning about AI risk" to "doing relevant work". And the goal of outreach is to accelerate people's progressive exposure to AI Safety ideas.

Longer description of the project here. Clips here.

Comments

FWIW, I'm (Alice Blair) not someone you could have reached through TikTok - I never used it and was using LessWrong well before I ever heard of TikTok. I appreciate this post, but I'm skeptical because many of the most agentic people self-select out of social media stimulation loops like TikTok.

Hi Alice, thanks for the datapoint. It's useful to know you have been a LessWrong user for a long time.

I agree with your overall point that the people we want to reach would be on platforms that have a higher signal-to-noise ratio.

Here are some reasons why I think it might still make sense to post short-form (not trying to convince you, I just think these arguments are worth mentioning for anyone reading this):

  • Even if there are more people we want to reach who watch long-form vs. short-form (or who even read LessWrong), what actually matters is whether short-form content is neglected, and whether the people who watch short-form would have ended up watching long-form anyway. I think there's a case for it being neglected, but I agree that a lot of potentially impactful people who watch TikTok probably also watch YouTube.
  • The super-agentic people who have developed substantial "cog sec" and manage to not look at any social media at all would probably only be reachable via LessWrong / arXiv papers, which is an argument that undermines most AI Safety comms, not just short-form. To that I'd say:
    • I remember Dwarkesh saying somewhere that 30% of his podcast growth comes from short-form. This hints at short-form bringing in potential long-form viewers / listeners, and those Dwarkesh listeners are people we'd want to reach.
    • YouTube pushes short-form aggressively. And on platforms like Instagram it's even harder to ignore.
      • It's possible to not use Instagram at all and to disable short-form recommendations on YouTube, but every time you add a "cog sec" criterion you're filtering out even more people. (A substantial share of my short-form views comes from posting on YT Shorts, and I'm planning to extend to Instagram soon.)
  • Similarly to what @Cameron Holmes argues below, broad public awareness is also a nice externality, beyond just getting more AI Safety talent.
  • You could imagine reaching people indirectly (think of a friend who does watch short-form content telling you over lunch about what they've learned).
  • When I actually look at the data on what kind of viewers watch my short-form content, it's essentially older people (>24yo, even >34yo) from high-income countries like the US. It's surprisingly not younger people (whom you might expect to have shorter attention spans / be less agentic).

Makes sense, I agree that neglectedness is still pretty high here even though more people are getting into this side of comms. I'm working on broadly similar things, but not explicitly short-form video content.

This seems great, although I expect most of the impact from this could come from broader public awareness rather than from new AI Safety talent as such, and it might be worth leaning into that framing/goal a bit more?

I don't know exactly how to weigh 10k people aware of AIS (enough to consider it when voting) vs an additional person working on it full time, but I feel like the difference could be of that magnitude.

Tangent: I really like adding AIS topics into existing venues (e.g. Chana's video on Computerphile) for converting talent to AIS, as I think it skips a lot of significant filters. Although I do get that this could still be the first step on that path (through multiple exposures).
