
Cross-posted from The Counterfactual by the Forum Team.


A concrete strategy for deploying the largest wave of philanthropic capital in history

The OpenAI Foundation holds $180 billion in equity. Anthropic’s co-founders have pledged to donate 80% of their wealth. When the time comes to spend all this money, what should we actually do with it?

Here’s my best guess.

The problem: scaling what we have is not enough

When most people think about how to solve AI safety, they think about what we’re already doing, and how to scale it up. Concretely, this looks like: scaling fellowships like MATS, Pivotal, and ERA; investing more money into AI safety research organizations like Redwood, METR, and MIRI; and perhaps, more recently, expanding programs like the Generator Residency.

This is important work, but it is not sufficient to win.

The Maven system that killed 120 children in Minab wasn’t misaligned. Claude didn’t go rogue. The system did exactly what it was designed to do.

The failure was that no government framework existed to regulate how AI gets integrated into military kill chains, how fast targeting decisions can be compressed, or what human oversight is required before a strike.

Alignment research can’t fix that, but legislation might.

The same is true across every domain where AI is already being deployed. No amount of technical safety research creates oversight requirements for AI in immigration enforcement, or addresses illegal data centers poisoning the air, or strengthens chip export controls. These are governance problems, and governance problems require political power to solve.

Even the parts of AI safety that are about technical alignment– e.g. making sure models follow instructions, don’t deceive, don’t pursue unintended goals, and don’t help people make bioweapons– only matter if frontier labs are required to implement them. Without legislation, every technical safety advance is effectively optional.

So what does AI safety actually do with billions of dollars? It can’t just pour the money into research. The field has a few thousand people working on AI risk; the absorptive capacity for that much research funding doesn’t exist.

But two categories of spending can absorb billions immediately, because the infrastructure to deploy that capital already exists outside the field.



1. Billions into mass media and movement building

When pollsters ask Americans whether AI should have safety guardrails, the answer is overwhelmingly yes. But those numbers are misleading. The question isn’t whether people support AI regulation in the abstract; it’s whether they care enough to vote on it, call their representative about it, or show up at a town hall.

Right now, they don’t. In the Yale Youth Poll’s Spring 2026 survey, voters ranked AI near the bottom of 30 issues– at 24%, far below cost of living (84%), healthcare (75%), democracy (75%), and corruption (72%).

People support AI regulation when you ask them directly, but they’re not thinking about it otherwise. It isn’t driving votes or calls to Congress, and until it does, the 73% of Americans who purportedly favor safety mandates change nothing about what companies are actually required to do.

But we can amplify existing sentiment, and fast, because doing so doesn’t require building a whole new field. It requires buying into Hollywood, Madison Avenue, and the campaign-strategist ecosystem– industries that already know how to reach hundreds of millions of people. A blockbuster film like Barbie costs $200–300 million to produce and market. Obama’s 2012 reelection campaign spent $1.1 billion. A Super Bowl ad costs $10 million for 30 seconds.

Sustained media attention has historically been one of the most powerful tools for building political will; Reagan explicitly cited The Day After as shifting his thinking on nuclear disarmament, and The Social Dilemma put Big Tech accountability in front of 100 million Netflix households.

Unfortunately, AI safety hasn’t really tried yet. The field’s biggest media project to date– the AI Doc– made $713,480 at the domestic box office, total. Opening weekend was $646,020 in 786 theaters; that’s $822 per theater. By day four, it was making $86 per theater.

Divide the AI Doc’s total gross by the average U.S. movie ticket price and that translates to roughly 64,000 tickets sold, total.
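As a back-of-the-envelope check (the ≈$11 average ticket price here is my own assumption; only the gross is given above):

$$
\frac{\$713{,}480\ \text{total gross}}{\approx \$11\ \text{per ticket}} \approx 64{,}000\text{–}65{,}000\ \text{tickets}
$$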

The movie essentially disappeared after one weekend. Even accounting for screenings and eventual streaming, its total reach will probably be a fraction of a single MrBeast video.

So, the AI Doc had an Oscar-winning director and rave reviews, but it sold only about 64,000 tickets. In contrast, The Social Dilemma reached 100 million households because Netflix put it on every subscriber’s home screen. AI safety needs that kind of reach– which means funding content designed for streaming platforms from the start, and running sustained media operations.

And, when AI safetyists put real money behind reaching people, I suspect we’ll find that they’re already with us.

The coalition practically builds itself: you don’t need everyone to care about loss of control and superintelligence alignment. You need privacy advocates furious about AI-powered mass surveillance, environmentalists furious about illegal gas turbines poisoning children in Memphis, parents furious about chatbots encouraging their kids to kill themselves, women furious about Grok still making sexualized images of them, workers furious about displacement with no transition plan, national security people furious about chips being sold to adversaries, anti-war activists furious about Minab.[1]

These constituencies already exist and already care, but nobody is organizing them at scale. Humans First is one of the first to try; they’re conservative, populist, and recently wrapped up a nationwide town hall tour. But Humans First is already funding-constrained, and they’re only one side of the aisle. We need much more, we need bipartisan coalitions, and we need to start now.

The strategy, concretely, entails: funding a professional media operation, funding national ad campaigns, funding organizations focused on coalition-building, and funding AIPI and Gallup to do polling and message testing so every dollar of media spend is targeted at what actually motivates people to take action.



2. Billions into political infrastructure

Why does public engagement matter? Because awareness translates into votes, and votes translate into political power. But a movement alone may not be enough. We also need the political infrastructure to channel public energy into concrete, specific legislative wins.

Lobbying works, which is why the industry spends so much on it:

In 2025, the AI industry spent $105 million on federal lobbying. One in four federal lobbyists reported working on AI. OpenAI’s super PAC has a $125 million war chest. Meta is spending $65 million to elect AI-friendly state officials. After Nvidia spent $4.97M lobbying in 2025– seven times their 2024 spend– the Trump administration weakened export controls and the bipartisan GAIN AI Act was killed in conference.

On the safety side, the asymmetry is staggering. Briefly: the CAIS Action Fund spent $310,000 in all of 2025, and although Anthropic donated $20 million to Public First, that money “isn’t allowed to be used in the midterm battles.”

Note: I’ve written about this topic more expansively here.

AI safety needs to seriously show up on Capitol Hill. What does this look like, concretely? Retain bipartisan K Street firms; fund super PACs that counter Leading the Future; scale the CAIS Action Fund and the AI Policy Network; reward politicians who back safety legislation and make it expensive to oppose it.

The AI industry is currently winning in committee rooms, on K Street, and in campaign finance. To match that, AI safety needs (1) movement-building that delivers the votes, and (2) political infrastructure that converts those votes into leverage where policy fights actually happen.


3. Build the capacity to deploy the rest

We can rapidly deploy several billion dollars into media, movement-building, and political infrastructure. Concurrently, we must build the field’s ability to absorb the rest. How do we do that?

a. Solve the grantmaker bottleneck

I’ve written more on this here.

Currently, the entire field of AI safety has roughly 30 to 60 grantmakers, and Coefficient Giving’s Technical AI Safety team deployed $140 million last year with three investigators.
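To make the bandwidth constraint concrete (this is an implied figure, not one stated directly):

$$
\$140\ \text{million} \div 3\ \text{investigators} \approx \$47\ \text{million deployed per grantmaker per year}
$$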

Julian Hazell, describing his experience as a grantmaker at CG, wrote:

[A]s our team has tripled headcount in the past year, we’ve also ~tripled the amount of grants we’re making, and we think the distribution of impact per dollar of our grantmaking has stayed about the same. That is, we’ve about tripled the amount of grant money we’ve moved towards the top end of the impact distribution as well as at the marginal end… [w]e want to scale further in 2026, but… we’re often bottlenecked by our grantmaker bandwidth.

Grantmaker capacity is upstream of everything else in the field– seeding new organizations, evaluating grants faster, deploying capital faster– and the lack of it is a binding constraint on all of them.

What does addressing the grantmaker talent bottleneck look like? We need to: fund and expand an official grantmaking stream as part of the Astra Fellowship (which already has grantmaker mentors and works directly with Coefficient Giving); build a BlueDot AI Safety Grantmaking Fundamentals course; run regranting and fellowship programs with Grantmaking.ai; and to help reduce the burden on individual donors making donation decisions, add better credibility signals to Manifund.

b. Poach top technical talent

Over the past decade, OpenAI, DeepMind, and Meta have pulled the world’s best ML researchers out of academia by offering up to 5x their university salaries, creating an “AI brain drain” that’s well-documented. To scale technical talent, we should do what the AI labs did for academia, but in reverse: pay top dollar to pull senior researchers out of capabilities labs and into safety research. We should also fund dedicated compute clusters for safety work.

We probably can’t match the $250M outliers, but most senior ML researchers at capabilities labs make $500K–$1M. Competitive packages at that level, combined with working on the most important problem in the world, could move enough people onto the safety side to make a real difference.

c. Address the generalist bottleneck

Currently, the AI safety talent pipeline is over-optimized for researchers. Every major fellowship produces alignment researchers; until very recently, nobody was working on building pipelines for operations managers, communicators, recruiters, and fundraisers– roles the field desperately needs.

The Generator Residency is the first program designed to fix this. However, its first cohort this summer is only slated to accommodate 15–30 residents. We need to scale and replicate this kind of program at AI safety hubs globally.

d. Fix fellowship pipelines

Fellowships today have mentors and research managers, but nobody working on career placement. Funding reverse headhunters embedded in fellowship programs– people whose job is placing graduates into full-time roles at established organizations– would reduce “fellowship-hopping,” create more entry points, and improve the field’s absorptive capacity.

Whatever remains should be used to seed billion-dollar moonshot prizes for solving key technical problems and to fund an endowment that will keep the field running for years to come, independent of new donations.


Conclusion

On the current trajectory, I think it’s likely the AI industry will win by default, because they’re on Capitol Hill, and we’re… largely not.

However, the good news is that changing that doesn’t necessarily require a breakthrough in alignment theory. It requires money, deployed into systems that already exist, to do things humans already know how to do: organize, persuade, vote, and legislate. The only question is whether we can do it fast enough.


For context, I originally wrote this for Dwarkesh’s essay contest. For the submission version, I added this note at the end: “The prompt asks what I’d do if I were in charge of the OpenAI Foundation. This is what I’d do. Whether I’d be allowed to is a different question, and one that points at why we can’t rely on AI companies’ own foundations to make AI go well.”

Thank you to Jason Hausenloy, Parv Mahajan, Sam Smith, Shannon Yang, and Jack Douglass for feedback and discussions.

  1. ^

    More on this soon; I’ve been working on a coalition-building project.
