If you want to reach a very wide audience the N times they need to read, think about, and internalize a message, you can either write N pieces that each reach that whole audience or N×y pieces that each reach a fraction of it. Generally, if you can efficiently produce N×y pieces, the latter is going to be easier than the former. This is what I mean by comms being a numbers game, and I take it to be pretty foundational to a lot of comms work in marketing, political campaigning, and beyond.
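As a toy back-of-the-envelope, here's the arithmetic with made-up figures (the audience size, exposure count, and y are all assumptions for illustration, and overlap between pieces is ignored):

```python
# Toy illustration of the numbers game. All figures are made up; overlap
# between pieces and differences in per-piece cost are ignored.
audience = 1_000_000        # people you want to reach
N = 5                       # exposures each person needs to internalize the message
y = 20                      # how many narrower pieces you can write per broad piece

# Option A: N broad pieces, each reaching the whole audience
broad_impressions = N * audience

# Option B: N*y narrower pieces, each reaching 1/y of the audience
narrow_impressions = (N * y) * (audience // y)

print(broad_impressions, narrow_impressions)  # 5,000,000 impressions either way
```

Total impressions come out the same either way; the question is just which set of pieces is cheaper to produce and place.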
Though I also agree with Caleb's adjacent take, largely because if you can build an AI company then you can create much greater coverage for your ideas, arguments, or data, per the reasoning above.
Of course there's large and there's large. We may well disagree about how good LLMs are at writing. I think Claude is around the 90th percentile of tech journalists in factual accuracy, clarity, and style.
Though contra Rebecca I have not used my AI workflow on my quick takes; I must just have that silvery Bing voice 😊
AI swarm writers:
Comms is a big bottleneck for AI safety talent, policy, and public awareness. Currently the best human writers are better than the best LLMs, but LLMs are better writers than 99% of humans and much easier to align to a message and style than human employees. In many venues (particularly social media) factors other than writing and analytical quality drive discourse. This makes a lot of comms a numbers game. And the way you win a numbers game is by scaling a swarm of AI writers.
I'd like to see some people with good comms taste and epistemics, thoughtful quality control, and the diligence to keep at it experiment with running swarms of AI writers that produce and distribute lots of decent-quality content on AI safety. Probably the easiest place to start would be social media, where outputs are shorter and the numbers game is much starker. As the swarms got good, they could be used for other comms, like blogs and op-eds. 4o is good at designing cartoons and memes, which could also be put to use.
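To make the shape of this concrete, here's a minimal sketch of one swarm iteration, assuming the OpenAI Python SDK; the model name, prompts, and quality bar are all placeholders, and anything it keeps would still go to a human reviewer:

```python
# Minimal sketch of one "swarm" iteration: fan a single brief out into many
# short drafts, then apply a crude quality-control pass before human review.
# Model name, prompts, and the quality threshold are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRIEF = "One-paragraph summary of the argument you want communicated."
STYLE = "Plain, concrete, non-alarmist; no jargon; under 280 characters."

def draft_post(angle: str) -> str:
    """Generate one short social post on the brief, from a given angle."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"You write short social posts. Style: {STYLE}"},
            {"role": "user", "content": f"Brief: {BRIEF}\nAngle: {angle}\nWrite one post."},
        ],
    )
    return resp.choices[0].message.content.strip()

def quality_score(post: str) -> int:
    """Ask the model to rate a draft 1-10 for accuracy, clarity, and tone."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Rate this post 1-10 for accuracy, clarity, and tone. Reply with a single integer."},
            {"role": "user", "content": post},
        ],
    )
    return int(resp.choices[0].message.content.strip())

angles = ["analogy to past tech regulation", "a concrete near-term risk", "a common misconception"]
drafts = [draft_post(a) for a in angles]
keepers = [d for d in drafts if quality_score(d) >= 8]  # the rest are discarded
# keepers would then go to a human for final review before posting
```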
To be clear, there is a failure mode here where elites come to associate AI safety with spammy bad reasoning and where mass content dilutes the public quality of the arguments for safety, which at the limit are very strong. But at the moment there is virtually zero content on AI safety, so the bar for improving discourse quality is relatively low.
I've found some AI workflows that work pretty well, like recording long voice notes, turning them into transcripts, and using the transcript as context for the LLM to write. I'd be happy to walk interested people through this or, if helpful, write something public.
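For anyone curious, here's a minimal sketch of that workflow, assuming the OpenAI Python SDK (Whisper for the transcript, a chat model for the draft); the file name, models, and prompt are placeholders rather than a recommendation:

```python
# Sketch of the voice-note workflow: transcribe a long voice memo, then use
# the transcript as context for a drafting pass.
from openai import OpenAI

client = OpenAI()

# 1. Turn the voice note into a transcript
with open("voice_note.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Use the transcript as context for the LLM to write a first draft
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Draft a blog post in the speaker's own voice and structure."},
        {"role": "user", "content": f"Transcript of my voice note:\n\n{transcript.text}"},
    ],
)
print(resp.choices[0].message.content)
```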
You're probably right that operating a data center doesn't make sense. The things that initially pushed me in that direction were concerns about the robustness of compute availability and the aim to cut into the supply of frontier chips available to labs rather than funge out other cloud-compute users, but it's likely way too much overhead.
I don't worry about academics preferring to spend on other things; the specialization is for efficient administration and a clear marketing narrative.
Just Compute: an idea for a highly scalable AI nonprofit
Just Compute is a 501(c)(3) organization whose mission is to buy cutting-edge chips and distribute them to academic researchers and nonprofits doing research for societal benefit. Researchers can apply to Just Compute for access to the JC cluster, which supports research in AI safety, AI for good, AI for science, AI ethics, and the like, through a transparent and streamlined process. It's a lean nonprofit organization with a highly ambitious founder who seeks to raise billions of dollars for compute.
The case for Just Compute is fairly robust: it supports socially valuable AI research and creates opportunities for good researchers to work on AI for social benefit without having to join a scaling lab. And because frontier capabilities are compute-constrained, it also slows down the frontier by using up a portion of the total available compute. The sales case is very strong, as it can attract a wide variety of donors interested in supporting AI research in academia and at nonprofits. Donors could even earmark their donations for specific areas of research if they'd like, perhaps with a portion of all donations mandatorily allocated to whatever JC sees as the most important area of AI research.
If a pair of co-founders wanted to launch this project, I think it could be a very cool moonshot!
70% disagree ➔ 30% agree

Edit: OK, almost done being nerdsniped by this. I think it basically comes down to:
Maybe something survives a paperclipper. It wants to turn all energy into data centers, but it's at least conceivable that something survives this. The optimizer might, say, disassemble Mercury and Venus to turn them into a Matryoshka brain but not need further such materials from Earth. Earth might still get some heat emanating from the sun despite all of the solar panels nested around it, and be the right temperature for turning the whole planet into data centers. But not all materials can be turned into data centers, so maybe some of the ocean is left in place. Maybe the Earth's atmosphere is intentionally cooled for faster data centers, but there's still geothermal heat for some bizarre animals.
But probably not. As @Davidmanheim (who changed my mind on this) points out, you'd probably still want to disassemble the Earth to mine out all of the key resources for computing, whether for the Matryoshka brain or the Jupiter brain, and the most efficient way to do that probably isn't cautious precision mining.
Absent a powerful optimizer, you'd expect some animals to survive. There are a lot of fish, some of them very deep in the ocean, and ocean life seems wildly adaptive, particularly down at the bottom, where creatures do crazy stuff like feeding off volcanic heat vents, building iron into their bodies, and withstanding pressures that would crumple a submarine.
So by far the biggest parameter is how much you expect the world to end from a powerful optimizer. This is the biggest threat in the near term, though if we don't build ASI, or if we build it safely, other existential threats loom larger.