AI swarm writers:
Comms is a big bottleneck for AI safety talent, policy, and public awareness. Currently the best human writers are better than the best LLMs, but LLMs are better writers than 99% of humans and much easier to align to a message and style than human employees. In many venues (particularly social media) factors other than writing and analytical quality drive discourse. This makes a lot of comms a numbers game. And the way you win a numbers game is by scaling a swarm of AI writers.
I'd like to see some people with good comms taste and epistemics, thoughtful quality control, and the diligence to keep at it experiment with controlling swarms of AI writers producing and distributing lots of decent-quality content on AI safety. Probably the easiest place to start would be social media, where outputs are shorter and the numbers game is much starker. As the swarms got good, they could be used for other comms, like blogs and op-eds. 4o is good at designing cartoons and memes, which could also be put to use.
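To make the swarm idea concrete, here's a minimal sketch of what the generation side might look like, assuming the OpenAI Python SDK; the model name, style guide, and talking points are placeholders, and the human review pass at the end is where the comms taste and quality control actually come in.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder message/style constraints; in practice these would come from a
# carefully maintained style guide and a set of vetted talking points.
STYLE_GUIDE = "Plain, concrete, non-alarmist. No hype, no jargon."
TALKING_POINTS = [
    "Frontier AI systems are improving faster than oversight is.",
    "Safety research is underfunded relative to capabilities research.",
]

def draft_posts(point: str, n_drafts: int = 5) -> list[str]:
    """Generate several short social-media drafts for one talking point."""
    drafts = []
    for _ in range(n_drafts):
        resp = client.chat.completions.create(
            model="gpt-4o",   # assumed model name; any capable chat model works
            temperature=1.0,  # some diversity across drafts
            messages=[
                {"role": "system",
                 "content": f"Write one short social media post. Style: {STYLE_GUIDE}"},
                {"role": "user", "content": f"Talking point: {point}"},
            ],
        )
        drafts.append(resp.choices[0].message.content)
    return drafts

# A human with good comms taste reviews each batch before anything gets posted.
for point in TALKING_POINTS:
    for i, draft in enumerate(draft_posts(point), start=1):
        print(f"--- {point} | draft {i} ---\n{draft}\n")
```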
To be clear, there is a failure mode here where elites associate AI safety with spammy bad reasoning and where mass content dilutes the public quality of the arguments for safety, which, at the limit, are very strong. But at the moment there is virtually zero content on AI safety, making the bar for improving discourse quality relatively low.
I've found some AI workflows that work pretty well, like recording long voice notes, turning them into transcripts, and using the transcript as context for the LLM to write. I'd be happy to walk interested people through this or, if helpful, write something public.
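For anyone who wants to try this, here's a minimal sketch of the voice-note-to-draft loop, assuming the OpenAI Python SDK; the file name, model names, and prompts are placeholders, and any speech-to-text plus any capable chat model would work just as well.

```python
from openai import OpenAI

client = OpenAI()

# Transcribe a long voice note (model name is an assumption; any speech-to-text works).
with open("voice_note.m4a", "rb") as f:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

# Use the transcript as context for the LLM to write the draft.
draft = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system",
         "content": "Draft a post in my voice. Stay faithful to the ideas and tone in the transcript."},
        {"role": "user",
         "content": f"Transcript of my voice note:\n\n{transcript.text}\n\nTurn this into a tight ~600-word post."},
    ],
)
print(draft.choices[0].message.content)
```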
You're probably right that operating a data center doesn't make sense. The initial things that pushed me in that direction were concerns about the robustness of compute availability and the aim to cut into the supply of frontier chips available to labs, rather than funge out other cloud compute users, but it's likely way too much overhead.
I don't worry about academics preferring to spend on other things; the point is specialization for efficient administration and a clear marketing narrative.
Just Compute: an idea for a highly scalable AI nonprofit
Just Compute is a 501(c)(3) organization whose mission is to buy cutting-edge chips and distribute them to academic researchers and nonprofits doing research for societal benefit. Researchers can apply to Just Compute for access to the JC cluster, which supports research in AI safety, AI for good, AI for science, AI ethics, and the like, through a transparent and streamlined process. It's a lean nonprofit organization with a highly ambitious founder who seeks to raise billions of dollars for compute.
The case for Just Compute is fairly robust: it supports socially valuable AI research and creates opportunities for good researchers to work in AI for social benefit without having to join a scaling lab. And because frontier capabilities are compute-constrained, it also slows down the frontier by using up a portion of the total available compute. The sales case is very strong, as it can attract a wide variety of donors interested in supporting AI research in the academy and at nonprofits. Donors can even earmark their donations for specific areas of research if they'd like, perhaps with a portion of donations mandatorily allocated to whatever JC sees as the most important area of AI research.
If a pair of co-founders wanted to launch this project, I think it could be a very cool moonshot!
I think this is true, but also that building a successful for-profit that achieves some of the goals you set out requires an inherently narrower set of skills: you need to do market research, find product-market fit, handle customer relations and P&L, figure out how to scale teams and products, etc. These are skills that need to be learned, whereas for nonprofit work you can just do your research or whatever. Some of them involve a bunch of soft skills and a kind of scale/customer mindset I don't commonly see in EA.
I think this is a good thing to have in the toolkit and it has been underleveraged in the past, so I'm glad you posted this. But imo the stronger considerations for most EAs are that they are likely a poor personal fit for for-profit work (especially given that prior experience is the biggest predictor of success) and that capital incentives are very hard to align with most impactful aims.
I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking of things like clinical trial requirements for vaccines, UV light, antitrust issues facing companies collaborating on safety and security, and maybe housing (though I'm not sure which housing issues are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the issue becomes polarized, or priorities are poorly set, hence collaborating with experts. I doubt there's that much useful stuff to be done here, but marginal deregulation looks very easy right now, and it seems good to strike while the iron is hot.
I think these are fair points. I agree the info hazard stuff has smothered a lot of talent development and field building, and I agree the case for x-risk from misaligned advanced AI is more compelling. At the same time, I don't talk to a lot of EAs and people in the broader ecosystem these days who are laser-focused on extinction over GCR; that seems like a small subset of the community. So I expect various social effects, making a bunch more money, and AI being really cool, interesting, and fast-moving are probably a bigger deal than x-risk compellingness simpliciter. Or at least they have had a bigger effect on my choices!
But insufficiently successful talent development / salience / comms is probably the biggest thing, I agree.
Yup! The highest-level plan is in Kevin Esvelt's "Delay, Detect, Defend": use access controls and regulation to delay worst-case pandemics, build a nucleic acid observatory and other tools to detect the genetic sequences of potential superpandemic agents, and defend by hardening the world against biological attacks.
As per DDD, the basic picture is: IMO "delay" has so far basically failed, but "detect" has been fairly successful (though incompletely). Most of the important work now needs to happen, quickly, on the "defend" side of things.
There are a lot more details on this, and the biosecurity community now has really good ideas about how to develop and distribute effective PPE and rapidly scale environmental defenses. There's also now interest in developing small-molecule countermeasures that can stop pandemics early but are general enough to stop a lot of different kinds of biological attacks. A lot of this is bottlenecked by things like developing industrial-scale capacity for defense production or solving logistics around supply-chain robustness and PPE distribution. Happy to chat more details or put you in touch with people better suited than me if it's relevant to your planning.
Though contra Rebecca, I have not used my AI workflow on my quick takes; I must just have that silvery Bing voice 😊