TL;DR: Differential technological progress is underrated, especially the part where we speed up the good tech. If you're interested in helping change that, just book me.
As AI systems handle more of our work, the careers and skills most valuable for reducing x-risk might start shifting. I suspect many of the roles around building and deploying AI applications could grow to productively absorb a large fraction of the people working on reducing x-risk, but that most of those people haven't even considered this as a possibility. Two months ago I decided it would be my full-time role to change that.
While I'm still trying to discover the shape of the talent gap that needs to be filled, I think one of the things we could use more of is builders. People who can quickly build product prototypes, put them in front of users, and do their damnedest to iterate toward something people will love.
Being a builder has always been promising. But I suspect the coming explosion of AI capabilities will radically expand the space of what can be built, so we'll need many more people threading their way through this space and identifying the AI applications that could make a difference for how things go this century.
How I got here
Four days before EAG London 2025, I re-read Forethought's article “AI Tools for Existential Security” for what must have been the third or fifth time. I was preparing to meet with Lizka, the article's coauthor, who I admired for her writing and EA Forum work, and I wanted to have concrete useful things to ask her.
One quote from the Forethought piece caught my attention for its unassuming brazenness:
[W]e think a significant fraction — perhaps around 30% — of people in the existential risk field should be making this a focus today. [emphasis mine]
The thing they want more people to focus on is accelerating the development of AI applications aimed at reducing existential risk. Think "AI surveillance of biological threat actors," "automated negotiation between nation-states," or "automating alignment research."
The best concise reconstruction of their argument I've come up with is:
- In the near future, AI will burst open the space of what's technologically feasible
- This will lead to an abundance of risk-generating capabilities and applications, as well as risk-reducing ones
- Not all the highly promising risk-reducing technologies will be built in time to meaningfully reduce x-risk:
  - An accelerating technological frontier means it takes growing effort to explore and build the applications enabled by AI
  - We want risk-reducing technologies in place before risk-generating capabilities allow massive harm
- We can accelerate risk-reducing technologies, either by feeding better inputs to the AI development process (data, compute, algorithms) or by working on their application and deployment
The article discusses at some length the specific ways you could accelerate any given AI application, such as by collecting task-specific datasets or building better user interfaces. But to me, that raises the obvious question: "datasets for what?"
What are the specific applications we should be supporting with our datasets, compute, and technical talent? When I look around, there aren't many AI applications, if any, that I could wholeheartedly support myself.
My current sense is that we're severely lacking in builders who can figure out the shape of what's needed—people who can help uncover the needs out there and identify AI applications that people would enthusiastically adopt and which could meaningfully impact existential risk. There's a small but growing community of them, but I think we need many, many more.
So beyond the possibility of a massive talent gap, why did I decide to work on this? For one, I love the ethos this work gestures at. AI models are these wondrously weird entities, and I want to see and understand what can be done with them.
I also care deeply that my work not be adversarial to the aims and beliefs of reasonable people who've thought about the same topics[1]. Sometimes we do need to take a stand on important issues, but I personally can't deal with the anguish of constantly questioning whether I picked the right side—or whether my confidence is the same confidence every utopian before me felt in their misguided visions.
So I decided to quit a few contractor-style projects I was working on and devote myself full-time to getting 30% of the x-risk community to work on AI tools for existential security, or at least to discovering why that's not worth doing.
What I do
To no one's surprise, mobilizing a sizable chunk of the x-risk community toward a budding new area of work is actually kind of hard.
Here's a quick recap of some things I've been doing:
- Keeping a few of my previous projects: building an AI automation for background checks on DNA order customers, and helping LEEP use AI search agents for desk research and for easy access to their sprawling Google Drive
- Meeting with multiple people who are starting to build exciting things with AI, trying to figure out how I could best help them
- Meeting with people at organizations doing valuable work, asking how they think they could use AI better, and trying to figure out how I could best help them
- Creating content that would be useful for people trying to automate things at work
- Drafting what aspires to be a perfectly polished manifesto[2] on why and how the EA community should build more capacity to apply AI to the world's most pressing problems
Some things I've discovered:
- Putting the bar at "every EA who reads this will immediately drop what they're doing and join me" is not conducive to delivering within a 2-week deadline—and it's also kinda stressful.
- "How can I get more like you?", while being a fun question to ask people doing exciting work, doesn't usually lead to concrete answers for how I could actually get more like them
- For any individual EA, it's hard to pinpoint specific ways they could be using AI better to make themselves visibly more productive. For a while, the closest I had to a genuinely useful recommendation was "become addicted to AI Twitter, but try to avoid the slop"
- I'm hopeful this might be changing a bit with Peter Hartree's newsletter. It's incredibly concise, packed with useful tips, and stripped of any pretension. Since its launch a few days ago it's been my go-to recommendation for people wanting to take small concrete steps toward using AI better
- Workflow automation platforms like Zapier and n8n really suck. They pretend to be user-friendly but still demand that you learn the intricacies of their platform, only now without easy access to the underlying source code and with much worse documentation. Claude Code for everyone is the way to go (see the sketch just after this list)
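To make that last point concrete, here's a minimal sketch of the kind of "new RSS item → Slack message" pipeline these platforms wrap in a drag-and-drop editor, written as a plain Python script. The feed URL and Slack webhook URL are hypothetical placeholders you'd swap for your own; this is an illustration of the shape of the thing, not a production tool.

```python
# Poll an RSS feed and forward new items to a Slack channel via an incoming
# webhook. Stdlib only, so there's no platform to learn: the whole "workflow"
# is readable, version-controllable code.

import json
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed.xml"  # hypothetical feed
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical webhook
POLL_SECONDS = 600


def fetch_items(url: str) -> list[tuple[str, str]]:
    """Return (title, link) pairs from a basic RSS 2.0 feed."""
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items


def post_to_slack(text: str) -> None:
    """Send a plain-text message to Slack through the incoming webhook."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        SLACK_WEBHOOK, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)


def main() -> None:
    seen: set[str] = set()  # links we've already forwarded this session
    while True:
        for title, link in fetch_items(FEED_URL):
            if link not in seen:
                seen.add(link)
                post_to_slack(f"New item: {title} {link}")
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    main()
```

A script like this is something Claude Code can write or rewrite in a single prompt, and you can read, debug, and extend every line of it, which is roughly my point.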
While I'm not entirely proud of my output over the last month, I still think this area of work is incredibly promising and I'm as energized as ever to keep working on it.
I also think I'm gradually moving toward work that suits my strengths better and where I might actually be helpful:
- Devoting myself to supporting people and communities I appreciate and admire. I love delivering things to people I like. One of the reasons I could juggle 5 contractor-like roles at the same time earlier this year was that I really loved my managers and the people I was working with, and that drove me to do more and better work so as not to disappoint them. I gathered a small group of AI builders I'm unusually enthusiastic about into a Slack channel, and I hope that lets me discover many mundane and frequent ways I can bring them value
- Writing quick posts that require little additional thought (like this one!). I'm extremely slow at thinking through writing, but decently good at putting my initial thoughts on paper. I plan to do more of the latter and less of the former going forward
Call to action
The only reason I started writing this post is the fervent hope that, after reading it, some of you might decide this is worthwhile work and want to join me.
As mentioned before, the rising tide of AI capabilities could open a massive space for defensive applications, and much of that space is still unexplored. If you like building things, I think there is little else you could be doing that would be more valuable than working on AI Tools for Existential Security.
If you like making things happen or caring for talented communities, I can heartily recommend becoming an AI builder-maker as well. It's still an open-ended, confusing area to work in, but there are so many exciting things you could make happen and so many exciting people you get to meet through this work.
When I was deciding whether to quit many of my projects to work on this, I stumbled upon this framing that made the decision easier:
- If I succeed, I may direct hundreds of people to do substantially more valuable work to reduce existential risk
- If I fail, I will have spent 6 months of my life learning from and with some of the people I find most exciting in the world
I was doing valuable work before this, but definitely nothing as exciting as what I'm doing right now.
If you'd like to get started doing similar work, you can just book me on my Calendly during practically any of my waking hours. I'm eager to meet you.
Thanks to Lizka and Owen for expanding my sense of what's worth doing and for making time when I impulsively decided to follow them to Oxford just to continue our conversation. Thanks also to Nic Kruus, Peter Hartree, and Tobias Haberli: you're all incredibly great, and I love having you around.
- ^
I previously quit an AI Governance role because I was increasingly uneasy with helping impose permanent restrictions on a technology whose impact and consequences we were just barely beginning to understand.
- ^
Damn Benjamin Todd and his precious writing setting unrealistic expectations for all of us