In the past 10 years, Coefficient Giving (formerly Open Philanthropy) has funded dozens of projects doing important work related to AI safety / navigating transformative AI. And yet, perhaps most activities that would improve expected outcomes from transformative AI have no significant project pushing them forward, let alone multiple.
This is mainly because we and other funders in the space don't receive promising applications for most desirable activities, and so massive gaps in the ecosystem remain, for example:[1]
- Policy development, advising, and advocacy in many important but currently neglected countries.
- Projects to develop and advocate for better model specifications.
- Projects to fill key gaps in AI infosecurity, e.g. software to allow the ~entire frontier AI model development and deployment workflow to run in a confidential computing environment, or router software that would improve companies' ability to detect and block weight exfiltration attempts.
- Projects developing and deploying new tools for detecting and monitoring dangerous AI behaviors in the wild.
- Projects to build better de-escalation mechanisms for companies and countries, a la Catalink.
- Projects to build a more robust incident tracking and reporting ecosystem.
- Projects to build and promote new technologies to allow verification of international AI treaties.
- …and many more.
To fill such gaps, we sometimes engage in "active grantmaking," which consists of (a) scoping a project or organization that would fill a critical gap in the space, (b) headhunting a suitable founder for that organization, and (c) convincing them to take our funding to build the new project/organization (reshaped around their strengths and strategic perspective).
We've done this multiple times in the past, to (I think) great success, but it requires much more staff time than "passive grantmaking" (investigating grantee renewals and incoming applications).
We want to dramatically scale up our active grantmaking work, to fill the most important gaps in the ecosystem as quickly as possible, and to do so we need to hire a lot more staff.
One of the critical roles for this initiative is an "Active Grantmaking Support" Recruiter, who will help our program teams (e.g. the AI governance and policy team, which I lead) headhunt founders for new projects. We hope to hire multiple people for this role, and we expect the bottleneck will be getting sufficiently qualified applicants to apply.
So please apply before the deadline on December 7th, and help us fill critical gaps in the AI safety ecosystem! (Note also the $5000 reward for successful referrals.)
(And if you’re interested in getting funding to work on those gaps, you can submit a brief Expression of Interest here.)
Notes
To be clear, some of the items on this list see some ongoing activity, but not nearly at the scale and sophistication I think is desirable. ↩︎
