Last month, Anthropic announced Mythos Preview, the most powerful cyberweapon in history, capable of finding and exploiting zero-day vulnerabilities in every major operating system and web browser. Meanwhile, many frontier AI company employees increasingly expect full automation of AI R&D in the next year or two, followed by the rapid automation of thousands of other important tasks and jobs.
This pace of technological change is unprecedented, and the world is not prepared. Very little of the commercial, government, and nonprofit infrastructure we need to respond to these transformative changes has been built.
To meet this challenge, dozens of philanthropists are hoping to deploy tens of billions of dollars in philanthropy and impact investments in AI safety and governance in the next several years alone.[1] But most of this capital is bottlenecked on a tiny number of grant and investment advisors who can identify and vet specific funding opportunities, and create new ones by headhunting project founders.
That's why the AI teams at Coefficient Giving (CG) are hiring grantmakers and senior generalists, and why I think the next people we hire will be among the highest-leverage people in AI safety.[2] Please apply here.
As a new AI grantmaker at CG,[3] you'd likely move >$30 million, and plausibly >$100 million, in your first year, funding dozens or hundreds of people to work full-time on projects we think will address catastrophic risks from AI. Because grant investigation capacity is tight, hiring one fewer grantmaker usually means those millions will just sit in an account for another year rather than being deployed to useful ends. And when a strong candidate turns down a CG offer, the result is often not "a slightly-less-good grantmaker"; it's just one fewer grantmaker. We routinely close rounds with fewer hires than we'd planned for.
We fund a mix of:
1. proposals that come our way via a Request for Proposals or otherwise, often with some creative steering and reshaping by the investigator
2. renewals of past grantees, with a special focus on ambitiously scaling up the best performers
3. strategy-driven creation of new grantees. We do this by (a) identifying a critical gap in the ecosystem, (b) headhunting a strong founder for a new project that would address the gap, and (c) helping them spin up the new project quickly and ambitiously. There are dozens of new projects we think need to be spun up, e.g. (i) a high-credibility AI company scorecard project, (ii) projects to build and advocate for better chain-of-thought monitoring or agreement-verification technology, (iii) additional specialized third-party auditors, and many more.
As our AI timelines shorten, we've shifted more focus to (3) since many critical gaps remain that we haven't gotten good applications for. We've had strong success with this so far, but the strategy work and headhunting of (3) requires far more staff capacity per dollar moved than (1) or (2) do, so we need to grow our grantmaker capacity as quickly as we can.[4] (Also, to make this shift we had to close this RFP, but we'd rather have the staff capacity to do both!)
CG is an excellent place to do this work, because we have (among other things):
- Resources. We expect to move in the neighborhood of $1 billion in AI grantmaking from Good Ventures (our primary funding partner) in 2026, plus more from dozens of other AI safety funders we are advising, some of which have billions in philanthropic capacity.
- Experience. Our staff have more AI safety grantmaking experience than anyone else. We've made hundreds of AI grants since 2015, and we benefit from over a decade of learning via (a) watching what impact those grants did or didn't have, and (b) special funder access to private information about grantees and grantee impacts.
- Strong colleagues. I won't belabor this, but CG is a talent-dense organization full of thoughtful, capable, and deeply kind people, all of whom are working toward common goals.
Please apply here, and help address a key bottleneck to helping the world prepare for the arrival of transformative AI. We recently extended the application deadline to May 24 because we haven't yet received enough applications, so your application could genuinely change how many people we're able to hire!
This post is written from an AI team's perspective. CG's Biosecurity & Pandemic Preparedness team is also hiring, but I'll let people closer to that work speak to it. See e.g. here. ↩︎
For the rest of this post I'll focus on grantmaking rather than grantmaking and impact investing, since CG advises more grants than impact investments. ↩︎
Available founders are another bottleneck for (3), but grantmaker capacity can be converted into additional founders by spending more time on founder search, and much of our success with (3) so far has come from finding people outside our immediate networks who have been successful at building large new grantees addressing critical gaps. ↩︎

Strong upvoted! Great post, definitely agree more people should consider transitioning into grantmaking. Especially since research is so power-law distributed, I think many current technical / governance researchers would have much higher counterfactual impact deploying tens of millions of dollars as opposed to e.g. writing another paper. Downstream of a similar post I wrote, I'm currently working on a project to address the grantmaker bottleneck. Would be keen to connect! Have DM'd.
Also, for any grantmakers reading this, please reach out to me if you're interested in e.g. helping create a BlueDot AI Safety Grantmaking Fundamentals course curriculum or doing mentorship!
Why? Shouldn't you make an offer to the runner-up?
In some cases we do, but we have a high bar for overall "fit for the role" and don't hire people below it, so we often end a hiring round with too few candidates above our bar (as far as we can assess at that time, with limited investment by both us and the candidate). We maintain this high bar for several reasons, one of which is that managers' time is also scarce and carries high opportunity cost.