
TL;DR

  • The AI safety ecosystem is about to receive billions in new funding but lacks the grantmaker capacity, speed, and infrastructure to deploy it well — especially for smaller, high-impact opportunities.
  • We're building grantmaking.ai, a comprehensive public database of x-risk AI safety funding opportunities with a trust and signal layer on top — think Crunchbase meets GiveWell for AI safety.
  • To test and seed the platform, we're launching a $1M grant round focused on rapid distribution of $5K–$50K grants. More details on the round coming soon.
  • We're looking to connect with funders who want better tools to find, evaluate, and fund AI safety opportunities. Reach out at hi@grantmaking.ai or book a call here.

The Problem

Billions of dollars are going to flow into AI safety: from AI lab employees with significant equity events, from new donors motivated by existential risk, and from foundations scaling up. Sophie Kim's "The Anthropic IPO Is Coming. We Aren't Ready for It" covers this well, and Julian Hazell's "What it's like to be an AI safety grantmaker" tells us there simply aren't enough grantmakers.

Every new funder who wants to deploy capital responsibly hits the same bottlenecks:

  • Discovery is hard. Most opportunities are behind insider networks, dispersed across private emails and Slack channels.
  • Evaluation is repetitive. Every funder reconstructs the same picture from scratch because there's no shared, structured source of truth.
  • Information is stale. You can't tell if an org still needs money without emailing them directly.
  • Trusted signal is invisible. What experienced grantmakers actually think lives in private conversations that never reach new funders.

On the other side, grantees maintain parallel email threads explaining the same information to every potential funder. They apply to five different funding sources with slightly different forms asking the same questions. And nobody can tell from the outside whether a project is already fully funded or desperately needs $30K to survive another quarter.

The established funders (Coefficient Giving, Longview, SFF, etc.) will likely absorb much of the incoming capital, and they are especially well-suited to larger grants, repeat grantees, and opportunities already in their networks. That's great. But there is a massive long tail of smaller projects, independent researchers, and early-stage work that falls below their threshold or outside their pipeline. That's the gap we're trying to fill.

What We're Building

grantmaking.ai is a comprehensive, public database of AI safety funding opportunities (organizations, independent researchers, projects, and funds), combined with a trust and signal layer to help funders find what's worth supporting.

A useful framing: Crunchbase for AI safety. Every entity in the ecosystem gets a living profile on the platform. We use public sources to populate initial profiles, and then the people behind those profiles can claim and update them with private information such as current runway, active fundraising goals, and application status with other funders.

On top of the data sits a signal layer: experienced reviewers publish their top picks with explanations. Community members can comment, endorse, and discuss. Over time, the platform surfaces which projects have broad support, which are controversial, and which are flying under the radar. The goal is that a new funder can land on the platform and quickly go from "I have $500K to give away" to "here are the 15 projects that multiple people I trust think are excellent and still need funding."

Who This Is For

Funders (our primary focus): individuals with $100K–$10M+ to deploy into AI safety who don't have insider access to deal flow, or who simply want to make more informed decisions faster. We're especially looking for funders who are excited about the idea of open, public data and evaluation. If you want to understand the landscape, form your own views, and contribute to a shared resource that helps the whole ecosystem, please reach out; we'd love to chat.

Grantees: organizations and individuals seeking funding. Maintain your information once, in one place. Share your profile link with potential funders instead of answering the same due diligence questions repeatedly. Think of it as your funding-focused professional profile.

Reviewers and domain experts: if you have opinions about what should be funded in AI safety, this is a place to make those opinions visible, build a public track record, and actually influence where money flows.

The Initial $1M Grant Round

To test and seed the platform, we're launching an initial $1M grant round focused on existential AI safety. We're excited to partner with Manifund to distribute the funds; all other details (application process, reviewer panel, timeline) will be shared soon. Join our newsletter to get notified directly!

Here's the shape of it:

Speed is the priority. Big funders take months to fund projects. We want to get from application to funding decision in weeks. We believe that with the volume of capital entering the ecosystem, funding quickly, while being careful about potential harms, is important.

Small grants that big funders can't serve. We're funding $5K–$50K grants, the range that is often hard for large institutional funders to evaluate cost-effectively but can be transformative for an independent researcher, an early-stage project, or someone who needs conference travel, compute credits, or a few months of runway.

Open and transparent. All applications, reviews, and funding decisions will be public. This creates accountability and lets the community learn from what gets funded and why.

Ongoing, not one-shot. If the first round goes well, we aim to distribute approximately $1M/month on a rolling basis. We're actively looking to bring in additional funders who want to deploy capital through the platform.

Customizable for funders. If you're a funder with your own preferences we can help you set up your own grant round through our platform, connect you with reviewers, and handle the logistics. We're offering hands-on onboarding and customization for the right partnerships.

Just to be clear, the grant round is not the end product; it is our first test of whether a public funding database plus reviewer signal can help money move faster and better. We're open to your feedback, and will iterate on future rounds and distributions.

Long-Term Vision

The grant round is how we get started. The bigger picture is building infrastructure that makes the entire AI safety funding ecosystem work better as it scales by orders of magnitude.

A public coordination layer. Right now, every funder independently reconstructs the same map of the ecosystem. Every grantee independently pitches the same story to every funder. A shared, public, structured data layer eliminates enormous amounts of duplicated work and lets funders, grantees, and evaluators build on each other's contributions rather than starting from scratch.

A proving ground for new grantmakers. The ecosystem's biggest bottleneck isn't money, it's grantmaker capacity. Our platform lets community members build a visible track record of grant evaluation. Someone who consistently writes thoughtful, well-reasoned reviews can demonstrate their judgment publicly, potentially graduating into funded regranting roles. We're exploring the idea of a formal grantmaker talent development cohort, where participants learn evaluation skills while building their public track record on the platform.

Infrastructure for AI-assisted grantmaking. This is something that excites us and that we think the ecosystem is underinvesting in. Within the next year or two, AI agents will be capable of doing substantial work around gathering, cleaning, surfacing, and even helping to evaluate grant-relevant information.

Imagine a world where every AI safety org and independent researcher has an AI assistant that continuously updates their public profile with progress, needs, and milestones, with much less human time and effort. Imagine our platform using AI to triage incoming applications, surface relevant context from across the ecosystem, flag when an org's runway is getting low, or identify promising projects that match a specific funder's priorities. The bottleneck for that future isn't the AI capability, it's having the structured data, community, and infrastructure in one place for AI to work with.

We're building the data layer and coordination infrastructure now so that when AI-assisted grantmaking becomes feasible, there's something for it to plug into.

Who We Are

grantmaking.ai is built by a small team motivated by the belief that better infrastructure can meaningfully increase the impact of AI safety funding:

  • Matt Brooks - Product and engineering lead. Matt founded and runs a B2B tech startup and brings product development and full-stack engineering experience to the project.
  • Anchor Funder - Funding and strategy. An experienced individual AI safety donor providing the anchor funding for the platform and initial grant round.
  • Melissa Samworth - UX/UI design lead. Melissa has worked on various EA projects, including Ought and AISafety.com.
  • Austin Chen — Advising. Austin is the founder of Manifund, the regranting and impact funding platform, and is advising on regrantor coordination, community engagement, and launch strategy.

How You Can Help

If you're a funder or potential funder: We'd love to talk. Whether you have $100K or $10M to deploy, we want to understand what you need to make confident funding decisions and build the platform around that. We can help you set up your own grant round, connect you with trusted reviewers, or simply give you a better way to explore the landscape. Reach out at hi@grantmaking.ai or book a call here.

If you're a grantee - an org or individual seeking funding: We'll be opening applications for the $1M grant round soon. In the meantime, check out the platform at grantmaking.ai and reach out if you want to claim or update your profile early. When applications open, you'll be able to apply by creating a project and submitting a lightweight application — if you've applied to SFF, you'll be able to copy and paste your application.

If you just think this should exist: Share this post. Introduce us to people who should know about it. Comment with feedback, criticism, or ideas. We're building in the open and we want to hear what the community thinks.
