Researching donation opportunities. Previously: ailabwatch.org.
Thanks. I'm somewhat glad to hear this.
One crux is that I'm worried that broad field-building mostly recruits people to work on stuff like "are AIs conscious" and "how can we improve short-term AI welfare" rather than "how can we do digital-minds stuff to improve what the von Neumann probes tile the universe with." So the field-building feels approximately zero-value to me — I doubt you'll be able to steer people toward the important stuff in the future.
A smaller crux is that I'm worried about lab-facing work similarly being poorly aimed.
I endorse Longview's Frontier AI Fund; I think it'll give to high-marginal-EV AI safety c3s.
I do not endorse Longview's Digital Sentience Fund. (This view is weakly held. I haven't really engaged.) I expect it'll fund misc empirical and philosophical "digital sentience" work plus unfocused field-building — not backchaining from averting AI takeover or making the long-term future go well conditional on no AI takeover. I feel only barely positive about that. (I feel excited about theoretical work like this.)
$500M+/year in GCR spending
Wait, how much is it? https://www.openphilanthropy.org/grants/page/4/?q&focus-area%5B0%5D=global-catastrophic-risks&yr%5B0%5D=2025&sort=high-to-low&view-list=true lists $240M in 2025 so far.
I have a decent understanding of some of the space. I feel good about marginal c4 money for AIPN and SAIP. (I believe AIPN now has funding for most of 2026, but I still feel good about marginal funding.)
There are opportunities to donate to politicians and PACs which seem 5x as impactful as the best c4s. These are (1) more complicated and (2) public. If you're interested in donating ≥$20K to these, DM me. This is only for US permanent residents.
I mostly agree with the core claim. Here's how I'd put related points:
I haven't read all of the relevant stuff in a long time, but my impression is that Bio/Chem High is about uplifting novices and Critical is about uplifting experts. See PF below. Also note OpenAI said Deep Research was safe; it's ChatGPT Agent and GPT-5 which it said required safeguards.