Part of Marginal Funding Week 2025
What would effective charities actually do with your money?

What is the ECF?

Longview operates a public fund: the Emerging Challenges Fund (ECF). It is focused on global catastrophic risks from emerging technology such as AI. We recently released our 2025 ECF Annual Report, which provides a top-level overview of the fund and the grants made from the $1.5M contributed by 2,171 donors in 2025.

Through the ECF, we offer anyone the opportunity to contribute to a pooled fund that will be disbursed by our expert grantmakers. Our grantmakers can track the evolving risk and funding landscapes in real time to fill strategic gaps, launch new projects and requests for proposals backed by a pre-capitalized fund, and move on time-sensitive opportunities within days. By contributing to the ECF, you can support impactful organizations without needing to investigate individual organizations’ budgets, track records, and counterfactual likelihood of funding.

Consider donating to the ECF today

What will the ECF do with marginal funding?

Longview will allocate marginal ECF funding to organizations we investigate in 2026. To understand our worldview and the sort of grants we typically make, you can read our 2025 ECF Annual Report. In general, our investigations focus on the projects’ impact on reducing global catastrophic risks. For ECF grants, we consider two additional tests:

  1. Does the project have a legible theory of change? ECF grantees must have a compelling, transparent, and public case for how their activities will have an impact that appeals to a wide range of donors.
  2. Will the project benefit from diverse funding? Certain organizations sometimes benefit from the support of the ECF’s 2,000+ individual donors when demonstrating their independence from major funders and industry actors. ECF grantees often, though not always, pass this test. 

In our 2025 ECF Annual Report, we describe the ECF’s six 2025 grants. The ECF supported organizations advancing policy and research. On the policy side, grantees worked to shape frontier AI governance in the U.S. and Europe, including by building government capacity through talent pipelines and facilitating discussions on AI and arms control between the U.S. and China. On the research side, we funded groups evaluating AI system capabilities, their potential to accelerate biological misuse, and broader societal implications of rapid AI progress.

Longview in 2025

Longview has directed over $50 million in 2025. Across our private and public funds and bespoke advisory services, we’ve supported more than 50 projects aimed at reducing catastrophic and existential risks. We expect to scale further in 2026. 

In 2025, our AI team supported underfunded priority areas such as U.S. policy advocacy and helped catalyze emerging fields such as digital sentience and hardware-enabled mechanisms. Our nuclear team organized a summit for Giving Pledgers and peer philanthropists and co-launched a $10M Nuclear Consortium to help revitalize the field of nuclear philanthropy. Both teams expanded in late 2025, and we now have six AI grantmakers and four nuclear grantmakers going into 2026.

For donors giving over $100K: Our primary offering is access to our private Frontier AI Fund, Digital Sentience Fund, and Nuclear Weapons Policy Fund. Our private fund reports are sent directly to donors rather than distributed publicly, allowing us to use those funds to support higher-variance, ambitious, or confidential projects. We also help major donors develop personalized giving strategies, offer access to our grantmakers, and provide compliance and due diligence support.

We strongly encourage donors contributing over $100K to email advising@longview.org.

Comments (6)

I endorse Longview's Frontier AI Fund; I think it'll give to high-marginal-EV AI safety c3s.

I do not endorse Longview's Digital Sentience Fund. (This view is weakly held. I haven't really engaged.) I expect it'll fund misc empirical and philosophical "digital sentience" work plus unfocused field-building — not backchaining from averting AI takeover or making the long-term future go well conditional on no AI takeover. I feel only barely positive about that. (I feel excited about theoretical work like this.)

I'm a grantmaker at Longview and manage the Digital Sentience Fund—thought I'd share my thinking here: “backchaining from… making the long-term future go well conditional on no AI takeover” is my goal with the fund (with the restriction of being related to the wellbeing of AIs in a somewhat direct way), though we might disagree on how that’s best achieved through funding. Specifically, the things you’re excited about would probably be toward the top of the list of things I’m excited about, but I also think broader empirical and philosophical work and field-building are some of the best ways to get there.
 

  • Relative to Lukas’s post, I’d say my goals are, in order, 5 and 2, then 4, then 3 and 1. An additional goal is improving the design of models that might be sticky over the long term.
  • All of the things on those lists require technical and policy researchers, engineers, lawyers, etc. that basically don’t currently exist in large numbers, so I do think fairly broad field building is important. There are pretty tight limits to how targeted field building can be: you can target, e.g., ML versus law, and you can suggest topics, but you’re basically just creating new specialists in the fields you pick who then pursue the topics you want.
    • Our recent funding opportunities targeted ML and neuroscience, so more around understanding minds than things like the role in society and trade. I’d guess that we repeat this but also run one or add in suggested topics that focus more on law, trade, etc.
  • Realistically addressing many of the things on those lists likely also requires a mature field of understanding AI minds, so I think empirical and philosophical work on sentience feeds into it.
  • To get concrete, the recent distribution of funds and donations we’ve advised on (which is a decent approximation of where things go) looks like ~50% field building, of which maybe 10% is on things like the role of AI minds in society (including, e.g., trade) vs understanding AI minds; 40% research, of which maybe 25% is on the role of AI minds in society; a bit more than 10% lab-facing work; and 10% other miscellaneous things like communications and preparatory policy work. Generally the things I’m most excited to grow are lab-facing work and work on the role of AI minds in society.

(I also care about “averting AI takeover” and factor that in, though it’s not the main goal and gets less weight.)

Thanks. I'm somewhat glad to hear this.

One crux is that I'm worried that broad field-building mostly recruits people to work on stuff like "are AIs conscious" and "how can we improve short-term AI welfare" rather than "how can we do digital-minds stuff to improve what the von Neumann probes tile the universe with." So the field-building feels approximately zero-value to me — I doubt you'll be able to steer people toward the important stuff in the future.

A smaller crux is that I'm worried about lab-facing work similarly being poorly aimed.

Oh, clarification: it's very possible that there aren't great grant opportunities by my lights. It's not like I'm aware of great opportunities that the other Zach isn't funding. I should have focused more on expected grants than Zach's process.

I find this distinction kind of odd. If we care about what digital minds we produce in the future, what should we be doing now?

I expect that what minds we build in large numbers in the future will largely depend on how we answer a political question. The best way to prepare now for influencing how we as a society answer that question (in a positive way) is to build up a community with a reputation for good research, figure out the most important cruxes and what we should say about them, create a better understanding of what we should actually be aiming for, initiate valuable relationships with potential stakeholders based on mutual respect and trust, create basic norms about human-AI relationships, and so on. To me, that looks like engaging with whether near-future AIs are conscious (or have other morally important traits) and working with stakeholders to figure out what policies make sense at what times.

Though I would have thought the posts you highlighted as work you're more optimistic about fit squarely within that project, so maybe I'm misunderstanding you.

I'm not sure what we should be doing now! But I expect that people can make progress if they backchain from the von Neumann probes, whereas my impression is that most people entering the "digital sentience" space never think about the von Neumann probes.
