
SUMMARY: 

ALLFED is launching an emergency appeal on the EA Forum due to a serious funding shortfall. Without new support, ALLFED will be forced to cut half our budget in the coming months, drastically reducing our capacity to help build global food system resilience for catastrophic scenarios like nuclear winter, a severe pandemic, or infrastructure breakdown. ALLFED is seeking $800,000 over the course of 2025 to sustain our team, continue policy-relevant research, and move forward with pilot projects that could save lives in a catastrophe. As funding priorities shift toward AI safety, we believe resilient food solutions remain a highly cost-effective way to protect the future. If you’re able to support or share this appeal, please visit allfed.info/donate.

Donate to ALLFED

FULL ARTICLE:

I (David Denkenberger), ALLFED’s co-founder, am writing alongside two of my teammates to ask for your support. This is the first time in the Alliance to Feed the Earth in Disasters’ (ALLFED’s) 8-year existence that we have reached out on the EA Forum with a direct funding appeal outside of Marginal Funding Week and our annual updates. I am doing so because ALLFED’s funding situation is serious, and because so much of ALLFED’s progress to date has been made possible through the support, feedback, and collaboration of the EA community.

Read our funding appeal

At ALLFED, we are deeply grateful to all our supporters, including the Survival and Flourishing Fund, which has provided the majority of our funding for years. At the end of 2024, we learned we would be receiving far less support than expected due to a shift in SFF’s strategic priorities toward AI safety.

Without additional funding, ALLFED will need to shrink. I believe the marginal cost-effectiveness of resilience work, both for improving the future and for saving lives, is competitive with AI safety, even if timelines are short, because of potential AI-induced catastrophes. That is why we are asking people to donate to this emergency appeal today.

ALLFED’s funding situation

Without new funding, ALLFED will need to cut half of our budget in the coming months, which will mean significant cutbacks to our team in June. This will roughly halve our capacity to produce research, support governments, and develop practical interventions that could reduce the risk of global catastrophic food system failure.

The case for ALLFED’s work, and why we think maintaining full current capacity is valuable

Today we are launching an urgent 2025 Appeal to ask for your help. With your support, we can continue advancing this work and protect the progress we have made, moving from research and planning to real-world applications that could shape global preparedness when it is most needed.

ALLFED is currently the only organization in the world focused exclusively on preventing global food system collapse in scenarios such as nuclear winter, engineered pandemics, or a widespread loss of infrastructure. 

In 2024 alone, we submitted 16 academic papers, advised policymakers across several continents, and supported governments including the UK, Argentina, and the Swedish Presidency of the Nordic Council in preparing for catastrophic food shocks such as abrupt sunlight reduction scenarios (ASRSs). This week we published our 2024 Annual Report where you can find out more.

Many of our current projects, such as advancing resilient food technologies and integrating them into policy frameworks, are in a position to scale and deliver real-world impact. With stable funding, we can transition this work from research and planning into practical applications, including pilot projects and expanded policy outreach. These steps will help bring resilient food solutions into the hands of governments, communities, and international actors. You can find out more here about some of the pilots ALLFED would like to conduct.

How this connects to AI and other risks

The shift in funding toward AI safety is understandable, and we agree that AI safety is important. But AI also exacerbates other catastrophic risks. For example: 

  • AI may heighten nuclear tensions, accelerate arms races, and increase miscalculation risks.
  • AI could lower barriers to engineering pandemics, worsening biosecurity threats.
  • AI-enabled cyberattacks could disrupt electricity and industry, triggering severe food system failures.

This makes the case for diversified funding across multiple risk areas. Continued investment in resilient food solutions is a critical component of mitigating cascading global risks, including but not limited to those caused by unsafe AI. For example, AI risk experts have suggested investing in redundancy in critical infrastructure and rapid repair plans to reduce AI risk, areas ALLFED has been working on for years.

Now more than ever, there is a need to prepare for abrupt global cooling resulting from a nuclear exchange, or a collapse of critical infrastructure from a pandemic or cyberattacks. Our work on resilient food solutions could increase the chances that a global catastrophic food system failure does not become the secondary disaster that collapses civilization in the wake of such an event.

What we’re asking for

We are seeking to raise $800,000 in grants and gifts in 2025. This funding would allow us to:

  • Continue producing policy-relevant research on extreme food system risks at full strength
  • Start moving forward with planned pilot projects to test scalable resilient food solutions
  • Deepen collaborations with governments and institutions
  • Sustain our team and avoid shrinking in June

The annual level of funding we need at ALLFED to remain operationally strong represents less than 1% of the EA community’s current support for AI safety. I’ve always said that AI safety should get more total money than resilience, but I believe that at recent levels of funding for resilience, the marginal cost effectiveness is still competitive with AI safety.

We welcome support at any level. 

We invite you to give at allfed.info/donate, share our appeal with your networks, or get in touch at appeals@allfed.info if you have any questions or would like to discuss a larger gift or funding opportunity.

Thank you for reading. If you’re able, now is the time to help us bridge this gap and keep this work moving forward. 

We are grateful for everything this community has helped us achieve so far, and hopeful for what we can do next, together.

Donate to ALLFED
Comments (14)



Sorry to hear man. I tried to reach out to someone at OP a few months ago when I heard about your funding difficulties but I got ignored :(. Anyways, donated $100 and made a twitter thread here

This also made it more salient to me the need to become more independent of larger donors, so I'll be messaging some Sentinel readers to get more paid subscriptions

Thanks so much!

Thanks for all your efforts. I think donating to ALLFED saves human lives roughly as cost-effectively as GiveWell's top charities[1], so I would say it is a good opportunity for people supporting global health and development interventions[2].

  1. ^

    I estimated that policy advocacy to increase resilience to global catastrophic food shocks is 4.08 times as cost-effective as GiveWell's top charities.

  2. ^

    Although I believe the best animal welfare interventions are way more cost-effective, and I do not know whether saving human lives is beneficial or harmful accounting for effects on animals.

cwa

I'm really sorry to hear about this --- I think ALLFED's work fills a really important niche. I donated and would encourage others to do so as well!

Thanks for everything you all do! Have been consistently impressed with ALLFED's work and its importance. Donated $1000 NZD, hope it helps somewhere. Good luck with the triaging

Seems like an important organization to keep going. I just donated ~$6K. I'm considering donating more, but I'd be quite curious to know first whether there's any reason in particular that OpenPhil isn't funding this?

Thank you both for your interest. OpenPhil recently responded to us on this, here's what we know: 

Different OpenPhil teams have discussed ALLFED's appeal for support to see if there would be a fit, and they have concluded that there currently isn't one for ALLFED.

To clarify further, ALLFED has never received any grants from Open Philanthropy. No direct reason for this has been offered.

Thank you very much for your donation!

Kiwis who want to contribute tax-deductibly can make a donation via EA NZ's Gift Trust account.
(Note that you need to select ‘Allocate my donation to’ → ‘One charity or project supported by this Gift Account’ → ‘ALLFED’ to ensure your donation is allocated correctly)

Just made a small donation myself :)
