Edit 5th May 2025: We are now more funding constrained; see the updated post on this here.

We've heard that some people interpreted this post as meaning that EAIF thinks we don't need much more EA infrastructure. That's not what we meant to imply. Whether our funding is constrained at any given time is primarily a function of (1) our own available funding reserves and (2) the specific grant applications we receive. It is not a commentary on the overall vitality of the EA ecosystem or its potential to grow or get stronger: we believe there are many potentially promising projects which could contribute to this growth and strengthening, and we are excited to receive many more applications. We shared some ideas late last year about the applications we'd be especially keen to receive; EAIF is now more funding-constrained, but we have only slightly raised our bar and will still be able to fund strong applications.


The EA Infrastructure Fund doesn't currently have a significant need for more funding, and an increase in funding wouldn't change our immediate grantmaking decisions. That said, additional funding now could expand the scope of our grantmaking in the future.

EAIF currently has $3.3M in available funds. So far in 2024, EAIF has made grants worth $1.1M, and I expect this to reach around $1.4M by the end of the year.

Examples of grants we’ve made over 2024 include:

  • The Centre for Enabling EA Research and Learning (CEEALAR) - $184,000
  • EA Norway’s Annual Weekend Conference for the Norwegian EA Community - $9,500
  • Rethink Priorities Worldview Investigations Team, a 6-month project to enhance the cross-cause cost-effectiveness model - $168,867
  • Michael Dello-Iacovo, a 6-month salary to build an EA-aligned YouTube channel - $23,660
  • EA Brazil, a 12-month salary for a full-time and a part-time coordinator to support community growth and projects - $67,500


I expect EAIF’s grantmaking to increase over 2025, and it could increase significantly, for a number of reasons. 

  • Over the last few years, EAIF has mostly operated on a passive funding model and hasn't done major work to proactively solicit applications or find promising funding opportunities. We're looking to play a more active role in our grantmaking going forward - and if we're successful here, this could result in a significant increase in our grantmaking.
  • Currently, the vast majority of applications to EAIF are for smaller-scale projects and organisations - typically projects with 2 or fewer FTEs applying for less than $150k in funding. This is a change from EA Funds' inception, when grantmaking went exclusively to larger organisations in the EA space. In particular, as more organisations within the EA space look to diversify their sources of funding, EAIF could again play a more significant role in funding larger organisations.
  • Our funding bar is higher now than it was in previous years, and there are projects which EAIF funded in previous years that we would be unlikely to fund now. We're quite uncertain about where to set our funding bar - and it's plausible that we end up lowering it going forward.

My best guess is that EAIF will make $2.5M of grants in 2025, less than we currently have in available funds, and I think there's an 80% chance that this will be between $1M and $4M. If EAIF looks on track to grant more than ~$2.5M over 2025, we plan to let people know that we expect to have room for more funding.

Of course, additional funding would be welcome to help us build our reserves and provide flexibility to increase our grantmaking in 2025. But we wanted to transparently communicate that EAIF’s need is lower than it has been previously and lower than the need of other EA Funds like the Animal Welfare Fund and Long-Term Future Fund. 

We're truly grateful to the donors who helped us fill previous funding gaps and enabled us to continue making grants above our bar! And we're excited to continue supporting fantastic projects and applicants over the coming year.

Comments

Kudos for making this post! I think it's hard to notice when money would best be spent elsewhere, particularly when you do actually have a use for it, and I appreciate you being willing to share this.

Our funding bar is higher now than it was in previous years, and there are projects which EAIF funded in previous years that we would be unlikely to fund now.

Could you expand on why that's the case? Is the idea that you believe those projects are net negative, or that you would rather marginal donations go to animal welfare and the long term future instead of EA infrastructure?

I think it's a bit weird for donors who want to donate to EA infrastructure projects to see that initiatives like EA Poland are funding constrained while the EA Infrastructure fund isn't, and extra donations to the EAIF will likely counterfactually go to other cause areas.

Could you expand on why that's the case? Is the idea that you believe those projects are net negative, or that you would rather marginal donations go to animal welfare and the long term future instead of EA infrastructure?

In some cases there are projects that I or other fund managers think are net negative, but this is rare. More often, I think the things we decide against funding are net positive, but not competitive with funding opportunities outside the EA Infrastructure space (either the other EA Funds or more broadly).

I think it's a bit weird for donors who want to donate to EA infrastructure projects to see that initiatives like EA Poland are funding constrained while the EA Infrastructure fund isn't

I think it makes sense that there are projects which EAIF decides not to fund, and that other people will still be excited about funding (and in these cases I think it makes sense for people to consider donating to those projects directly). Could you elaborate a bit on what you find weird? 

and extra donations to the EAIF will likely counterfactually go to other cause areas

I don't think this is the case. Extra donations to EAIF will help us build up more reserves for granting out at a future date. But it's not the case that, if EAIF has more money than we think we can spend well at the moment, we'll then start donating it to other cause areas. I might have misunderstood you here?
