Nov 10 – Funding strategy week: a week for discussing funding diversification, when to donate, and other strategic questions.
Nov 17 – Marginal funding week
Nov 24 – Donation election
Dec 8 – Why I donate week
Dec 15 – Donation celebration
Summary

* I would say the total welfare of soil animals is overdetermined to be much larger in absolute terms than that of farmed invertebrates. The individual welfare per animal-year of soil ants and termites should not differ much from that of farmed invertebrates, and I calculate the population of soil ants and termites is 3.93 M times that of farmed black soldier fly (BSF) larvae and mealworms, and 652 k times that of farmed shrimps.
* Projects targeting soil animals receive far less funding than ones targeting farmed invertebrates. The Wild Animal Initiative (WAI) had granted 460 k$ to projects targeting invertebrates as of 7 November 2025. In contrast, the Shrimp Welfare Project (SWP) received 2.9 M$ in 2024.
* I believe interventions changing land use can increase welfare much more cost-effectively than ones targeting farmed invertebrates, accounting for effects on soil animals. I estimate funding the Centre for Exploratory Altruism Research’s (CEARCH’s) High Impact Philanthropy Fund (HIPF), which I calculate increases agricultural land by 1.29 k m²-years per $, changes the welfare of soil ants, termites, springtails, mites, and nematodes 3.43 k times as cost-effectively as SWP’s Humane Slaughter Initiative (HSI) increases the welfare of shrimps.
* I recommend research on the welfare of soil animals in different biomes over pursuing whatever land use change interventions naively look the most cost-effective. I have little idea whether funding HIPF, or any other way of changing land use, increases or decreases welfare. I am very uncertain about what increases or decreases soil-animal-years, and whether soil animals have positive or negative lives.
* There is no escape from the uncertainty of the effects on soil animals if one wants to increase animal welfare accounting for all animals. I do not know of any interventions which robustly increase animal welfare due to dominant uncertain effects on soil animals. I conclude electrically stunning farmed shri
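The headline comparison in the third bullet can be read as a chain of per-dollar factors. The sketch below only illustrates that structure; apart from the 1.29 k m²-years per $ figure quoted above, every number is a placeholder rather than one of the post's actual inputs, so it does not reproduce the 3.43 k ratio.

```python
# Sketch of the cost-effectiveness comparison structure described in the summary.
# All numeric inputs except the HIPF land figure are illustrative placeholders,
# NOT the post's actual estimates.

hipf_land_m2_years_per_dollar = 1.29e3   # agricultural land created per $ (from the summary)
soil_animal_years_per_m2_year = 1.0e5    # placeholder: soil-animal-years per m2-year of land
welfare_per_soil_animal_year = 1.0e-6    # placeholder: welfare change per soil-animal-year

hsi_shrimp_years_per_dollar = 1.0e2      # placeholder: shrimp-years improved per $ by HSI
welfare_per_shrimp_year = 1.0e-3         # placeholder: welfare change per shrimp-year

hipf_welfare_per_dollar = (hipf_land_m2_years_per_dollar
                           * soil_animal_years_per_m2_year
                           * welfare_per_soil_animal_year)
hsi_welfare_per_dollar = hsi_shrimp_years_per_dollar * welfare_per_shrimp_year

# With these placeholders the ratio is ~1.3 k, not the post's 3.43 k;
# the real figure depends on the author's own input estimates.
ratio = hipf_welfare_per_dollar / hsi_welfare_per_dollar
print(f"HIPF vs HSI cost-effectiveness ratio: {ratio:.3g}")
```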
Looking into this a bit more, from this thread it seems like OP's grants database may currently be missing as much as half of their 2025 GCR spending. 
This is an incredible first post! Thanks so much for sharing!
This is really awesome work, it's great to have someone put this together! Hopefully the drop in @GiveWell's grants is just a timing or reporting issue and not nearly as large as it seems. Maybe they'll be able to clarify further! If you wanted to extend this to cover more EA grants, I know FarmKind has a public database of grants from their platform that would be great to add. It would also be awesome if this could capture high-impact donations from Founders Pledge, but I'm not sure they provide granular enough data to track by year and cause area. Maybe talking to @Matt_Lerner could shed some light?
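If someone does extend the analysis this way, the aggregation itself is straightforward; here is a hedged sketch, where the file names and the column schema (funder, date, amount_usd, cause_area) are assumptions for illustration, not the actual format of any of these databases.

```python
# Hypothetical sketch: merging another grants export (e.g. from FarmKind)
# into an existing grants table and aggregating by year and cause area.
# File names and column names are assumed, not the real data schema.
import pandas as pd

existing = pd.read_csv("ea_grants.csv")        # assumed columns: funder, date, amount_usd, cause_area
farmkind = pd.read_csv("farmkind_grants.csv")  # same assumed schema

grants = pd.concat([existing, farmkind], ignore_index=True)
grants["year"] = pd.to_datetime(grants["date"]).dt.year

# Total dollars granted per year, split by cause area.
by_year_cause = (grants
                 .groupby(["year", "cause_area"])["amount_usd"]
                 .sum()
                 .unstack(fill_value=0))
print(by_year_cause)
```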
$4,137 raised for the Donation Election Fund (includes our match on the first $5,000)
Welcome to the EA Forum bot site. If you are trying to access the Forum programmatically (either by scraping or via the API), please use this site rather than forum.effectivealtruism.org.

This site has the same content as the main site, but is run in a separate environment to avoid bots overloading the main site and affecting performance for human users.
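For anyone accessing the Forum programmatically, here is a minimal sketch of a GraphQL request pointed at the bot site. The bot-site hostname below is an assumption, and the query shape follows the Forum's public GraphQL conventions as I understand them, so check the current API documentation before relying on either.

```python
# Hypothetical sketch of programmatic access via the bot site rather than the main Forum.
# The hostname, the /graphql path, and the query shape are assumptions based on the
# Forum's public GraphQL API; verify against the site's own documentation.
import requests

BOT_SITE = "https://forum-bots.effectivealtruism.org"  # assumed bot-site hostname

query = """
{
  posts(input: {terms: {limit: 5}}) {
    results { title pageUrl }
  }
}
"""

resp = requests.post(f"{BOT_SITE}/graphql", json={"query": query}, timeout=30)
resp.raise_for_status()
for post in resp.json()["data"]["posts"]["results"]:
    print(post["title"], "-", post["pageUrl"])
```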

Quick takes

I notice the 'guiding principles' in the introductory essay on effectivealtruism.org have been changed. It used to list: prioritisation, impartial altruism, open truthseeking, and a collaborative spirit. It now lists: scope sensitivity, impartiality, scout mindset, and recognition of trade-offs. As far as I'm aware, this change wasn't signalled. I understand a lot of work has recently been done to improve the messaging on effectivealtruism.org -- which is great! -- but it feels a bit weird for 'guiding principles' to have been changed without any discussion or notice. As far as I understand, back in 2017 a set of principles was chosen through a somewhat deliberative process, and then organisations were invited to endorse them. This feels like a more appropriate process for such a change.
Linch
crossposted from https://inchpin.substack.com/p/legible-ai-safety-problems-that-dont

Epistemic status: I think there’s something real here, but it was drafted quickly and imprecisely.

I really appreciated reading Legible vs. Illegible AI Safety Problems by Wei Dai. I enjoyed it as an impressively sharp crystallization of an important idea:

1. Some AI safety problems are “legible” (obvious/understandable to leaders/policymakers) and some are “illegible” (obscure/hard to understand).
2. Legible problems are likely to block deployment, because leaders won’t deploy until they’re solved.
3. Leaders WILL still deploy models with illegible AI safety problems, since they won’t understand the problems’ full import.
4. Therefore, working on legible problems has low or even negative value: if unsolved legible problems block deployment, solving them will just speed up deployment and thus AI timelines.
   * Wei Dai didn’t give a direct example, but the iconic example that comes to mind for me is Reinforcement Learning from Human Feedback (RLHF): implementing RLHF for early ChatGPT, Claude, and GPT-4 was likely central to making chatbots viable and viral.
   * The raw capabilities were interesting, but the human attunement was necessary for practical and economic use cases.

I mostly agree with this take. I think it’s interesting and important. However (and I suspect Wei Dai will agree), it’s also somewhat incomplete. In particular, the article presumes that “legible problems” and “problems that gate deployment” are coextensive, or at least that the correlation is positive enough that the differences are barely worth mentioning. I don’t think this is true.

For example, consider AI psychosis and AI suicides. This is obviously a highly legible problem that is very easy to understand (though not necessarily to quantify or solve). Yet they keep happening, and AI companies (or at least the less responsible ones) seem happy to continue deploying models withou
I've recently made an update to our Announcement on the future of Wytham Abbey noting that, as of today, the property has formally been sold. As was envisioned, proceeds from the sale will be allocated to high-impact charities, including EV’s operations.
Vince Gilligan (the Breaking Bad guy) has a new show Pluribus which is many things, but also illustrates an important principle, that being (not a spoiler I think since it happens in the first 10 minutes)... If you are SETI and you get an extraterrestrial signal which seems to code for a DNA sequence... DO NOT SYNTHESIZE THE DNA AND THEN INFECT A BUNCH OF RATS WITH IT JUST TO FIND OUT WHAT HAPPENS.  Just don't. Not a complicated decision. All you have to do is go from "I am going to synthesize the space sequence" to "nope" and look at that, x-risk averted. You're a hero. Incredible work.
People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble. Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031). For those who know more about forecasting than I do, and especially for those who can think of good ways to financially operationalize such a prediction (one illustrative operationalization is sketched below), I would encourage you to make a post about this. For now, an informal poll:
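To make "financially operationalize" concrete, one possible resolution criterion is a drawdown rule: resolve YES if an AI-heavy basket falls by more than some threshold from its running peak before January 1, 2031. The sketch below is only an illustration; the basket, the 50% threshold, and the toy price series are all assumptions, not a recommended criterion.

```python
# Hypothetical sketch of one way to operationalize "AI is in a bubble that pops":
# resolve YES if an AI-heavy basket falls more than 50% from its running peak
# before 2031-01-01. Basket, threshold, and prices are illustrative assumptions.
from datetime import date

def bubble_popped(prices: dict[date, float],
                  deadline: date = date(2031, 1, 1),
                  drawdown_threshold: float = 0.5) -> bool:
    """Return True if the peak-to-trough drawdown before the deadline
    ever reaches the threshold."""
    peak = float("-inf")
    for day in sorted(prices):
        if day >= deadline:
            break
        price = prices[day]
        peak = max(peak, price)
        if peak > 0 and (peak - price) / peak >= drawdown_threshold:
            return True
    return False

# Toy usage with made-up prices:
series = {date(2026, 1, 2): 100.0, date(2027, 6, 1): 180.0, date(2029, 3, 1): 80.0}
print(bubble_popped(series))  # True: 80 is a >50% drop from the 180 peak
```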