This is a special post for quick takes by Rockwell. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

One reason I'm excited about work on lead exposure is that it hits a sweet spot of meaningfully benefiting both humans and nonhumans. Lead has dramatic and detrimental effects not just on mammals, but on basically all animals, from birds to aquatic animals to insects.

Are there other interventions that potentially likewise hit this sweet spot?

Someone (anonymous, I think) recently suggested family planning at this intersection as well, because fewer humans = less animal suffering too. As a counterpoint, I did think this could be offset if the accelerated development associated with family planning also meant quicker transitions to factory farming, but that's just conjecture.

On this note, any intervention that speeds development could potentially land in the "negative", anti-sweet-spot category here too, since a more developed country = more meat eaten = more factory farming.

Perhaps the bean-soaking thing could also lean slightly in this direction: someone suggested that if cooking beans were cheaper it could push further against eating meat, and also prevent deforestation - and deforestation could either increase wild animal suffering by reducing habitat, or reduce it, since fewer wild animals can survive in the deforested area.

Wow it's complicated

It’s noteworthy that if the procreation asymmetry is rejected, the sign of family planning interventions is the opposite of the sign of lifesaving interventions like AMF. Thus, those who support AMF might not support family planning interventions, and vice versa.

I admire you for repeatedly pushing a point that is so ideologically awkward for people, but that's not quite right. Sometimes family planning just changes when people have kids, rather than how many. In those cases, the other gains from it are good on all sensible views, and there's no objection based on "creating happy people is good". 

I appreciate that, and I agree with you!

However, as far as I'm aware, EA-recommended family planning interventions do decrease the number of children people have. If these charities benefit farmed animals (and I believe they do), decreasing the human population is where their benefits for farmed animals come from.

I've estimated that both MHI and FEM prevent on the order of 100 pregnancies for each maternal life they save. Unless my estimates are way too high (please let me know if they're wrong; I'm happy to update!), even if only a very small percentage of these pregnancies would have resulted in counterfactual births, both of these charities would still on net decrease the number of children people have.
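To make the arithmetic concrete, here's a minimal back-of-the-envelope sketch. The ~100:1 ratio comes from the estimate above; the 5% counterfactual-birth rate is purely illustrative, not an estimate from MHI or FEM:

```python
# Rough sketch of the net population effect per maternal life saved.
# Placeholder assumptions: 100 pregnancies prevented per life saved (from the
# comment above) and an illustrative 5% counterfactual-birth rate.
pregnancies_prevented_per_life_saved = 100
counterfactual_birth_rate = 0.05  # assumed fraction that would otherwise have become births

births_prevented = pregnancies_prevented_per_life_saved * counterfactual_birth_rate
people_added_by_lifesaving = 1  # the mother whose life was saved

net_population_change = people_added_by_lifesaving - births_prevented
print(f"Births prevented per life saved: {births_prevented:.1f}")
print(f"Net population change per life saved: {net_population_change:+.1f}")
# With these placeholder numbers: 5 births prevented vs. 1 life saved, a net
# decrease - and the conclusion holds for any counterfactual-birth rate above ~1%.
```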

To the extent that they change timing rather than total number, the benefits (e.g. reduced maternal mortality) are probably overstated as well, because some of the maternal deaths you thought you prevented were actually just delayed.

Despite this, I think Ariel is correct that these interventions are reducing the number of children.

Big picture, isn't this making a normative judgement? Assuming the Earth has a carrying capacity for total biomass, fewer humans means more animal lives - lives that are unable to record or communicate their experiences. We don't know what animals experience pre-language, but it's possible they are unable to reliably encode their experiences without the structure of a human language. (Similar to how humans have little memory from early childhood.)

I am not sure it's a fair normative judgement to conclude this is an improvement.

Take it to the limit. All of humanity has died off except a small 100 person tribe. Nature has reclaimed everything else. Is this a net better world?

That biomass assumption has fallout if it's correct. For example, blocking housing expansion to preserve wolf habitat might be the same tradeoff. Are the QALYs of the wolves worth more than those of the humans who might live there?

I think the biomass assumption does have a flaw: when we generate artificial fertilizer from fossil fuels and feed humans and pets with the resulting agricultural products, we are in disequilibrium - we can only do this for a finite amount of time.

I’m a huge fan of lead elimination too! And I could imagine that, for instance, cleaning up soil from battery recycling or mining could benefit some animals.

But just wanted to note that some of the most promising interventions to protect humans (eg getting lead out of spices, paint, cookware, cosmetics, toys, water pipes, etc) might not have much effect on nonhuman animals.

EA NYC is soliciting applications for Board Members! We especially welcome applications submitted by Sunday, September 24, 2023, but rolling applications will also be considered. This is a volunteer position, but crucial in both shaping the strategy of EA NYC and ensuring our sustainability and compliance as an organization. If you have questions, Jacob Eliosoff is the primary point of contact. I think this is a great opportunity for deepened involvement and impact for a range of backgrounds!

I often see people talking past each other when discussing x-risks because the definition[1] covers outcomes that are distinct in some worldviews. For some, humanity failing to reach its full potential and humanity going extinct are joint concerns, but for others they are separate outcomes. Is there a good solution to this?

  1. ^

    "An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development." (source)

I propose "positive and negative longtermism": anything to do with reaching full potential would be positive longtermism, and mere extinction prevention would be negative longtermism.
