This is a special post for quick takes by Haris Shekeris. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

I'm a bit of a newbie in EA (two to three weeks of reading and discovering stuff), so this may prove quite irrelevant, but here it goes anyway. I'm wondering if EAs should be worried about stories like the following (if needed, I think I can find the scientific literature behind it):

https://www.sciencetimes.com/articles/40818/20221104/terrifying-model-provides-glimpse-humans-look-year-3000-overusing-technology.htm

My worry is that the standard EA literature, which assumes there will be thousands of generations to come if humans are left alone, may overlook mundane effects or scenarios such as those stemming from studies like this one.

One example, based on the above, could be that future humans become unrecognizable compared to today, but for the mundane reason of using well-established technologies that already exist (laptops and smartphones). An unlikely extension of this is that in, say, 1,000 years Homo sapiens goes extinct because evolution optimized for device use has interfered with the reproductive system (or simply because people rationally decided they no longer wanted to have sex, whether for pleasure or for child-rearing).

Another example could be the long-term effects of substances accumulating in the body, which again change the average human body. These are unknown unknowns at the moment, but there is the example from the 1990s of fish turning hermaphroditic after exposure to antidepressants in a lake, which alerted people to the effects of small yet steady concentrations of medicines in the body. A concrete scenario: suppose we discovered soon, say in 2025, that a gut concentration of microplastics above, say, 2 μg begins to seriously degrade sperm or egg quality, rendering reproduction impossible.

Of course, we can always assume that major scientific bodies will produce advice to reverse such adverse effects, but what if that advice is only as effective as anti-smoking campaigns? Imagine a campaign today urging people in advanced Western countries to urgently cut internet time to 30 minutes per day because a cutting-edge scientific report has linked it to a rise in deadly brain tumours. How would that scenario play out? My prediction would involve denialism, conspiracy theories, and rioting if a technological fix didn't come fast, to say the least. Remember that even with COVID, a global pandemic, the desired response (for example, everybody or most people in the world getting vaccinated, both to avoid dying and, altruistically, to avoid spreading the virus, at least while there was uncertainty about how deadly it would be) largely failed because humanity brought out its worst self: politics among countries over securing more vaccines for their own citizens, or pharmaceutical companies maximizing their profits, to cite just two examples.

Once again, apologies if this is a bit off-topic or totally misses the point of EA.

I disagree a little bit with the credibility of some of the examples, and want to double-click others. But regardless, I think this is a very productive train of thought and thank you for writing it up. Interesting!

And btw, if you feel like a topic of investigation "might not fit into the EA genre", and yet you feel like it could be important based on first-principles reasoning, my guess is that that's a very important lead to pursue. Reluctance to step outside the genre, and thinking that the goal is to "do EA-like things", is exactly the kind of dynamic that's likely to lead the whole community to overlook something important.

Dear Emrik, 

Many thanks for the feedback and for the encouragement! The examples were a bit speculative, though the fish one is quite well known; I think it was in the 90s. Also, as far as I know, long-term studies of the effects of small yet steady concentrations of macromolecules have only recently begun to be conducted, not least because ten years ago we didn't have the technology to pursue such studies.
If anybody is interested in "researching together", I can imagine pursuing this further (this is an invitation); at the moment, though, it's just an idle thought.

So please, anybody with more knowledge and means: if you're interested, I'd welcome the chance to conduct a literature review on anything mentioned above, and we can take it from there!

https://www.theguardian.com/commentisfree/2022/nov/30/science-hear-nature-digital-bioacoustics

What happens if in the future we discover that all life on Earth (especially plants) is sentient, but at the same time (a) there are many more humans on the planet waiting to be fed and (b) synthetic foods/proteins are deemed dangerous to human health?

Do we go back to eating plants and animals again? Do we farm them? Do we keep pursuing food technologies despite the past failures?

Flagging a potential problem for longtermism and the possibility of expanding human civilisation to other planets: what will the people there eat? Can we just assume that technoscience will give us the answer? Or is that too quick and optimistic? Can one imagine a situation where humanity goes extinct because the Earth finally becomes uninhabitable and, on the first new planet we settle, the technology either fails or the settlers miss the window of opportunity to develop their own food? I'm sure there must be examples of this in the history of human settlers reaching new worlds; I don't know whether anybody is working on this in the context of longtermism, though.

Just some food for thought, hopefully.

https://www.theguardian.com/environment/2023/jan/07/holy-grail-wheat-gene-discovery-could-feed-our-overheated-world
