Effective immediately, my wife and I will no longer plan funding for EA or EAs. There’s enough money with OpenPhil to wind down operations gracefully: paying out all current grants, all grants under consideration that we normally would have made, and any new grants that come in within the next three months that we normally would have said yes to (existing charities receiving Open Philanthropy money are particularly encouraged to apply), and providing six months of runway for everyone currently at the non-profit before we shut down.

I want to emphasize that this is not because of anything that Alexander Berger or the rest of the wonderful team at OpenPhil have done. They’re great, and I think that they’ve tried as hard as anyone could to do the best possible work with our money.

It’s the rest of you. I present three primary motivations. They’re all somewhat interrelated, but hopefully by presenting three arguments in succession I can get you to update on each of them sequentially. Certainly I’ve lost all hope in y’all retaining any of the virtues of the rationalist community, rather than just its vices. I hope that this helps you as a community clean up your act while you try to convince someone else to fund this mess. Maybe Bernard Arnault. That was a joke. Haha, fat chance.

1. In the words of philosopher Liam Kofi Bright, “Why can’t you just be normal?”

https://forum.effectivealtruism.org/posts/DaRvpDHHdaoad9Tfu/critiques-of-prominent-ai-safety-labs-redwood-research

Two of Redwood's leadership team have or have had relationships to an [Open Philanthropy] grant maker. A Redwood board member is married to a different OP grantmaker. A co-CEO of OP is one of the other three board members of Redwood. Additionally, many OP staff work out of Constellation, the office that Redwood runs. OP pays Redwood for use of the space.

Just be normal. Stop having a social community where people live and work and study and sing together, and do social atomization like everybody else. This won’t cause any problems. Everyone else is doing it. There is another way! You don't, actually, need to have more partners than the average American adult has friends.[1]

Also, just don’t have sex. That’s not that much to ask for, is it? I’ve been married for a decade now: I can tell you, it’s perfectly possible.

2. I’m tired of all the criticism. I’m tired of it hitting Asana, which I still love and care about. Moving my donations instead to superyachts, artwork, and expanding an actually fun hobby (giant bonsai) is going to substantially reduce how often my family, friends, and employees see me getting attacked in one news outlet or another.

3. Pick a cause and stick with it. Have the courage of your convictions. I don’t need to spend my time hearing about sea-rats and prisoners and suffering matrices and matrices that are suffering and discount rates and so many different ways human bodies can go wrong in other countries and immigration and housing for techies and so many more. Y’all were supposed to be optimizers, so this splitting of donations between different cause areas should end. Like I said, most of my wealth is going into the new superyacht my wife and I will be commissioning. Maybe then you could stop arguing quite so much. Get it all out of your systems, figure out what the best charity is, and stick with it.

[Addendum: multiple people have mistaken this for having been written by Dustin Moskovitz, for which I apologize. It was written by Keller Scholl, with no input from Dustin.]
 

[1] The average American adult has three or fewer friends.

Comments

Hi Dustin,

Will this yacht replace the Empress of the Seas cruise ship grant, which was planned to house the new headquarters of Open Philanthropy 2? I’m highly skeptical that a yacht, unlike the original cruise ship design, will make for a headquarters large enough to house the top-performing 50% of EA.

What part of "no more EA funding" was I not clear enough about? Open Philanthropy 2 will be funded and hosted by someone else with an Open Wallet.

Several people supposedly thought this was written by Dustin, so it seems worth noting:

[Addendum: multiple people have mistaken this for having been written by Dustin Moskovitz, for which I apologize. It was written by Keller Scholl, with no input from Dustin.]

It's for the good of all of us, except the ones who'll suffer from it.

Because of time zones, the date of this post is displayed as Apr 2nd for me, and I read it while listening to a sad song that says “I’ve got this funny feeling that the end is near” (from “Mayflower, New York”), which made it all a bit funnier.
