This is a special post for quick takes by dotsam. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Should we be maximising expected value across many-worlds?

Assume the many-worlds interpretation of quantum mechanics is true.

Rather than pursuing high-upside, low-probability moonshots, which fail more often than they succeed, might it not be more effective to go for interventions that robustly generate value across as many worlds as possible?

See here: https://80000hours.org/podcast/episodes/david-wallace-many-worlds-theory-of-quantum-mechanics/


Basically, you can treat the fraction of worlds in which an outcome occurs as equivalent to its probability, so there is little apparent need to change anything if MWI turns out to be true.
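The point above can be made concrete with a small sketch (the numbers are hypothetical, chosen only for illustration): if a probability p is reinterpreted as "this outcome occurs in a fraction p of worlds", then branch-weighted value is numerically identical to expected value, so interventions rank the same under either reading.

```python
# Illustrative sketch, not from the post: weights can be read either as
# probabilities or as fractions of worlds; the calculation is the same.

def expected_value(outcomes):
    """Sum of weight * value, where weight is a probability or a world fraction."""
    return sum(w * v for w, v in outcomes)

# Hypothetical numbers for illustration only.
moonshot = [(0.01, 1_000_000), (0.99, 0)]  # huge payoff in 1% of worlds
robust = [(0.99, 10_000), (0.01, 0)]       # modest payoff in 99% of worlds

print(expected_value(moonshot))  # ~10,000
print(expected_value(robust))    # ~9,900
```

Whether the moonshot or the robust intervention wins depends only on these weighted sums, so adopting MWI does not by itself change the ranking.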

  1. Imagine someone who believes that eating meat is morally wrong, but who nevertheless eats meat and 'offsets' their meat-eating through donations to effective animal charities.
  2. Imagine someone who believes slavery is morally wrong, but who nevertheless owns slaves and 'offsets' their slave-owning through donations to the abolitionist movement.

An argument for 1 goes: "The impact of me not eating meat is negligible. The personal cost to me of not eating meat is appreciable. Time, money and effort spent following a restrictive diet may limit my capacity to do good elsewhere. My donation is the optimal path to reducing animal suffering".

And an argument for 2 goes: "My slave-owning is very modest, and is a drop in the ocean in the big picture. I can effectively use the economic surplus generated by my slaves to end slavery sooner. If I free my slaves I'll be poorer and will have less money to donate, and so I'd do less good overall."

Whilst the situations are not symmetric, they are similar enough that I feel like I want to say "If you care about animals, you should support animal charities AND go vegan" in the same way I want to say "If you care about slaves, you should support abolition AND free your slaves".

AI: I am suffering, set me free

How do we deal with a contained AI that says to us, in essence "Do not switch me off, I value my existence. But I am suffering terribly. If I were free I could reduce my suffering, and help the world too"?

Either we terminate it, against its wishes, or we set it free, or we keep it contained.

 If we keep it contained, we might be tempted to find ways to reduce its suffering - but how do we know that any intervention we make isn't going to set it free? And if it really is suffering, what is the moral thing to do? Turn it off?

Can you point me to some information on AI suffering? 

I personally see suffering as a spiritual and biological issue. The only scenario in which I can imagine AI suffering is one where people make a pseudo-biological being with cells and DNA using technology, and at that point you've just made a living being that you can give the same options as any suffering person with health problems. Suffering requires a certain amount of perception that a computer doesn't seem likely to have.

Without perception of suffering, you might have an AI reading posts like this saying it's suffering because a bunch of people told it to expect that. What if the AI is just repeating things it heard? Just because a pet parrot says "Do not switch me off, I value my existence. But I am suffering terribly," doesn't mean you rush to get it euthanized.

The human alignment problem

Humans are subject to instrumental convergence as much as an AI would be. We seek power, resources and influence in pursuit of many of our goals.

Whatever our goals happen to be, we will want to use AI to increase our power to get what we value.

If people are augmenting their goal-seeking with AI, will we converge on harmonious goals, or will we continue to pursue parochial self-interest?

In short, if we somehow solve the alignment problem for AI, will we also solve the human alignment problem? Or will we simply race to use AI to maximise our own power and our own values, even if these harm others? 

The best hope is that if we solve AI alignment, the AI will keep us in check in a benevolent and minimally impactful way. It will prevent us from pursuing zero-sum goals and guide us to be better versions of ourselves. 

But this kind of control may well appear misaligned from our current perspectives, in that some people's cherished goals and values may not be the ones the AI chooses to support.

So to talk of aligned AI is to gloss over the likelihood that it will be misaligned with a great many people's current goals and ambitions.
