
Is now the time to add to RP’s great work?
 

Someone should commission new moral weights work in the next year
58 votes. Voting has now closed.


Rethink’s Moral Weights Project (MWP) is immense and influential. Their work is the most cited “EA” paper written in the last 3 years by a mile - I struggle to think of another that comes close. Almost every animal welfare related post on the forum quotes the MWP headline numbers - usually not as gospel truth, but with confidence. Their numbers carry moral weight,[1] moving hearts, minds and money towards animals.

To oversimplify, if their numbers are ballpark correct then...

  1. Farmed animal welfare interventions outcompete human welfare interventions for cost-effectiveness under most moral positions.[2] (A rough sketch after this list shows how the weights drive this comparison.)

  2. Smaller animal welfare interventions outcompete larger animal welfare interventions if you aren’t risk averse.
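
To make point 1 concrete, here is a minimal back-of-envelope sketch. Every number in it is a hypothetical placeholder (the costs, the scale, and especially the moral weight), not RP’s or any charity’s actual estimate; the point is only to show how the moral weight enters the comparison linearly.

```python
# Minimal back-of-envelope with entirely hypothetical placeholder numbers:
# how a moral weight enters a cost-effectiveness comparison.

def welfare_per_dollar(individuals, years_each, moral_weight, cost_usd):
    """Welfare gain in human-equivalent welfare-years per dollar."""
    return individuals * years_each * moral_weight / cost_usd

# Hypothetical human intervention: ~25 welfare-years for $5,000.
human = welfare_per_dollar(1, 25, 1.0, 5_000)

# Hypothetical corporate campaign: $1M improving 10M chicken-years,
# at an illustrative (not RP's) moral weight of 0.1.
chicken = welfare_per_dollar(10_000_000, 1, 0.1, 1_000_000)

print(f"human:   {human:.4f} welfare-years per dollar")   # 0.0050
print(f"chicken: {chicken:.4f} welfare-years per dollar") # 1.0000
# The animal side wins by ~200x here, but the result scales linearly
# with the contested moral weight: at 0.0005 the two roughly tie.
```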


There are downsides to over-indexing on one research project for too long, especially on a question this important. The MWP was groundbreaking, and I hope it provides fertile soil for other work to sprout with new approaches and insights. Although the concept of “replicability” isn't quite as relevant here as with empirical research, I think it's important to have multiple attempts at questions this important. Given the strength of the original work, any new work might be lower quality - but perhaps we can live with that. Most people would agree that more deep work needs to happen here at some stage, but the question is whether now is the right time to intentionally invest in more.
 

Arguments against more Moral Weights work

  1. It might cost more money than the value it adds.
  2. New researchers are likely to land on similar approaches and numbers to RP's, so what's the point?[3]
  3. RP’s work is as good as we are likely to get; why try again for a probably worse product?
  4. We don’t have enough new scientific information since the original project to meaningfully add to the work.
  5. So little money goes to animal welfare work now anyway that we might do more harm than good, at least in the short term, if new research pushes in the other direction from RP's work.


Steps I’d like to see in a new project[4]

  1. Form a diverse group of experts, including some more skeptical of animal sentience and moral worth alongside pro-animal-welfare scientists.
  2. Start basically from scratch, making efforts not to rely too much on RP’s work[5]

  3. Exclude those who worked directly on the RP project
  4. Make efforts to include experts from outside the EA sphere


Full disclosure, I’m not unbiased here. As I’ve written, I think RP may have favored animals at a number of critical junctures. The project was spearheaded by animal welfare campaigners, neuron counts were largely disregarded, and their behavioral score system meant that tiny weights were very unlikely. I also think it's a little strange that RP don’t seem to have changed their numbers or processes in response to further inspection and criticism.[6] Often by now a project that complex would have evolved and changed in some respects. Of course none of this means that they are wrong, and I, like many, have updated heavily in favour of animal welfare work on the basis of their excellent research.

What do you think - is now the time to intentionally plan and fund more moral weights work?
 

  1. ^

    Zero apologies for the pun

  2. ^

     See Rethink’s own moral parliament https://parliament.rethinkpriorities.org/

  3. ^

    IMO this would still be useful information if independent people ended up with similar methods and numbers.

  4. ^

     I realise these might be hard in practice.

  5. ^

     Difficult to be sure, given the lack of other work.

  6. ^

     Either self-reflection or external criticism

Comments (19)



Hi Nick. Thanks for the kind words about the MWP. We agree that it would be great to have other people tackling this problem from different angles, including ones that are unfriendly to animals. We've always said that our work was meant to be a first pass, not the final word. A diversity of perspectives would be valuable here.

For what it’s worth, we have lots of thoughts about how to extend, refine, and reimagine the MWP. We lay out several of them here. In addition, we’d like to adapt the work we’ve been doing on our Digital Consciousness Model for the MWP, which uses a Bayesian approach. Funding is, and long has been, the bottleneck—which explains why there haven’t been many public updates about the MWP since we finished it (apart from the book, which refines the methodology in notable ways). But if people are interested in supporting these or related projects, we’d be very glad to work on them.
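
To give readers a feel for what a Bayesian approach here could look like, below is a generic, heavily hedged sketch. It is not the Digital Consciousness Model, whose details aren't described in this thread; the model names, priors, and distributions are all invented for illustration. The idea is simply to hold explicit credences over competing welfare-range models and report the mixture.

```python
import random

random.seed(0)

# Hypothetical competing models of one species' welfare range
# (human = 1). Both distributions are invented for illustration.
models = {
    "neuron_count":     lambda: random.lognormvariate(-6.0, 1.0),  # tiny weights
    "behavioral_proxy": lambda: random.betavariate(2, 4),          # moderate weights
}

# Assumed prior credence in each model.
priors = {"neuron_count": 0.5, "behavioral_proxy": 0.5}

# Mixture estimate: sample a model by credence, then sample a welfare
# range from it. A real Bayesian treatment would also update the
# credences on evidence; this only shows the aggregation step.
names, weights = list(priors), list(priors.values())
samples = [models[random.choices(names, weights=weights)[0]]()
           for _ in range(100_000)]

print(f"mixture mean welfare range: {sum(samples) / len(samples):.4f}")
```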

I’ll just add: I’ve long thought that one important criticism of the MWP is that it’s badly named. We don’t actually give “moral weights,” at least if that phrase is understood as “all things considered assessments of the importance of benefiting some animals relative to others” (whether human or nonhuman). Instead, we give estimates of the differences in the possible intensities of valenced states across species—which only double as moral weights given lots of contentious assumptions. 

All things considered assessments may be possible. But if we want them, we need to grapple with a huge number of uncertainties, including uncertainties over theories of welfare, operationalizations of theories of welfare, approaches to handling data gaps, normative theories, and much else besides. The full project is enormous and, in my view, is only feasible if tackled collaboratively. So, while I understand the call for independent teams, I’d much prefer a consortium of researchers trying to make progress together.

Thanks @Bob Fischer, those are all good points.

I agree it's very difficult, and probably impossible, to "get right" with a small team of researchers, but I still think (as many people have commented) that there would be great value in truly independent work on this. There is too much upside to independent work here to continue with collaboration alone, even if a reduction in quality might be a downside.

If work continued through collaboration alone, I think the gravity well effect mentioned in another comment would be hard to avoid, credibility would be reduced, and new researchers might find it hard to flesh out new methodology and ideas, or might in some cases become adversarial, if RP's team were involved from the beginning of any new research.

Of course then collaboration and conversation would come later.

Someone should commission new moral weights work in the next year


I’d like to see it expanded to even smaller animals if possible, like @Vasco Grilo🔸 asks here.

Thanks for pointing that out, Hugh! I would like Rethink Priorities (RP) to get mainline welfare ranges for nematodes, microorganisms (protists, archaea, and bacteria), and plants. From Table S1 of Bar-On et al. (2018), there are 10^21 nematodes, 10^27 protists, 10^29 archaea, 10^30 bacteria, and 10^13 trees. I think effects on nematodes, and maybe microorganisms, are the driver of the overall effects of the vast majority of interventions.
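
A quick sketch of the arithmetic behind this point, using the Bar-On et al. population counts but purely hypothetical welfare ranges (no one has defensible estimates for these organisms yet):

```python
# Populations from Table S1 of Bar-On et al. (2018); welfare ranges are
# invented placeholders, chosen only to show the scale effect.
populations = {
    "humans":    8e9,
    "nematodes": 1e21,
    "protists":  1e27,
    "bacteria":  1e30,
}
welfare_ranges = {  # relative to humans, hypothetical
    "humans":    1.0,
    "nematodes": 1e-6,
    "protists":  1e-12,
    "bacteria":  1e-15,
}

for species, pop in populations.items():
    print(f"{species:10s} {pop * welfare_ranges[species]:.0e} human-equivalents")

# Even at a one-in-a-million welfare range, nematodes (1e15
# human-equivalents) swamp humanity (8e9), which is why effects on tiny
# organisms can dominate an intervention's overall sign.
```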

Does anyone know roughly what this would cost, either financially or in terms of what the people involved would be doing counterfactually?

(Obviously the amount would depend on the precise scope of work, but given that funding seems to be the bottleneck, throwing a range out there might sharpen the discussion.)

I'm not an expert on moral weights research itself, but approaching this rationally, I’m strongly in favour of commissioning an independent, methodologically distinct reassessment of moral weights—precisely because a single, highly-cited study can become an invisible “gravity well” for the whole field.

Two design suggestions that echo robustness principles in other scientific domains:

  1. Build in structured scepticism.
    Even a small team can add value if its members are explicitly chosen for diverse priors, including at least one (ideally several) researchers who are publicly on record as cautious about high animal weights. The goal is not to “dilute” the cause, but to surface hidden assumptions and push every parameter through an adversarial filter.
  2. Consider parallel, blind teams.
    A light-weight version of adversarial collaboration: one sub-team starts from a welfare-maximising animal-advocacy stance, another from a welfare-sceptical stance. Each produces its own model and headline numbers under pre-registered methods; then the groups reconcile differences. Where all three sets of numbers (Team A, Team B, RP) converge, we gain confidence. Where they diverge, at least we know which assumptions drive the spread (sketched in code at the end of this comment).

The result doesn’t have to dethrone RP; even showing that key conclusions are insensitive to modelling choices (or, conversely, highly sensitive) would be valuable decision information for funders.

In other words: additional estimates may not be “better” in isolation, but they sharpen our collective confidence interval—and for something as consequential as cross-species moral weights, that’s well worth the cost.
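
If it helps, here is a minimal sketch of what the reconciliation step could look like. The team labels and every number are invented; the substance is the procedure: compare headline weights across teams and flag the species where the spread is large.

```python
# Reconciliation step for parallel blind teams, with invented numbers.
estimates = {
    "Team A (advocacy prior)":  {"chicken": 0.40, "shrimp": 0.05},
    "Team B (sceptical prior)": {"chicken": 0.10, "shrimp": 0.0005},
    "RP":                       {"chicken": 0.33, "shrimp": 0.03},
}

for species in ["chicken", "shrimp"]:
    values = [team[species] for team in estimates.values()]
    spread = max(values) / min(values)
    verdict = "converged" if spread < 10 else "diverged: inspect driving assumptions"
    print(f"{species}: {spread:.0f}x spread across teams -> {verdict}")
```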

Thanks for fleshing this out more; both of your design suggestions make a lot of sense to me. You also stated one of my major concerns far better than I did:

"A single, highly-cited study can become an invisible “gravity well” for the whole field."

Hi Nick, just sent you a brief DM about a “stress-test” idea for the moral-weight "gravity well". Would appreciate any steer on who might sanity-check it when you have a moment. Thanks!

I'm not sure it needs a whole other large project, especially one started from scratch. You could just have a few people push further on these points, which seem like the most likely cruxes:

  1. Further developing and defending measures that scale with neuron counts.
  2. Assessing animals on normative stances besides expectational hedonistic utilitarianism.
  3. Defending less animal-friendly responses to the two envelopes problem (see prior writing and the comments here, here, here, here, here and here); a toy numeric illustration follows below.
  4. EDIT, also: Assessing the probability that invertebrates of interest (and perhaps other animals of interest) can experience excruciating or unbearable pain, i.e. effectively all-consuming pain that an animal would be desperate and take incredible risks to avoid.

And then have them come up with their own models and estimates. They could mostly rely on the studies and data RP collected on animals, although they could check the ones that seem most cruxy, too.
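
For readers new to the two envelopes problem in this setting, here is a toy numeric sketch. The 50/50 credences and the two candidate weights are invented; the point is that the same credences imply very different expected weights depending on whether you fix human welfare or chicken welfare as the unit.

```python
# Toy numbers for the two envelopes problem. Suppose we are 50/50
# between two hypotheses about a chicken's welfare range vs a human's.
p, high, low = 0.5, 0.5, 0.005

# Fix HUMAN welfare as the unit: expected chicken weight.
in_human_units = p * high + p * low          # 0.2525

# Fix CHICKEN welfare as the unit: expected humans-per-chicken, inverted.
in_chicken_units = 1 / (p / high + p / low)  # 1/101 ~ 0.0099

print(in_human_units, in_chicken_units)
# Same credences, ~25x difference in the expected weight, driven purely
# by the choice of unit. This is the normalization crux in point 3.
```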

In case anyone is interested, I also have:

  1. unpublished material on the conscious subsystems hypothesis, related to neuron count measures, with my own quantitative models, arguments for my parameter choices/bounds and sensitivity analysis for chickens vs humans. Feel free to message me for access.
  2. arguments for animals mattering a lot in expectation on non-hedonistic stances here and here, distinct from RP's.

I think Heather Browning has an upcoming book project about Interspecies Welfare Comparisons - here's an example of her published work on the topic.

Someone should commission new moral weights work in the next year

 

The moral weights of animals seem like one of the most important inputs into cause prioritization. The difference between whether we use RP weights or neuron counts is the difference between whether the present contains more happiness than suffering, and potentially whether humanity has been overall good or bad for wellbeing.
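
A hedged sketch of that sign flip. All magnitudes here are invented (the average welfare levels especially are pure assumptions), and the two weights are illustrative stand-ins for an RP-style welfare range versus a neuron-count ratio, not exact published values:

```python
# How the weighting choice alone can flip the sign of present-day welfare.
humans          = 8e9
farmed_chickens = 2.5e10   # rough standing population, for illustration
human_welfare   = +0.5     # assumed net-positive average (arbitrary scale)
chicken_welfare = -0.5     # assumed net-negative average on farms

for label, weight in [("RP-style welfare range", 0.33),
                      ("neuron-count ratio",      0.002)]:
    total = humans * human_welfare + farmed_chickens * chicken_welfare * weight
    sign = "net positive" if total > 0 else "net negative"
    print(f"{label:22s}: {total:+.1e} -> {sign}")
# With these assumptions, the RP-style weight makes the present net
# negative, while the neuron-count weight leaves it net positive.
```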

This also poses challenges for the future. Averting catastrophes is profoundly insufficient if the default trajectory for wellbeing is negative (and potentially worsening). Indeed, if the default trajectory is negative (and we have no good ways of changing it), we can imagine the universe giving a sigh of relief if we were filtered out of the cosmic pool of awareness.

Given the profound importance for cause prioritization -- if the present is overall negative for wellbeing, I think it implies we should focus much, much more on making the future and present go well than go long -- we should have several independent, well-resourced attempts to answer the question of "how do we weigh the wellbeing of animals versus humans?"

I'd like to see a few more surveys on moral weights, with larger samples of both animal welfare experts and laypeople, than the small one (n=100) I conducted in Belgium: https://brill.com/view/journals/jaae/7/1/article-p91_6.xml

I agree. We proposed some surveys on this topic here and here. And we did some very limited work conceptually replicating earlier surveys here.

The MWP project was impressive and thought-provoking. But given how important this topic is and how load-bearing a single report has become for essential cause prio discussions, it seems crucial to get a second perspective that's just as careful and detailed!

Someone should commission new moral weights work in the next year

I strongly agree with the author’s point about the danger of relying too heavily on one study, especially given the importance of moral weights in estimating cost-effectiveness. I also think there is value in reexamining moral weights within GHD (e.g. health relative to income).

Someone should commission new moral weights work in the next year

I think any study should be replicated, and RP's moral weights are no exception. Having different frameworks can allow us to get upper and lower bound estimates. I'm sure even the lower bounds would not be negligible when multiplied by the number of animals in factory farms.

Someone should commission new moral weights work in the next year

Most of the reasons stated by others elsewhere in this thread. In general, multiple independent investigations boost credibility. Not sure whether it needs to be in the next year given resource constraints, but sooner is better.

Someone should commission new moral weights work in the next year

 

There are many judgement calls in this type of work. A new moral weights project will give useful information about how robust results are.
