I'm a Senior Researcher for Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Hi Nick. Thanks for the kind words about the MWP. We agree that it would be great to have other people tackling this problem from different angles, including ones that are unfriendly to animals. We've always said that our work was meant to be a first pass, not the final word. A diversity of perspectives would be valuable here.
For what it’s worth, we have lots of thoughts about how to extend, refine, and reimagine the MWP. We lay out several of them here. In addition, we’d like to adapt the work we’ve been doing on our Digital Consciousness Model, which uses a Bayesian approach, for the MWP. Funding is, and long has been, the bottleneck—which explains why there haven’t been many public updates about the MWP since we finished it (apart from the book, which refines the methodology in notable ways). But if people are interested in supporting these or related projects, we’d be very glad to work on them.
I’ll just add: I’ve long thought that one important criticism of the MWP is that it’s badly named. We don’t actually give “moral weights,” at least if that phrase is understood as “all things considered assessments of the importance of benefiting some animals relative to others” (whether human or nonhuman). Instead, we give estimates of the differences in the possible intensities of valenced states across species—which only double as moral weights given lots of contentious assumptions.
All things considered assessments may be possible. But if we want them, we need to grapple with a huge number of uncertainties, including uncertainties over theories of welfare, operationalizations of theories of welfare, approaches to handling data gaps, normative theories, and much else besides. The full project is enormous and, in my view, is only feasible if tackled collaboratively. So, while I understand the call for independent teams, I’d much prefer a consortium of researchers trying to make progress together.
Thanks for this great post! Really appreciate your thinking about this important question.
Here's one question that I'm turning over. On the face of it, you might think of the pain categories as being assessed behaviorally and relative to an individual’s capacity for welfare. So, disabling pain would be whatever pain “takes priority over most bids for behavioral execution and prevents all forms of enjoyment or positive welfare.” But then, disabling pain wouldn't be a single pain level across species, one that some animals can feel and others can't. It would be a capacity-for-welfare-neutral behavioral characterization of an internal state.
However, your post seems not to endorse this view. Instead, it seems to imply that the pain categories are indexed to humans, without any assumption that all animals can experience the same thing.
I don't necessarily have an objection to the indexed-to-humans view. However, it does seem to undermine the idea that we can look at behavior to assess the presence of a particular pain level unless we have independent reasons to think that the relevant animal is capable of that pain level. Am I understanding that correctly?
Thanks a bunch!
We discuss this in the book here. The summary:
...we’ll have to settle for some rough, intuitive sense of the space of possibilities relative to which we’re evaluating welfare ranges. We might think of them as the “realistic biological possibilities,” or something to that effect, which seems like the set of possibilities to which general physiological, cognitive, and behavioral traits might be relevant (as, again, we’ll discuss in Chapters 5–7). Very roughly, these possibilities are the ones that we take to be realistic ways things could turn out for an individual based on our best biological theories and our understanding of their biological characteristics.
Of course, even if we have a tolerably good understanding of the “realistic biological possibilities,” it remains the case that a “tolerably good understanding” leaves plenty of room for disagreement about specific cases, including many that may be practically relevant. So, we aren’t going to get the fine-grained, context-sensitive picture we might have wanted—or, at least, not without further discussion about how to extend the framework that we’re developing. However, whatever the limits of this approach, it does reasonably well overall. It does a better job of limiting our attention to relevant possibilities (in one sense of “relevant”) than we’d get by considering the logical, metaphysical, or physical possibilities. Insofar as we can secure welfare-relevant biological knowledge, it does reasonably well on the epistemic criterion, and while more coarse-grained than we might like, it may still prove useful in many practical contexts. After all, the goal here is to improve interspecies welfare comparisons relative to armchair speculation. If that method is bad enough, then the bar for claiming improvement is low.
Thanks to everyone for the discussion here. A few replies to different strands.
First, I agree with Vasco that transparency matters. However, transparency isn’t the only good—and, unfortunately, it often competes with others. (Time is limited. Optics are complicated. Etc.) So, by Vasco's own lights, it's only plausible that organizations should devote scarce resources to answering this particular cause prioritization question—and then post their answer publicly on the Forum—if they think (or should think) that the expected value of so doing is positive. It isn’t obvious that anyone in these organizations thinks (or should think) that’s true.
Second, you can use our work on welfare ranges without buying into naive expected utility maximization. I assume that many people who use our welfare ranges are averse to being mugged and, as a result, adopt one of the many strategies for avoiding that outcome. So, it can be true both that (a) the impact on some group of animals is very large in expectation and that (b) you aren’t rationally required, by your own lights, to care much about that fact (and, by extension, to investigate it in depth or engage on it publicly).
Third, our models have a narrow theoretical and pragmatic purpose: we wanted to improve the community’s thinking about cause prioritization regarding a group of animals where we took there to be good evidence of sentience. We don’t think you can take our models and apply them generally, nor do we think you can ignore the specific purpose for which they were developed. Put differently, once some animals have crossed some threshold of plausibility for sentience, we support using our models with trepidation, largely because we don't have better options. But you shouldn't apply the models beyond that threshold, and if you have other principled ways to make decisions, they're probably better. (Principled: “We think that any theory of change for the smallest animals begins with key victories for larger animals.” Unprincipled: “We don’t like thinking about the smallest animals.”)
Fourth, we disagree with @NickLaing's characterization of the Moral Weight Project as stacking the deck in favor of high welfare range estimates. There are two reasons why. One of them is that the MWP does not say, “Sum the number of proxies found for a species and divide by the total number of proxies to get the welfare range.” If that were true, then the number of proxies would straightforwardly determine the maximum difference in welfare ranges. But that isn't correct. We have models (like the cubic model) where you need to have lots of proxies before you have a "highish" welfare range. However, we have lots of models, with uncertainty across them. Predictably, then, more moderate estimates emerge rather than any extreme (whether high or low). Someone is free to say: "A better methodology wouldn't have been so uncertain about the models; it would have just included animal-unfriendly options." That's clearly tendentious, though, and we think we made the right call in including a wider range of theoretical options. That being said, we’ll reiterate that those who are interested in the details of the project should examine the particulars of each model and its conclusions rather than simply taking the overall estimates at face value. You can find each model’s results here.
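To illustrate how spreading uncertainty across models moderates the results, here's a minimal, purely hypothetical sketch. The model names, functional forms, equal model weights, and 60% proxy share are assumptions for illustration only, not the MWP's actual models or parameters:

```python
# Purely illustrative sketch (hypothetical models and numbers, not the MWP's).
# Suppose a species scores a proportion p of the proxies, and we entertain
# several functional forms mapping that proportion to a welfare range.
def welfare_range_estimates(p):
    models = {
        "linear": p,          # welfare range scales directly with the proxy share
        "quadratic": p ** 2,  # intermediate proxy shares are discounted
        "cubic": p ** 3,      # only near-complete proxy coverage yields a high range
    }
    # With uncertainty spread across models, take an equally weighted mixture.
    mixed = sum(models.values()) / len(models)
    return models, mixed

models, mixed = welfare_range_estimates(0.6)  # hypothetical: 60% of proxies found
print({name: round(v, 3) for name, v in models.items()})
# {'linear': 0.6, 'quadratic': 0.36, 'cubic': 0.216}
print(round(mixed, 3))  # 0.392: more moderate than either the friendliest or least friendly model
```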
The second reason we disagree with Nick’s characterization of the MWP is that, even if you isolate a particular model, you don’t automatically get high welfare ranges. Suppose, for instance, that there are 80 proxies total and that a model uses them all. If N of them were as simple as "any pain-averse behavior," then, for the core models of the MWP, saying "likely yes" to each of them would give you a sentience-conditioned welfare score of 0.875*N/80 on average. We didn't consider animals as simple as nematodes in the MWP because we didn't think that the methods were robust for that type of animal. (See above.) But say you think there's a 0.5% chance of sentience for nematodes. Then, discounting by that probability, the sentience-adjusted welfare range would have been approximately 0.005*0.875*N/80. If the average model had 5 proxies that are as simple as "any pain-averse behavior" and we gave "likely yes" to nematodes on all five, that would generate a mean welfare range of 0.005*0.875*5/80 = 0.00027. Again, we don’t endorse using the MWP for animals with that small a probability of sentience, but 0.00027 isn’t a particularly high welfare range. (And as we've said many times, we’re just talking about hedonic capacity, not “all things considered moral weights,” which don’t assume hedonism. That number would be lower still.)
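For anyone who wants to trace the arithmetic, here's a minimal sketch of the calculation above, using the same stipulated figures (80 total proxies, a 0.875 score for "likely yes," five simple proxies, and a 0.5% probability of sentience):

```python
# Worked version of the back-of-the-envelope numbers above. All inputs are the
# stipulated figures from the example, not an endorsed application of the MWP
# to nematodes.
TOTAL_PROXIES = 80          # total proxies assumed in the example
LIKELY_YES_SCORE = 0.875    # score assigned to a "likely yes" judgment
P_SENTIENCE = 0.005         # stipulated 0.5% probability of sentience
N_SIMPLE_PROXIES = 5        # proxies as simple as "any pain-averse behavior"

# Welfare range conditional on sentience: only the simple proxies get "likely yes".
conditional_range = LIKELY_YES_SCORE * N_SIMPLE_PROXIES / TOTAL_PROXIES

# Discount by the probability of sentience.
adjusted_range = P_SENTIENCE * conditional_range

print(round(conditional_range, 5))  # 0.05469
print(round(adjusted_range, 5))     # 0.00027
```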
Should we find funding for a second version of the project, we’re likely to take a different approach to aggregating the proxies to produce welfare ranges, aggregating welfare ranges across models, and communicating the results. Still, we hope the first version of the MWP contributes to more informed and systematic thinking about how to prioritize among different interventions.