I'm a Senior Researcher at Rethink Priorities, a Professor of Philosophy at Texas State University, a Director of the Animal Welfare Economics Working Group, the Treasurer for the Insect Welfare Research Society, and the President of the Arthropoda Foundation. I work on a wide range of theoretical and applied issues related to animal welfare. You can reach me here.
Thanks for your question, Nathan. We were making programmatic remarks, and obviously much more would need to be said to defend those claims in any detail. Moreover, we don't mean to endorse every claim in the articles we linked. However, we do think the worries we mentioned are reasonable ones to have: many EAs can probably think of their own examples of people engaging in motivated reasoning, or being wary, for social reasons, about what evidence they share. So we hope that's enough to motivate the general thought that we should take uncertainty seriously in our modeling and deliberations.
Good question! Re: the Moral Weight Project, perhaps the biggest area of impact has been animal welfare economics, where having a method for making interspecies comparisons is crucial for benefit-cost analysis. Many individuals and organizations have also reported that our work updated their views on the importance of animals, and of invertebrates specifically. We’ve seen something similar with the CCM tool, with responses ranging from positive feedback and enthusiasm to concrete changes in users’ decisions. There’s more we can say privately than publicly, however, so please feel free to get in touch if you’d like to chat!
Great (and difficult!) question, Jordan. I (Bob) am responding to this one for myself and not for the team; others can chime in as they see fit. The biggest issue I see in EA cause prioritization is overconfidence. It’s easy to think that because there are some prominent arguments for expected value maximization, we don’t need to run the numbers to see what happens if we have a modest level of risk aversion. It’s easy to think that because the future could be long and positive, the EV calculation is going to favor x-risk work. Etc. I’m not anti-EV; I’m not anti-x-risk. However, I think these are clear areas where people have been too quick to assume that they don’t need to run the numbers because it's obvious how they'll come out.
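To make the "run the numbers" point concrete, here's a minimal sketch in Python. The payoffs, probabilities, and the square-root utility function are all invented for illustration (this isn't our actual modeling, and real analyses use more sophisticated forms of risk aversion); the point is just that even a modest dose of risk aversion can flip a ranking that straight expected value delivers.

```python
import math

# Toy options (numbers made up for illustration):
sure_thing = (1.0, 500)        # certainty of helping 500
longshot = (0.001, 1_000_000)  # 0.1% chance of helping 1,000,000

def expected_value(p, payoff):
    """Straight EV: probability times payoff."""
    return p * payoff

def expected_utility(p, payoff, u=math.sqrt):
    """A modestly risk-averse evaluation: concave utility over payoffs."""
    return p * u(payoff)

print(expected_value(*sure_thing), expected_value(*longshot))
# 500.0 vs. 1000.0 -> the longshot wins under straight EV

print(round(expected_utility(*sure_thing), 1), expected_utility(*longshot))
# 22.4 vs. 1.0 -> the ranking flips under even mild risk aversion
```

Again, nothing hangs on these particular numbers; the takeaway is that "which option wins" can be surprisingly sensitive to assumptions we often don't bother to state.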
I’m a “chickens and children” EA, having come to the movement through Singer’s arguments about animals and global poverty. I still find EA most compelling, both philosophically and emotionally, when it focuses on areas where it’s clear that we can make a difference. However, the more I grapple with the many uncertainties involved in resource allocation, the more sympathetic I become to diversification, including devoting significant resources to work that doesn’t personally appeal to me at all. So you probably won’t catch me pivoting to AI governance anytime soon, but I’m glad others are doing it.
Thanks for these questions, Toby!
Excellent question, Ian! At a high level, I’d say that moral uncertainty has made me much more inclined to care about having an overlapping consensus of reasons for any important decision. Put another way: I want a diverse set of considerations to point in the same direction before I’m willing to make a big change. That’s how I got into animal work in the first place: it’s good for the animals, good for human health, good for long-term food security, good for the environment, and so on. Moral uncertainty has probably shaped my decisions in lots of other ways too, but that’s the first one that comes to mind!
I'm encouraged by your principles-first focus, Zach, and I'm glad you're at the helm of CEA. Thanks for all you're doing.