Currently working as a Community Associate at Center on Long-Term Risk and as an independent s-risk researcher. Former scholar at SERI MATS 4.1 (multipolar stream), former summer research fellow at Center on Long-Term Risk, and former intern at Center for Reducing Suffering, Wild Animal Initiative, and Animal Charity Evaluators. Former co-head of Haverford Effective Altruism.
Research interests: • AI alignment • animal advocacy from a longtermist perspective • acausal interactions • artificial sentience • commitment races • s-risks • updatelessness
Feel free to contact me for whatever reason! You can set up a meeting with me here.
I've noticed in my work that some people assume that "moral circle expansion" is a benefit of some animal advocacy campaigns (e.g. fish welfare) and not others (e.g. dog welfare).
I think the main difference is that most people don't consider fish worthy of significant moral concern, viewing them more as living objects. With companion animal species, by contrast, it is understood in many communities that their interests are very important. That doesn't prevent serious welfare problems involving them, but I think those problems are usually a symptom of insufficient awareness of and action on those concerns, rather than a denial that the concerns are valid. So if we hold people's current valuation of animals' interests fixed but consider a world in which we are much better able to put our values into effect (as may be the case in some futures), companion animal species, but not fish, would hopefully be fairly well off. Therefore, if values might get locked in at the level of which species matter, it seems important that we act to extend concern to as many species as possible (setting aside backfire risk).
Caveats to the above:
As you point out, there is also the potential for secondary transfer effects, where expanding concern to one additional species or type of entity increases concern for others. My impression is that the significance of this effect for nonhumans is debatable, but it has been studied a little in the psychology literature (see, for example, this review).
That said, I probably prioritise companion animal welfare more than most EAs! Relative to farmed animals, I think humanity may have somewhat more of a deontological duty to companion animals; we can be more confident that companion animal species are sentient in most cases; and advocacy for companion animal species seems less likely to backfire. I also care about it more from a partial perspective. Given the current distribution of resources in animal advocacy, I'd rather marginal resources go to farmed/wild animals unless there's a particularly good opportunity to help companion animal species, but I think I endorse some level of disproportionality in spending (though a good deal less than the current level).
I agree that veg*n retention is important, thanks for writing this up!
Another reason for concern here is that ex-veg*ns might be a significant source of opposition to animal advocacy, because they are motivated to express a sense of disillusionment/betrayal (e.g. see https://www.reddit.com/r/exvegans/) and because their stories can provide powerful support to other opponents of animal advocacy.
Note that the Faunalytics study finds that a substantial share (37%) of ex-vegetarians are interested in trying again in the future, which bodes well for future outreach to them and somewhat mitigates my concern above.
There's another very large disadvantage to speeding up research here: once we have digital minds, it might be fairly trivial for bad actors to create many instances of minds in states of extreme suffering (for reasons such as sadism). This seems like a dominant consideration to me, to the extent that I'd support any promising non-confrontational efforts to slow down research into whole brain emulation (WBE), despite the benefits to individuals that digital immortality would bring.
I also think digital people (especially those whose cognition is deliberately modified from that of baseline humans, e.g. to increase "power") are likely to act in unpredictable ways, whether because of errors in the emulation process or because of the very different environment they find themselves in relative to biological humans. So digital people could actually be less trustworthy than biological people, at least in the earlier stages of their deployment.
I have some draft reports on this matter (one on longtermist animal advocacy and one on work to help artificial sentience), written during two internships, which I can share with anyone doing relevant work. I really ought to finish editing them and post them soon! In the meantime, here are some takeaways. Apologies in advance for listing them without the supporting argumentation, but I felt it would probably be helpful on net to do so.
In terms of who is doing relevant work, I consider Center for Reducing Suffering, Sentience Institute, Wild Animal Initiative, and Animal Ethics to be especially relevant. But I do think most near-term-oriented effective animal advocacy organisations are also doing work that is helpful in the long term, especially any that are positively influencing attitudes towards alternative foods or expanding the reach of the animal movement to neglected regions and groups without creating significant backlash. The same goes for meta-EAA organisations like ACE or Animal Advocacy Careers.
Tobias Baumann's recent post How the animal movement can do even more good is quite relevant here, as is his earlier Longtermism and animal advocacy.
I'm also very pleased to be the third James out of four commenters to respond so far :)
Pablo Stafforini has a great bibliography of articles on wild animal welfare, which includes some earlier work from outside the EA space.
Thanks for checking - it's not, as the CRS S-risk Introductory Fellowship doesn't go into sufficient detail on some of the risks that CLR prioritises. I've added this to the seminar EOI form now.
I think the CRS S-risk Introductory Fellowship and the CLR Foundations Course are quite complementary. We're taking a more targeted, object-level approach, mostly discussing a few specific risks that CLR prioritises; we won't spend significant time on the broader overview of s-risks, and the reasons for prioritising them, that the CRS fellowship focuses on.