www.jimbuhler.site
Also on LessWrong and Substack, with different essays.
"On the margin, it is better for animals to work on the transition to AGI going well, than directly working on AI for animal welfare"
I'm worried everyone will just agree that this seems unlikely. That's a very high bar.
"AGI which doesn't cause human extinction or disempowerment will value animal welfare"
I think we don't care about whether it "values animal welfare". We care about what happens to animals. There are many very plausible worlds where these two are uncorrelated (just like in ours, where people have never valued AW as highly as they do now, and yet things have never been worse for farmed animals, especially the smaller ones).
"Without extra animal-focused work, even aligned superintelligence would be bad for non-human animals"
That's my favorite version, but I'm worried it invites everyone to just agree on "we should have some extra animal-focused work, anyway" and not red-team each other deeply enough.
So here's a minimal version I propose: AI safety work that helps humans also helps other animals, to some extent.
(The "to some extent" is optional. I added it to invite people to think about whether AIS helps other animals at all, and not just all agree over the uncontroversial and boring claim that "AIS helps humans more than animals".)
I like this minimal formulation because
Thanks for asking us, Toby! Looking forward to this debate week :)
I assume the most important reason is that it is something that most people close to them do. Likewise, I think most people prioritise animals with a higher probability of sentience, like chickens, over shrimp because that is what most people close to them do.
Interesting. I think there's something to this analogy, though ofc the social pressure to put your seatbelt on is far higher than that to prioritize chickens over shrimp.
I guess [their motivation] has little to do with the actual probability of sentience of the animals in question.
Yeah, maybe they just rationalize their motivations with moral weight arguments while their real drive is something else (see Simler & Hanson 2018). And highlighting potential biases we have might be helpful. On the other hand, if p(sentience) is the reason people give (even if it might not be their real motivation deep down), you may wanna mainly stick to red-teaming its importance as a potential crux (by, e.g., red-teaming Clatterbuck and Fischer) anyway. I generally find this to be the most productive approach. People rarely update just from noticing or being reminded of a bias they may have.
I guess most people see voting as fulfilling their duty to improve society.
That also seems part of the picture, yeah! And notice that this bolsters my broader point that it might not be about EV max and that there might be no inconsistency between voting and being difference-making risk-averse.
I wonder to what extent people donate to interventions targeting animals which are more likely to be sentient to boost the probability of increasing welfare. People routinely take actions which are super unlikely to actually matter
This position, which many animal advocates hold (even if only implicitly), was indeed rationalized/explained with difference-making risk aversion by Clatterbuck and Fischer (2025). And in this case, p(sentience), and moral weights more broadly, do seem important.
I think it's very plausible people are inconsistent in how difference-making risk averse they are for different things. However, let me play devil's advocate:
(Tangential but I guess from the above that you think the following is not another example where MNB is sensitive to the individuation of normative views, and I'd like to understand why. Nw at all if you don't have the time to reply, tho.)
Antonia found an intervention that reduces overall animal suffering in the near-term, but she's not sure which is true between
Brian comes along and says he agrees with the above and subdivides L this way:
Antonia shares Brian's above best guesses and normative uncertainty. They both totally agree. The only difference is that Brian specified normative sub-views.
Now, say Nuutti joins the party, agrees with these two, but recategorizes things this way:
The MNB sceptic would say that Antonia grouping L1-3 together to form L is just as arbitrary as Nuutti grouping L2, L3, and N together to form O.[1]
Is your response: The former seems less arbitrary because
With the consequentist-bracketing version of the individuation problem I present here, the bracketer can appeal to an "only value locations that have been identified can be bracketed in" principle. That saves them if this principle is sound. Here, it doesn't save them: the normative theories have been identified in both cases.
The idea that the unpleasantness of pain increases superlinearly with its intensity (i.e. an 8/10 on the pain scale is more than twice as bad as a 4/10).
Yeah... I wish we would just say that the 4/10 is actually lower than 4, and have these scores directly track what you mean by "unpleasantness", since that is what we care about. But that's not how people use the /10 scale, unfortunately. And that's understandable: if they did, they would seldom say that they're suffering above a 1/10.[1]
And yes. When researchers/people assign welfare ranges, they think they're tracking "unpleasantness", but I also suspect they are actually tracking what you mean by "intensity" to a large extent, which may lead to very misguided cross-species welfare tradeoffs. I am extremely skeptical of the following counter-view you describe:
If a researcher judges an animal to be at 10% of its capacity, they simply mean 1/10 as bad as its worst state — there's no question about whether 100% is "really" 10x worse, because that's just what the numbers mean by construction.
Maybe that's what they mean, but I suspect their estimates are deeply biased by the "unpleasantness"/"intensity" confusion.
To be clear, though, I don't want people to take away that we should care less about insects and shrimp. There are so many other considerations. If anything, this should make us less confident in precise-ish moral weight estimates (and maybe look for projects robust to this uncertainty).
That's a very important problem you raise! Thank you for this. :)
Great points from you here and from @Mia Fernyhough in another thread! What about in countries where animal advocacy is (almost) nonexistent and where the counterfactual is probably not cage-free, but no change at all? Curious what the two of you (and others) think. I know this does not address all the limitations you raise, but maybe the most crucial ones?
I mean, only if the rewilding projects increase the overall number of (welfare-range-adjusted) wild animals. This would certainly be the case for rewilding projects that introduce (more) life in dead-ish zones. But the rewilding examples you happen to discuss (and those commonly discussed by others, especially outside of EA)[1] are not of this type. They're about introducing species into a pre-existing ecosystem, and I guess you would agree that such projects don't clearly "expand and intensify" nature, overall.
It is actually not clear to me how serious/common rewilding projects of the former kind are, how worried WAW advocates should be, and what to do about them.
Nice post! :)
See, e.g., Tremlett (2024) and Ireland (2024).