Bio

Participation (4)

www.jimbuhler.site

Also on LessWrong and Substack, with different essays.

Sequences (4)

On the sign of X-risk reduction
On risks from malevolence
On Cluelessness
What values will control the Future?

Comments (176)

Topic contributions (4)

Rewilding projects would increase the total amount of suffering by expanding and intensifying landscapes where such suffering is endemic.

I mean, only if the rewilding projects increase the overall number of (welfare-range-adjusted) wild animals. This would certainly be the case for rewilding projects that introduce (more) life into dead-ish zones. But the rewilding examples you happen to discuss (which are also the ones commonly discussed by others, especially outside of EA)[1] are not of this type. They're about introducing species into a pre-existing ecosystem, and I guess you would agree that these projects don't clearly "expand and intensify" nature, overall.

It is actually not clear to me how serious/common rewilding projects of the former kind are, how worried wild animal welfare (WAW) advocates should be, and what to do about them.

Nice post! :)

  1. See, e.g., Tremlett (2024) and Ireland (2024).

Do you believe that human welfare dominates the welfare of the wild animals that you think are sentient? (I wonder why wildlife conservation isn't your priority if you assume WAW is positive.)

what is not yet published is that it is looking like the ocean fertilization effect will not be as strong as we had originally estimated.

Has this been published since then? Would love to read this :)

"On the margin, it is better for animals to work on the transition to AGI going well, than directly working on AI for animal welfare"

I'm worried everyone will just agree that this seems unlikely. That's a very high bar.

"AGI which doesn't cause human extinction or disempowerment will value animal welfare"

I think we don't care about whether it "values animal welfare". We care about what happens to animals. There are many very plausible worlds where these two are uncorrelated (just like in ours, where people have never valued AW as highly as they do now, and yet things have never been as bad for farmed animals, especially the smaller ones).

"Without extra animal-focused work, even aligned superintelligence would be bad for non-human animals"

That's my favorite version, but I'm worried it invites everyone to just agree on "we should have some extra animal-focused work, anyway" and not red-team each other deeply enough.

So here's a minimal version I propose: AI safety work that helps humans also helps other animals, to some extent.

(The "to some extent" is optional. I added it to invite people to think about whether AIS helps other animals at all, and not just all agree over the uncontroversial and boring claim that "AIS helps humans more than animals".)

I like this minimal formulation because 

Thanks for asking us, Toby! Looking forward to this debate week :)

I assume the most important reason is that it is something that most people close to them do. Likewise, I think most people prioritise animals with a higher probability of sentience like chickens instead of shrimps because it is what most people close to them do.

Interesting. I think there's something to this analogy, though ofc the social pressure to put your seatbelt on is far higher than that to prioritize chickens over shrimp. 

I guess [their motivation] has little to do with the actual probability of sentience of the animals in question.

Yeah, maybe they just rationalize their motivations with moral weight arguments while their real drive is something else (see Simler & Hanson 2018), and highlighting potential biases we have might be helpful. On the other hand, if p(sentience) is the reason people give (even if it might not be their real motivation deep down), you may wanna mainly stick to red-teaming its importance as a potential crux (by, e.g., red-teaming Clatterbuck and Fischer) anyway. I generally find this to be the most productive approach. People rarely update just from noticing or being reminded of a bias they may have.

I guess most people see voting as fulfilling their duty to improve society.

That also seems part of the picture, yeah! And notice that this bolsters my broader point that it might not be about EV max and that there might be no inconsistency between voting and being difference-making risk-averse.

I wonder to what extent people donate to interventions targeting animals which are more likely to be sentient to boost the probability of increasing welfare. People routinely take actions which are super unlikely to actually matter

This position, which many animal advocates hold (even if only implicitly), was indeed rationalized/explained with difference-making risk aversion by Clatterbuck and Fischer (2025). And in this case, p(sentience), and moral weights more broadly, do seem important.

I think it's very plausible people are inconsistent in how difference-making risk averse they are for different things. However, let me play devil's advocate:

  1. Seatbelts. One could argue this is just a habit they don't bother questioning, not risk-neutral EV max getting mugged by small probabilities.
  2. Voting. I would be surprised if many of the people who prioritize chickens because of risk aversion do vote. If they do, I agree this seems inconsistent. But, fwiw, if they were forced to pick a lane, I think most would drop voting rather than their difference-making risk aversion.

(Tangential, but I gather from the above that you think the following is not another example where MNB is sensitive to the individuation of normative views, and I'd like to understand why. No worries at all if you don't have the time to reply, tho.)

Antonia found an intervention that reduces overall animal suffering in the near term, but she's not sure which of the following is true:

  • L) the long-term effects dominate, but she doesn't know what they overall imply, and she can't ignore them (so she's clueless).
  • N) neartermism thanks to bracketing out the long-term effects (so she should intervene).

Brian comes along, says he agrees with the above, and subdivides L this way:

  • L) the long-term effects dominate, but he doesn't know what they overall imply, and he can't ignore them (so he's clueless).
    • L1) same, but he trusts his longtermist best guess that the intervention is bad, assuming pure negative utilitarianism (so he should not intervene).
    • L2) same, but assuming negative-leaning utilitarianism (so he should not intervene).
    • L3) he trusts his longtermist best guess that the intervention is good, assuming classical utilitarianism (so he should intervene).
  • N) neartermism thanks to bracketing out the long-term effects (so he should intervene).

Antonia shares Brian's above best guesses and normative uncertainty. They both totally agree. The only difference is that Brian specified normative sub-views.

Now, say Nuutti joins the party, agrees with these two, but recategorizes things this way:

  • L1) stubborn precise EV despite imprecision arguments + negative utilitarianism (we should not intervene)
  • O) all other plausible normative views (in sum, we're clueless)

The MNB sceptic would say that Antonia's grouping of L1-3 together to form L is just as arbitrary as Nuutti's grouping of L2, L3, and N together to form O.[1]

Is your response: The former seems less arbitrary because 

  • L1-3 share key epistemic principles and/or a decision theory that make L an actual normative view (even though the moral theory part is imprecise). In contrast, L2, L3, and N have nothing in common, normatively, that justifies grouping them against L1. It'd be too arbitrary to consider N + L2 + L3 a normative view.
  • Normative views seem to be the most legitimate units to bracket over (e.g., more legit than empirical views). Making a comprehensive case for/against this would be nice, but I give some reasons in favor in this section.
  1. With the consequentist-bracketing version of the individuation problem I present here, the bracketer can appeal to an "only value locations that have been identified can be bracketed in" principle. This saves them if that principle is sound. Here, it doesn't save them: the normative theories have been identified in both cases.

The idea that the unpleasantness of pain increases superlinearly with its intensity (i.e. an 8/10 on the pain scale is more than twice as bad as a 4/10).

Yeah... I wish we would just report that 4 as something lower than 4, directly tracking what you mean by "unpleasantness" with these scores, since that's what we care about. But that's not how people use the /10 scale, unfortunately, and that's understandable. If they did use it that way, they would seldom say that they're suffering above a 1/10.[1]

And yes. When researchers/people assign welfare ranges, they think they're tracking "unpleasantness", but I also suspect they are actually tracking what you mean by "intensity" to a large extent, which may lead to very misguided cross-species welfare tradeoffs. I am extremely skeptical of the following counter-view you describe:

If a researcher judges an animal to be at 10% of its capacity, they simply mean 1/10 as bad as its worst state — there's no question about whether 100% is "really" 10x worse, because that's just what the numbers mean by construction.

Maybe that's what they mean, but I suspect their estimates are nonetheless deeply biased by the "unpleasantness"/"intensity" confusion.
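As a toy illustration of how much this could matter (the quadratic functional form below is purely made up for the example, not anyone's actual estimate): suppose reported intensity $I \in [0, 10]$ maps to unpleasantness via

$$U(I) = I^{\gamma}, \qquad \gamma > 1.$$

With $\gamma = 2$, an 8/10 is $(8/4)^2 = 4$ times as unpleasant as a 4/10 rather than twice, and an animal judged to be at 10% of its intensity capacity sits at only $0.1^2 = 1\%$ of its unpleasantness range. So reading "10% of capacity" as "1/10 as bad" would overstate how bad the state is by an order of magnitude.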

To be clear, though, I don't want people to take away that we should care less about insects and shrimp. There are so many other considerations. If anything, this should make us less confident in precise-ish moral weight estimates (and maybe look for projects robust to this uncertainty).

That's a very important problem you raise! Thank you for this. :)

  1. I guess that's why the /10 scale measures what you mean by "intensity", even though I agree with Toby that it's not clear what it's even supposed to be.

Great points from you here and from @Mia Fernyhough in another thread! What about in countries where animal advocacy is (almost) nonexistent and where the counterfactual is probably not cage-free, but no change at all? Curious what the two of you (and others) think. I know this does not address all the limitations you raise, but maybe the most crucial ones?

The chatbot completely misses DiGiovanni's point fwiw aha. Literally all of the objections it raises are explicitly addressed in what I've linked or elsewhere in his sequence. :)

No pressure to read anything, though. It's a thorny topic and understanding all the complex details takes time.
