MichaelStJules

Grantmaking contractor in animal welfare
12,327 karma · Working (6–15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Want to leave anonymous feedback for me, positive or negative? https://www.admonymous.co/michael-st-jules

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments (2575)

Topic contributions (12)

Against option 3, you write:

There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that an AMF donation saves lives, and I’m clueless about its long-term effects overall. Yet I could just as well say I’m confident that there’s some nontrivially likely possible world containing an astronomical number of happy lives, which the donation makes less likely via potentially increasing x-risk, and I’m clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won’t give us much action guidance.

Wouldn't you also say that the donation makes these happy lives more likely on some elements of your representor, via potentially decreasing x-risk? So then they're neither made determinately better off nor determinately worse off in expectation, and we can (maybe) ignore them.

Maybe you need some account of transworld identity (or counterparts) to match these lives across possible worlds, though.

I haven't read much of this post, so just call me out if this is totally off base, but I suspect you're treating events as more "independent" than you should.

Relevant: A nuclear war forecast is not a coin flip by David Johnston.

I also illustrated in a comment there:

On the other extreme, we could imagine repeatedly flipping a coin with only heads on it, or a coin with only tails on it, without knowing which, though we think it's probably the heads-only one. Of course, this goes too far, since a single coin flip outcome is enough to find out which coin we were flipping. Instead, we could imagine two coins, one with only heads (or extremely biased towards heads) and the other a fair coin, and we lose if we get tails. The more heads we get, the more confident we should be that we have the heads-only coin.

To translate this into risks: we don't know what kind of world we live in or how vulnerable it is to a given risk, and the probability that the world is vulnerable to the given risk at all is an upper bound on the probability of catastrophe. As you suggest, the more time goes on without catastrophe, the more confident we should be that we aren't so vulnerable.
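
Here's a minimal sketch of the two-coin update, with made-up numbers; the prior, and reading each "heads" as a period without catastrophe, are just illustrative assumptions:

```python
# A minimal sketch of the two-coin example (illustrative numbers only).
# "Safe" coin: always heads (the world isn't vulnerable to the risk).
# "Fair" coin: tails means catastrophe (the world is vulnerable).

def posterior_safe(prior_safe: float, heads_observed: int) -> float:
    """P(safe coin | only heads so far), by Bayes' rule."""
    like_safe = 1.0                    # the safe coin always lands heads
    like_fair = 0.5 ** heads_observed  # the fair coin: each heads has prob 1/2
    return (prior_safe * like_safe /
            (prior_safe * like_safe + (1 - prior_safe) * like_fair))

prior = 0.7  # hypothetical prior that we have the heads-only coin
for n in [0, 1, 5, 10]:
    p_safe = posterior_safe(prior, n)
    p_vulnerable = 1 - p_safe           # upper bound on P(catastrophe next flip)
    p_catastrophe = p_vulnerable * 0.5  # actual P(tails next flip)
    print(f"after {n:2d} heads: P(safe)={p_safe:.3f}, "
          f"P(catastrophe next)={p_catastrophe:.3f} <= {p_vulnerable:.3f}")
```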

Hi Nicolas, thanks for commenting!

Whether or not you think you can add separate components seems pretty important for the hedging approach. 

Indeed, if a portfolio dominates the default on each individual component, then some interventions in the portfolio must dominate the default overall.[1] So if you can compare interventions based on their total effects, the existence of such portfolios implies that some interventions dominate the default.

Ah, good point. (You're assuming the separate components can be added directly (or with fixed weights, say).)

I guess the cases where you can't add directly (or with fixed weights) involve genuine normative uncertainty or incommensurability. Or maybe some cases of two-envelopes problems, where it's too difficult or unjustifiable to set a unique common scale and use the Bayesian solution.

In practice, I may have normative uncertainty about moral weights between species.

 

Intuitively then, you would prefer investing in one of those interventions over hedging?

If you're risk neutral, probably. Maybe not if you're difference-making risk averse. Perhaps helping insects is robustly positive in expectation, but highly likely to have no impact at all. Then you might like a better chance of positive impact, while maintaining 0 (or low) probability of negative impact.
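
As a toy illustration of that preference, here's a sketch using a concave utility over the difference made, which is one simple way to model difference-making risk aversion (not the only account); the lottery numbers are made up:

```python
import math

def score(lottery, a=0.05):
    """Toy difference-making risk-averse score: expected concave utility
    of the difference made. lottery = [(probability, difference), ...]."""
    return sum(p * (1 - math.exp(-a * d)) for p, d in lottery)

insects  = [(0.01, 100.0), (0.99, 0.0)]  # small chance of a large impact
reliable = [(1.00, 1.0)]                 # sure, modest impact (same expected value)

print(score(insects))   # ~0.0099
print(score(reliable))  # ~0.0488 -> preferred despite equal expected value;
                        # neither lottery risks any negative impact
```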

 

Given the above, a worry I have is that the hedging approach doesn't save us from cluelessness, because we don't have access to an overall-better-than-the-default intervention to begin with.

For my illustration, that's right.

However, my illustration treats the components as independent, so that you can get the worst case on each of them together. But this need not be the case in practice. You could in principle have interventions A and B, both with ranges of (expected) cost-effectiveness [-1, 2], but whose sum is exactly 1: just let the cost-effectiveness of B be 1 minus the cost-effectiveness of A. Having things cancel out so exactly and ending up with a range that's a single value is unrealistic, but I wonder if we could at least get a positive range this way.
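
A quick numerical sketch of that anticorrelated case (the grid over A's possible values is just for illustration):

```python
import numpy as np

a_values = np.linspace(-1.0, 2.0, 7)  # A's expected cost-effectiveness across the representor
b_values = 1.0 - a_values             # B is defined as 1 minus A, so its range is also [-1, 2]

# Treating the components as independent (taking worst cases separately)
# suggests the portfolio could be as bad as -2 or as good as 4:
print(a_values.min() + b_values.min(), a_values.max() + b_values.max())  # -2.0 4.0

# But evaluating A + B within each element of the representor, the sum is
# exactly 1 everywhere, so the portfolio is robustly positive:
totals = a_values + b_values
print(totals.min(), totals.max())  # 1.0 1.0
```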

 

(Although a complication I haven't thought about is that you should compare interventions with one another too, unless you think the default has a privileged status.)

Ya, the default doesn't seem privileged if you're a consequentialist. See this post.

If using precise credences, then I'd be a strong longtermist (probably focusing on existential risks of some kind) or chase infinities. I haven't thought a lot from this perspective about practical donation recommendations, if I'm assuming not suffering-focused. If suffering-focused (like I actually am), then probably CLR.

Yes, absolutely right about 0 being possible and reasonably likely. Maybe I'd say "average welfare conditional on having any welfare at all". I only added that so that X% likely to be negative meant (100-X)% likely to be positive, in order to simplify the argument.

A Pascal's mugging by nematodes? Nematodes as utility monsters?

Pascal's bugging and the Rebugnant Conclusion (Sebo, 2024). :P

 

Interested to hear from Insect Welfare and Wild Animal Welfare advocates why they disagree that nematodes are the primary moral concern of planet Earth.

I'm sympathetic to difference-making risk aversion and difference-making ambiguity aversion (although see here) and assign nematodes a quite low probability of mattering much at all to me, low enough for now that I'm inclined to ignore them altogether (and what would have gone to nematodes instead goes to mitigating s-risks). Mites, springtails, copepods and insect larvae seem substantially more likely to matter to me, based on my beliefs about their capacities.

Still, I'd rather not go 100% on invertebrates either, also due to my difference-making sympathies. I'd treat this like normative uncertainty and use a kind of bucket approach, like the Property Rights approach and hedging, to handle normative uncertainty about difference-making and approaches to dealing with uncertainty, about the nature of consciousness and moral patienthood and how to deal with it (although also see this), and about aggregation. So, roughly in practice, based on the probabilities of making a difference, probabilities of moral patienthood, and attitudes towards risk and aggregation, I have a humans bucket; a mammals and birds bucket; a fish bucket; a shrimp and insects bucket; a mites, springtails and copepods bucket; and an s-risks bucket.

Another potentially useful takeaway is that the interventions Vasco considered, or at least diet change interventions like Veganuary and School Plates, are not robustly positive in expectation when considering only the near-term animal effects. So why would we support them?

These interventions don't seem justified by their direct cost-effectiveness unless we have adequate reason to single out those direct effects and ignore or discount the effects on wild terrestrial invertebrates. Absent such a reason, we'd have to appeal to even more indirect or longer-term considerations (e.g. moral circle expansion, space colonization and s-risks).

If you discounted nematodes by 10x or more, then SWP's HSI would come out ahead of, or roughly tied with, HIPF, right?

For the comparison to the Shrimp Welfare Project's Humane Slaughter Initiative, how long are you assuming the stunners are (counterfactually) used for? If I recall correctly, some prior estimates assumed only 1 year, which seems very conservative and would probably make the comparisons to other opportunities here unfair.
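
To see why the assumed duration matters, here's a toy calculation; the cost, throughput and linear-scaling assumption are all made up:

```python
# Toy calculation: cost-effectiveness scales linearly with the assumed
# counterfactual duration of stunner use. All numbers are made up.
cost_per_stunner = 50_000        # hypothetical cost per stunner ($)
shrimp_per_year = 100_000_000    # hypothetical shrimp stunned per stunner-year

for years in [1, 3, 5, 10]:
    shrimp_per_dollar = shrimp_per_year * years / cost_per_stunner
    print(f"{years:2d} years of use -> {shrimp_per_dollar:,.0f} shrimp per $")
```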

It's not so much that there's a specific threshold away from 50%; it's more that if you're wildly uncertain and it's highly speculative, then rather than assigning a single precise probability like 55%, you should use a range of probabilities, say 40% to 70%. This range has values on either side of 50%. Then (see the rough numerical sketch after the list):

  1. If you were difference-making ambiguity averse,[1] then both increasing their populations would look bad (possibly more bad lives in expectation) and decreasing their populations would look bad (possibly fewer good lives in expectation). You'd want to minimize these effects, by avoiding interventions with such large predictable effects on wild animal population sizes, or by hedging.
  2. If you were ambiguity averse (not difference-making), then I imagine you'd want to decrease their populations. The worst possibilities for animals in the near-term are those where wild invertebrates are sentient and have horrible lives in expectation and you'd want to make those less bad. But s-risks (and especially hellish existential risks) would plausibly dominate instead, if you can robustly mitigate them.
  3. On a different account dealing with imprecise credences, when we reduce their populations, you might say these wild animals are neither better off in expectation (in case they have good lives in expectation), nor are they worse off in expectation (in case they have bad lives in expectation), so we can ignore them, via a principle that extends the Pareto principle (Hedden, 2024).

(I'm assuming we're ruling out an average welfare of exactly 0 or assigning that negligible probability, EDIT: conditional on sentience/having any welfare at all.)
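
Here's the rough numerical sketch of that sign flip; the ±1 average welfare levels and the grid over the probability range are purely illustrative:

```python
import numpy as np

# Each element of the representor is a probability that a given wild animal's
# life is net negative, spanning the range [0.4, 0.7] from above.
p_negative = np.linspace(0.40, 0.70, 6)
welfare_if_negative, welfare_if_positive = -1.0, 1.0  # stipulated averages

# Expected average welfare under each element of the representor:
ev = p_negative * welfare_if_negative + (1 - p_negative) * welfare_if_positive
print(ev)  # runs from +0.2 down to -0.4: the sign flips across the representor

# So reducing these populations is better in expectation on some elements and
# worse on others: neither determinately better nor determinately worse.
print((ev > 0).any() and (ev < 0).any())  # True
```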

[1] On standard accounts of difference-making ambiguity aversion, which I think are problematic. I'm less sure about the implications of other accounts. See my 2024 post.
