
Michael St Jules 🔸

Grantmaking contractor in animal welfare
12573 karma · Joined · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments (2610)

Topic contributions (15)

My sense is that if you're weighing nematodes, you should also consider things like conscious subsystems or experience sizes that could tell you larger-brained animals have thousands or millions of times more valenced experiences or more valence at a time per individual organism. For example, if a nematode realizes some valence-generating function (or indicator) once with its ~302 neurons, how many times could a chicken brain, with ~200 million neurons, separately realize a similar function? What about a cow brain, with 3 billion neurons?

Taking expected values over those hypotheses and different possible scaling laws tends, on credences I find plausible, to lead to expected moral weights scaling roughly proportionally with the number of neurons (see the illustration in the conscious subsystems post). But nematodes (and other wild invertebrates) could still matter a lot even under proportional weighting, e.g. as you found here.
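To make the expectation concrete, here's a minimal sketch; the neuron counts are the rough figures above, but the candidate scaling laws and credences are made up purely for illustration:

```python
# Toy expected moral weight, mixing over scaling-law hypotheses.
# Each hypothesis says weight scales as neurons**k; credences are hypothetical.
NEMATODE, CHICKEN = 302, 2e8  # approximate neuron counts from above

hypotheses = [
    (0.0, 0.3),  # k = 0: equal weight regardless of brain size
    (0.5, 0.3),  # k = 0.5: square-root scaling
    (1.0, 0.4),  # k = 1: proportional scaling (e.g. conscious subsystems)
]

def expected_weight(neurons):
    """Expected moral weight for a brain of the given size."""
    return sum(credence * neurons**k for k, credence in hypotheses)

ratio = expected_weight(CHICKEN) / expected_weight(NEMATODE)
print(f"chicken : nematode expected weight ratio ~ {ratio:,.0f}")
# ~633,000, close to the raw neuron ratio 2e8/302 ~ 662,000: with any
# non-negligible credence on k = 1, that term dominates both expectations,
# so expected weights end up scaling roughly proportionally with neurons.
```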

Ya, bracketing on its own wouldn’t tell you to ignore a potential group of moral patients just because its probability of sentience is very small. The numbers could compensate. It's more that conditional on sentience, we'd have to be clueless about whether they're made better or worse off. And we may often be in this position in practice.

 

I think you could still want some kind of difference-making view or bounded utility function used with bracketing, so that you can discount extreme overall downsides more than proportionally to their probability, along with extreme upsides. Or do something like Nicolausian discounting, i.e. ignoring small probabilities.
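For instance, a minimal sketch of Nicolausian discounting, with a purely illustrative probability threshold:

```python
# Nicolausian discounting: drop outcomes below a probability threshold
# before taking expectations. The threshold here is an assumption.
THRESHOLD = 1e-6

def discounted_ev(outcomes):
    """Expected value over (probability, value) pairs, ignoring tiny probabilities."""
    return sum(p * v for p, v in outcomes if p >= THRESHOLD)

# A huge payoff at probability 1e-9 no longer dominates the evaluation:
gamble = [(1e-9, 1e15), (1 - 1e-9, -1.0)]
print(discounted_ev(gamble))  # ~ -1.0, whereas the undiscounted EV is ~ +1e6
```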

Great post! :)

 

Third, it’s easy for an org to think it’s helping when it’s hurting: we know so little about how to help that some caution is warranted. I don’t just want to do good in expectation: I want to do good.

FWIW, I think this would count against most animal interventions targeting vertebrates (welfare reforms, reductions in production), and possibly lead to paralysis pretty generally, and not just for animal advocates.

 

If we give extra weight to net harm over net benefits compared to inaction, as in typical difference-making views, I think most animal interventions targeting vertebrates will look worse than doing nothing, considering only the effects on Earth or in the next 20 years, say. This is because:

  1. there are possibly far larger effects on wild invertebrates (even just wild insects and shrimp, but of course also mites, springtails, nematodes and copepods) through land use change and effects on fishing, and huge net harm is possible through harming them, and
  2. there's usually at least around as much reason to expect large net harm to wild animals as there is to expect large net benefit to them, and difference-making gives more weight to the former, so it will dominate (see the sketch below).
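Here's a minimal numerical sketch of point 2, with a made-up harm weight and made-up payoffs:

```python
# Toy difference-making evaluation: net harms relative to inaction get extra
# weight compared to net benefits. All numbers are hypothetical.
HARM_WEIGHT = 2.0  # assumed: net harm counts twice as much as net benefit

def dm_value(outcomes):
    """Difference-making-weighted expectation over (probability, net value) pairs."""
    return sum(p * (HARM_WEIGHT * v if v < 0 else v) for p, v in outcomes)

# A sure benefit of 10 to vertebrates, plus roughly symmetric uncertainty
# about a large effect (+/- 1000) on wild invertebrates:
intervention = [(0.5, 10 + 1000), (0.5, 10 - 1000)]

print(dm_value(intervention))  # -485.0 < 0, so worse than doing nothing
```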

There could be similar stories for the far future and acausally, replacing wild animals on Earth with far future moral patients and aliens. There are also possibilities and effects of which we're totally unaware.

 

That being said, I suspect typical accounts of difference-making lead to paralysis pretty generally for similar reasons. This isn't just a problem for animal interventions. I discussed this and proposed some alternative accounts here.

 

Bracketing can also sometimes help. It's an attempt to formalize the idea that when we're clueless about whether some group of moral patients is made better off or worse off, we can just ignore them and focus on those we are clueful about.

Sounds exciting!

Do you have an estimate you can share of the product's cost per rodent spared? This could help set a lower bound on the potential cost-effectiveness: in roughly the worst case, donors, grantmakers or impact investors buy or subsidize the product for snake owners, similar to SWP buying stunners for shrimp producers.

(Edited: had the comparison of sizes flipped.)

One thought I've just had about this: these "cumulative elasticities" assume demand/price shifts for just one product at a time, and will therefore be too low if used for the effects of people going veg or reducing their consumption of multiple animal products. Here's why:

  1. Suppose one person (or a large group of them) just cuts out chicken, and for simplicity, they don't eat more of anything else to compensate. That reduces the price of chicken. Other people will eat more chicken and less of everything else (including other animal products) to compensate, all else (or at least elasticities) equal. All else equal, it will also disproportionately affect chicken production.
  2. Now, suppose this same person (or the same large group of people) goes vegan. This reduces the prices of all animal products. Instead of others consuming more chicken and less of everything else in response, they consume more animal products all around (all else equal) and less plant-based food. The effect of other consumers on chicken in particular will be smaller than in 1, because they aren't shifting away from other animal products to eat more chicken in particular. So more of the demand shift goes through.

It can also matter whether the elasticities were determined in a model with multiple products (general equilibrium), including with cross-price elasticities of demand, or just a single-product market. In models with multiple products, when you estimate the price elasticity of chicken demand, you're adjusting for other prices and quantities besides chicken's. In a model with just chicken prices and quantities, you aren't adjusting for other products' prices and quantities. These elasticities will differ.
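Here's a toy general-equilibrium sketch of point 2 with two substitute goods; all of the elasticities are invented for illustration:

```python
# Toy log-linear two-good model (chicken, beef): solve for equilibrium price
# changes given demand shocks, then see how much of the chicken demand shift
# "goes through" to chicken quantity. All elasticities are hypothetical.
import numpy as np

eta = 2.0      # supply elasticity (assumed the same for both goods)
e_own = -0.7   # own-price elasticity of demand
e_cross = 0.2  # cross-price elasticity (the goods are substitutes)

# Market clearing in log changes: (eta - e_own) * p_i - e_cross * p_j = shock_i
A = np.array([[eta - e_own, -e_cross],
              [-e_cross, eta - e_own]])

for label, shocks in [("cut chicken only", np.array([-1.0, 0.0])),
                      ("cut chicken and beef", np.array([-1.0, -1.0]))]:
    prices = np.linalg.solve(A, shocks)  # % price changes
    q_chicken = eta * prices[0]          # % change in chicken quantity supplied
    print(f"{label}: {q_chicken / -1.0:.1%} of the chicken shift goes through")

# cut chicken only: 74.5%; cut chicken and beef: 80.0%. There's less
# substitution back into chicken when all animal products get cheaper at
# once, so more of the demand shift sticks.
```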

Great post, thanks for writing!

I buy that individuals should try to pick "policies" and psychologically commit themselves to them, rather than only evaluate actions one at a time. I think this totally makes sense for seatbelts and helmets. However, I'm not sure it requires evaluating actions collectively at a fundamental normative level rather than practically, especially across individuals. I think we can defend wearing seatbelts and helmets with Nicolausian discounting without supporting longtermism or x-risk work to most individuals, even if the marginal x-risk opportunity were similar to the average or best already funded.

In particular, I know that if I don't wear my seatbelt on this car trip based on some logic that isn't very circumstance-specific, I could use similar logic in the future to keep talking myself out of wearing a seatbelt, and those risks would accumulate into a larger risk that could be above the discount threshold. So I should stop myself now to minimize that risk. I should consider the effects of my reasoning and decision now on my own future decisions.
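The arithmetic behind this accumulation, with both numbers invented for illustration:

```python
# Cumulative risk from repeatedly accepting a small per-trip risk.
# Both numbers are hypothetical.
p_per_trip = 1e-6   # per-trip probability of a serious unbelted injury
n_trips = 20_000    # car trips over a driving lifetime

cumulative = 1 - (1 - p_per_trip) ** n_trips
print(f"{cumulative:.2%}")  # ~2%: well above a threshold that ignores 1e-6
```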

However, I don't have nearly as much potential influence over humanity's x-risk strategy (causally or acausally) and the probability of an existential catastrophe. The typical individual has hardly any potential influence.

 

Also, separately, how would you decide who or what is included in the collective? Should we include the very agents creating the problems for us?

FWIW, unless you have reason otherwise (you may very well think some Fs are more likely than others), there's some symmetry here between any function F and the function 1-F, and I think if you apply it, you could say P(F > 1/2) = P(1-F < 1/2) = P(F < 1/2), so P(F < 1/2) ≤ 1/2, and strictly less iff P(F = 1/2) > 0.
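Spelling out the last step, since the three events partition the probability space:

$$P(F < \tfrac{1}{2}) + P(F = \tfrac{1}{2}) + P(F > \tfrac{1}{2}) = 1 \quad \text{and} \quad P(F < \tfrac{1}{2}) = P(F > \tfrac{1}{2})$$

$$\implies \quad P(F < \tfrac{1}{2}) = \frac{1 - P(F = \tfrac{1}{2})}{2} \le \frac{1}{2}.$$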

If you can rule out P(F = 1/2) > 0 (say by an additional assumption), or the bet were on F ≤ 1/2 instead of F < 1/2, then the probability would just be 1/2.

I recommend funding GWWC and the Centre for Exploratory Altruism Research’s (CEARCH’s) High Impact Philanthropy Fund (HIPF) due to their effects on soil animals, which I think are practically proportional to the increase in agricultural-land-years per $. I estimate HIPF increases agricultural land 9.42 times as cost-effectively as GiveWell’s top charities, which is similar to my estimates for the giving multiplier of GWWC in 2023 and 2024.

A few quick comments:

  1. GWWC donors don't only give to GiveWell (or other life-saving) charities, so you'd want to discount by the proportion going to them.
  2. There might be other charities that GWWC supporters donate to that will increase soil animal populations in expectation, e.g. some diet change and alternative protein work. You'd probably want to check the net effect combining them.

(Edited to elaborate.)

I think bracketing agents could be moved to bracket out and ignore value of information sometimes and more often than EV-maxers, but it's worth breaking things down further to see when. Imagine we're considering an intervention with:

  1. Direct effects on a group of moral patients (or locations of value), and we're clueless about those effects.
  2. Some (expected) value of information for another group of moral patients (possibly the same group, a disjoint group or intersecting the group in 1).

Then:

a. If the group in 2 is disjoint from the group in 1, then we can bracket out those affected in 1 and decide just on the basis of the expected value of information in 2 (and opportunity costs).

b. If the group in 2 is a subset of the group in 1, then the minimum expected value of information needs to be high enough to overcome the potential expected worst case downsides from the direct effects on the group in 1, for the intervention to beat doing nothing. The VOI can get bracketed away and ignored along with the direct effects in 1.

And there are intermediate cases, with probably intermediate recommendations.
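A minimal sketch of cases a and b, representing each group's expected effects as an interval and bracketing out any group whose interval spans zero; all of the intervals are made up:

```python
# Bracketing with value of information (VOI): ignore groups whose expected
# effect could be positive or negative (we're clueless about its sign).
# All intervals are hypothetical.

def bracketed_value(groups):
    """Sum interval bounds over only the groups whose sign is determinate."""
    lo = hi = 0.0
    for a, b in groups.values():
        if a > 0 or b < 0:  # clueful: the sign of the effect is settled
            lo += a
            hi += b
    return lo, hi

# Case a: VOI accrues to a group disjoint from the clueless group.
case_a = {"group 1 (direct, clueless)": (-100.0, 100.0),
          "group 2 (VOI)": (5.0, 10.0)}
print("case a:", bracketed_value(case_a))  # (5.0, 10.0): decide on the VOI

# Case b: VOI accrues to the same group as the clueless direct effects, so
# it gets absorbed into that group's interval and bracketed away with it.
case_b = {"group 1 (direct + VOI)": (-95.0, 110.0)}
print("case b:", bracketed_value(case_b))  # (0.0, 0.0): no reason to act
```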

Without continuity (but perhaps with some weaker assumptions), I think you get a representation theorem giving lexicographically ordered ordinal sequences of real utilities, i.e. a sequence of expected values, which you compare lexicographically. With an infinitary extension of independence or the sure-thing principle, you get lexicographically ordered ordinal sequences of bounded real utilities, ruling out St. Petersburg-like prospects, and so also ruling out risk-neutral expectational utilitarianism.
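For illustration, comparing prospects by such sequences of expected values is just lexicographic tuple comparison (the utilities here are made up):

```python
# Prospects as lexicographically ordered sequences of expected utilities;
# Python compares tuples lexicographically. All numbers are hypothetical.
prospect_A = (1.0, 0.0, 3.0)    # expected utility at successive levels
prospect_B = (1.0, 2.0, -50.0)

print(prospect_A < prospect_B)  # True: tied at level 0, broken at level 1;
                                # later levels never get consulted
```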
