
Michael St Jules 🔸

Grantmaking contractor in animal welfare
12492 karma · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments (2599)


Yes, good point. It's worth checking if the delay could have a significant impact.

It might not be a genuine supply shift if the same business/company would just set up elsewhere. You could model the probability that they wouldn't (so that there would be a genuine supply shift) and then apply elasticities to that.
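A minimal sketch of that calculation, with made-up numbers and one standard linear-approximation formula for how a supply shift passes through to equilibrium quantity (both are illustrative assumptions, not a recommendation):

```python
# Toy adjustment: a closure only shifts supply with probability p_no_relocation;
# otherwise the business just re-establishes elsewhere and nothing changes.
# Linear approximation: a supply reduction of x% changes equilibrium quantity
# by x * e_d / (e_d - e_s) percent, with e_d < 0 (demand) and e_s > 0 (supply).

def expected_quantity_change(supply_shift_pct, p_no_relocation,
                             elasticity_demand, elasticity_supply):
    """Expected % change in equilibrium quantity from a candidate supply shift."""
    pass_through = elasticity_demand / (elasticity_demand - elasticity_supply)
    return p_no_relocation * supply_shift_pct * pass_through

# Example: 5% supply reduction, 40% chance the producer doesn't just relocate,
# demand elasticity -0.5, supply elasticity 1.0.
print(expected_quantity_change(-5.0, 0.4, -0.5, 1.0))  # ~ -0.67%
```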

FWIW, since 2022 (so after SWP and FWI), I count:

  1. Scale Welfare (founded in 2025, but I think the idea had been recommended for a few years?)
  2. A charity working on fish welfare in Europe (founded in 2024), not listed on their website
  3. Another charity working on fish, started because of AIM in 2023, but not an official incubatee and not listed on their website.

One way you could think about the St Petersburg lottery money pump is that the future version of yourself, after evaluating the lottery, just has different preferences or is a different agent. Now, you might say your preferences should be consistent over time and after evaluations, but why? I think the main reason is to avoid picking dominated outcome distributions, but there could be other ways to do that in practice, e.g. pre-commitments, resolute choice, burning bridges, trades, etc. You would want to do the same thing for Parfit's hitchhiker. And you would similarly want to constrain the choices of, or make trades with, other agents with different preferences, if you were handing off the decision-making to them.
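To make the money pump concrete, here's a toy simulation (the fee, payoff convention and number of rounds are all illustrative assumptions):

```python
import random

def st_petersburg():
    """One St Petersburg lottery: payout doubles for each tail before the first head."""
    payout = 1.0
    while random.random() < 0.5:
        payout *= 2.0
    return payout

# The pump: once a lottery resolves, the realized payout is always finite,
# while an unresolved ticket has infinite expected value. So an EV maximizer
# will pay a fee to swap any realized outcome for a fresh ticket, every round.
random.seed(0)
fee, rounds, fees_paid = 1.0, 1000, 0.0
for _ in range(rounds):
    realized = st_petersburg()  # finite, so a fresh ticket still "looks" better
    fees_paid += fee            # pay to trade the outcome away, again and again
print(f"fees paid after {rounds} swaps: {fees_paid}, final payout: {st_petersburg()}")
```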

 

I grant that this is pretty weird. But I think it's weird because of the mathematical property that an infinite function can have, where its average value (or its expected value) can be greater than any possible value it might have. In light of such a situation, it's not particularly surprising that each time you discover the outcome of the situation, you'll be disappointed and want to trade it away. If a view has weird implications because of weird math, that is the fault of the math, not of the view.

I'm not sure I would only blame the math, or that you should really separate the math from the view.

Basically all of the arguments for the finitary independence axiom and finitary sure-thing principle are also arguments for their infinitary versions, and then they imply "bounded" utility functions.[1] You could make exceptions for unbounded prospects and infinities because infinities are weird, but you should also probably accept that you're at least somewhat undermining some of your arguments for fanaticism in the first place, because they won't hold in full generality.
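As a small illustration of what bounding buys you (the exponential utility function and the truncation depths are just illustrative choices):

```python
import math

def ev_and_bounded_eu(max_flips, scale=10.0):
    """Truncated St Petersburg lottery: P(payout = 2^k) = 2^-(k+1), k < max_flips."""
    ev, eu = 0.0, 0.0
    for k in range(max_flips):
        p, payout = 0.5 ** (k + 1), 2.0 ** k
        ev += p * payout
        eu += p * (1.0 - math.exp(-payout / scale))  # bounded utility in [0, 1)
    return ev, eu

# Raw EV grows without bound as the truncation deepens; bounded EU converges,
# so a bounded agent isn't fanatical about ever-longer tails.
for n in (10, 20, 40, 80):
    ev, eu = ev_and_bounded_eu(n)
    print(f"{n:2d} flips: EV = {ev:5.1f}, bounded EU = {eu:.4f}")
```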

Indeed, I would say fanaticism is less instrumentally rational than bounded utility functions, i.e. more prone to making dominated choices. But there can be genuine tradeoffs between instrumental rationality and other desiderata. I don't see why sometimes making dominated choices in theory is worse than sacrificing other desiderata. Either way, you're losing something.

In my case, I'm willing to sacrifice some instrumental rationality to avoid fanaticism, so I'm sympathetic to some difference-making views.

  1. ^

    See Russell, Jeffrey Sanford, and Yoaav Isaacs. "Infinite Prospects." Philosophy and Phenomenological Research, vol. 103, no. 1, Wiley, July 2020, pp. 178–98. https://doi.org/10.1111/phpr.12704, https://philarchive.org/rec/RUSINP-2

    That assumes the independence of irrelevant alternatives, transitivity and completeness, but I'd think you can drop completeness and get a similar result with "multi-utility functions".

I'd follow something like these:

  1. https://rethinkpriorities.org/research-area/invertebrate-sentience-useful-empirical-resource/
  2. https://www.frontiersin.org/journals/veterinary-science/articles/10.3389/fvets.2022.788289/full
  3. check for functions (causal roles) that can reasonably be interpreted as generating appearances of stimuli as good/desirable/worth promoting or bad/undesirable/worth avoiding. These are enough for moral status in my view, but pain and pleasure could be more specific. Or: what does it mean for something to be painful or pleasurable in functionalist terms? Develop that, and check for it in nematodes.

It's unlikely that any of this will be conclusive, but it can inform reasonable ranges of probabilities.

On the question of what they find painful or pleasurable, check what they tend to avoid and approach, respectively, especially through learned behaviour (and especially more general types of learning) or internal simulation of outcomes of actions, rather than in-built reflexive behaviour and very simple forms of learning like habituation.

EDIT: You can also validate with measures of brain activity and nociception. There are probably features common to (apparently) painful experiences in nematodes, and features common to pleasurable ones in nematodes, which could be identified and then checked for across experiences.
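For the learned approach/avoidance tests, a preference score along the lines of the standard C. elegans chemotaxis index could be compared between naive and conditioned animals. A minimal sketch (the counts are hypothetical):

```python
def preference_index(n_at_stimulus, n_at_control, n_total):
    """Chemotaxis-index-style preference score in [-1, 1]."""
    return (n_at_stimulus - n_at_control) / n_total

# A shift after pairing the stimulus with food or harm suggests learned
# valuation rather than a fixed reflex; no shift suggests in-built behaviour.
naive = preference_index(55, 45, 100)        # hypothetical counts
conditioned = preference_index(20, 75, 100)  # after stimulus paired with harm
print(f"naive = {naive:+.2f}, conditioned = {conditioned:+.2f}, "
      f"learned shift = {conditioned - naive:+.2f}")
```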

What about doing Welfare Footprint-like analysis (e.g. here), but including both positive and negative experiences, and investigating what kinds of behavioural tradeoffs they make between different (intensities of) experiences to weigh intensities?
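Something like the following toy scoring, where intensity weights come from observed tradeoff rates and everything else (linearity, the specific numbers) is an illustrative assumption:

```python
# Net welfare as time-weighted intensity, Welfare Footprint-style, but with
# positive states included. If an animal accepts t_endured minutes of an
# aversive stimulus to gain t_gained minutes of a rewarding one, infer
# weight(reward) / |weight(aversive)| ~= t_endured / t_gained (assuming linearity).

def tradeoff_weight(t_endured, t_gained, aversive_weight=-1.0):
    """Reward weight in units of the aversive stimulus."""
    return -aversive_weight * t_endured / t_gained

w_food = tradeoff_weight(t_endured=3.0, t_gained=6.0)  # => 0.5

# Hypothetical daily time budget (hours) under some housing/condition.
episodes = {"pain": (-1.0, 2.0), "food reward": (w_food, 5.0), "neutral": (0.0, 17.0)}
net = sum(w * hours for w, hours in episodes.values())
print(f"net welfare: {net:+.2f} pain-hour equivalents")  # +0.50 here
```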

Ok, that makes sense. I'd guess butterfly effects would be neutral in the median difference. The same could be the case for indirect effects on wild animals and the far future, although I'd say it's highly ambiguous (imprecise probabilities) and something to be clueless about, and not precisely neutral about.

Would you say you care about the overall distribution of differences, too, and not just the median and the EV?

(I'm guessing you mean difference-making risk aversion here, based on your options being implicitly compared to doing nothing.)

When considering the potential of larger indirect effects on wild invertebrates, the far future and other butterfly effects, which interventions do you think look good (better than doing nothing) on difference-making risk aversion (or difference-making ambiguity aversion)?

(I suspect there are none for modest levels of difference-making risk/ambiguity aversion, and we should be thinking about difference-making in different ways.)
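To make this concrete, here's one simple (assumed) way to operationalize difference-making risk aversion: penalize negative differences vs doing nothing more heavily than you credit positive ones, over the whole distribution of differences:

```python
import random

def dm_risk_averse_value(differences, k=2.0):
    """Mean of differences vs doing nothing, with downside differences
    penalized by a factor k (one simple assumed form of risk aversion)."""
    return sum(d if d >= 0 else k * d for d in differences) / len(differences)

random.seed(0)
# A reliable direct benefit plus noisy indirect effects on wild invertebrates
# and the far future (distribution shape and scale are made up).
samples = [1.0 + random.gauss(0.0, 5.0) for _ in range(100_000)]
ev = sum(samples) / len(samples)
print(f"EV of difference:  {ev:+.3f}")                              # positive
print(f"risk-averse value: {dm_risk_averse_value(samples):+.3f}")   # negative
```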

Thanks for writing this!

I'm wondering about your factory farming analysis:

Consider the case of factory farming.[6] Even if aligned AI is committed to or neutral about animal wellbeing, it’s unclear how, or how quickly, it would “solve” factory farming. It’s possible that AI could invent a method for producing cultured meat at a fraction of the cost of conventional meat, which could cause an end to factory farming. Even so, it would likely take years to build out the infrastructure to produce lab-grown meat and make it economically competitive with traditional agriculture.

How many years do you have in mind here? I could imagine this going pretty quickly, and much faster than historically for growing industries, because:

  1. AI lets us skip to much more efficient alt protein production processes, instead of iterative improvement over years of R&D.
  2. AI designs faster and more efficient resource extraction and infrastructure building processes. Or, AI designs alt protein production processes that can make good use of other processes and the market at the time.
  3. Capital investment could be very high because of
    1. AI-related economic growth,
    2. interest from now much wealthier AI investors, including billionaire tech (ex-)CEOs, and/or
    3. proof of efficient alt protein production process designs, rather than investors waiting for more R&D.

The time it takes to build alt protein production plants could be the main bottleneck, and many could be built in parallel, enough to exhaust expected demand after undercutting conventional animal products. Maybe this takes a couple of years after efficient alt protein processes are designed by AI?

Fairly speculative, of course. Seems like high ambiguity here.
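A toy version of the timeline arithmetic I have in mind (every number here is a rough illustrative guess):

```python
# If plants can be built in parallel, calendar time is roughly the design-to-
# first-plant lag plus one build cycle (or a few, if builds come in waves),
# not total capacity divided by a sequential build rate.

target_capacity_mt = 350.0   # rough global meat output to displace, Mt/year
plant_capacity_mt = 0.1      # output per plant, Mt/year (guess)
build_years = 2.0            # construction time per plant (guess)

plants_needed = target_capacity_mt / plant_capacity_mt
print(f"plants needed: {plants_needed:,.0f}")
for waves in (1, 2, 5):      # how many staggered waves of parallel builds
    print(f"{waves} wave(s): ~{waves * build_years:.0f} years of construction")
```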

 

More pessimistically, we probably won’t end factory farming through technology alone. People have been hesitant to switch to meat substitutes and lab-grown meat. Multinational corporations have significant financial interests in factory farming, and they will also use AI to promote their position. Cultural, political, and economic changes will be necessary.

I agree with this. However, I wonder how far off sufficient economic changes would be. People could become wealthy enough to pay for (or subsidize others for) high welfare animal products, and this could eliminate the rest of factory farming. Transitioning housing types could take some time, but with enough money thrown at it, it could be very quick. Again, fairly speculative.

RE #4, if technological change is happening that quickly, it seems implausible that McDonald's will survive. They didn't have anything comparable to McDonald's 1000 years ago. They couldn't have even imagined McDonald's. I predict that a decade after TAI, if we're still alive, then whatever stuff we have will look nothing like McDonald's, in the same way that McDonald's looks nothing like the stuff people had in medieval times.

If we're still alive, most of the same people will still be alive, and their tastes, habits and values will only have changed so much. Think of conservatives, people against alt proteins, and others who grew up with McDonald's. 1000 years is enough time for dramatic shifts in culture and values, but 10 years doesn't seem to be. I suspect shifts in culture and values are primarily driven by newer generations just growing up to have different values and older generations with older values dying, not by people changing their minds.

And radical life extension might make some values and practices persist far longer than they would have otherwise, although I'm not sure how much people who'd still want to eat conventional meat would opt for radical life extension.
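A toy cohort-replacement model makes the 10-years-vs-1000-years asymmetry vivid (the lifespan, uniform age structure and conversion rate are all simplifying assumptions):

```python
# Values shift mostly via cohort replacement: people born after a change grow
# up with the "new" values, while existing people rarely convert.

def share_new_values(years, lifespan=80.0, conversion_per_year=0.002):
    replaced = min(years / lifespan, 1.0)                    # post-change cohorts
    converted = 1.0 - (1.0 - conversion_per_year) ** years   # rare mind-changing
    return replaced + (1.0 - replaced) * converted

for years in (10, 50, 100, 1000):
    print(f"after {years:4d} years: {share_new_values(years):.0%} hold new values")
# Radical life extension amounts to raising `lifespan`, which slows this down.
```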
