MichaelStJules

Independent researcher
12,196 karma · Joined · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments (2560)

Topic contributions (12)

For example, a preference utilitarian can think there is an objective moral fact that it is bad, all else equal, to do something to someone that they disprefer, even though the “badness” comes from their dispreference. It seems like that would be subjective only if the claim were that it was bad according to some observer.

I'm personally not sympathetic to such a claim. What makes it objective? Rather, to me, it's just bad to the person who disprefers it (and possibly to other individuals). They are observers: they observe their own mental states and things in the world, and they have attitudes about them.

The view I describe in this piece could be made objective in the way you describe, though.

On the welfare effects of slower growing breeds, I think suffering is reduced overall despite the increase in life expectancies, based on Welfare Footprint Institute's analysis:

  • Adoption of the Better Chicken Commitment, with use of a slower-growing breed reaching a slaughter weight of approximately 2.5 Kg at 56 days (ADG=45-46 g/day) is expected to prevent “at least” 33 [13 to 53] hours of Disabling pain, 79 [-99 to 260] hours of Hurtful and 25 [5 to 45] seconds of Excruciating pain for every bird affected by this intervention (only hours awake are considered). These figures correspond to a reduction of approximately 66%, 24% and 78%, respectively, in the time experienced in Disabling, Hurtful and Excruciating pain relative to a conventional scenario due to lameness, cardiopulmonary disorders, behavioral deprivation and thermal stress.

    (...)

  • In general, the slower the growth rate, the shorter the cumulative time in pain experience over a lifetime, despite differences in lifespan. Should breeds with growth rates slower than those assumed in the reformed scenario be used, the time in pain averted with the reform would be longer, despite a longer lifespan. By the same logic, slower-growing breeds growing faster (e.g. 50g/day) should endure a longer time in pain, despite their shorter lifespan. In all cases, reforms promoting a transition to slower-growing breeds should expect a reduction of the cumulative time in pain (net positive change) for all breeds considered under the BCC scheme: the slower the growth rate, the higher the expected welfare impact.
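As a rough consistency check on the central estimates quoted above: if X hours averted correspond to an R% reduction, the implied conventional-scenario baseline is X / R. This is my own back-of-the-envelope calculation, not part of Welfare Footprint Institute's analysis, and it uses only the central (not interval) figures.

```python
# Implied conventional-scenario baselines from the quoted WFI central
# estimates: (time averted, fractional reduction) per affected bird.
figures = {
    "Disabling pain (hours)":      (33.0, 0.66),
    "Hurtful pain (hours)":        (79.0, 0.24),
    "Excruciating pain (seconds)": (25.0, 0.78),
}

for label, (averted, reduction) in figures.items():
    baseline = averted / reduction      # conventional-scenario total
    remaining = baseline - averted      # still endured under the BCC
    print(f"{label}: baseline ~{baseline:.0f}, remaining ~{remaining:.0f}")
```

So the quoted figures imply roughly 50 baseline hours of Disabling pain (17 remaining), about 330 hours of Hurtful pain (about 250 remaining), and about 32 seconds of Excruciating pain (about 7 remaining) per bird.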

Also, for those interested in animal welfare specifically: https://www.animaladvocacycareers.org/

Seems fine to direct people to 80,000 Hours for AI/x-risk, Animal Advocacy Careers for animal welfare and Probably Good more generally.

Couldn't a country just opt out unilaterally, and then others follow suit? And should we trust their assessment of s-risks even if proceeding by global consensus?

You could defend the idea that extinction risk reduction is net negative or highly ambiguous in value, even just within EA and adjacent communities. Convincing people to not work on things that are net negative by your lights seems not to break good heuristics or norms.

When I edited this comment, it removed my vote percentage from it.

(Edited to elaborate and for clarity.)

Thomas (2019) calls these sorts of person-affecting views "wide". I think "narrow" person-affecting views can be more liberal (due to incommensurability) about what kinds of beings are brought about.

And narrow asymmetric person-affecting views, as in Thomas (2019) and Pummer (2024), can still tell you to prevent "bad" lives or bads in lives, but, contrary to antinatalist views, "good" lives and goods in lives can still offset the bad. Pummer (2024) solves a special case of the Nonidentity problem this way, by looking at goods and bads in lives.

But these asymmetric views may be less liberal than strict/symmetric narrow person-affecting views, because they could be inclined to prevent the sorts of lives of which many are bad in favour of better average lives. Or more liberal, depending on how you think of liberalism. If someone would have a horrible life to which they would object, it seems illiberal to force them to have it.

I think these papers have made some pretty important progress in further developing person-affecting views.[1]

  1. ^

    I think they need to be better adapted to choices between more than 2 options, in order to avoid the Repugnant Conclusion and replacement (St. Jules, 2024). I've been working on this and have a tentative solution, but I'm struggling to find anyone interested in reading my draft.

For a given individual, can they have a higher probability of averting extinction (i.e. of making the difference) or of making the difference to a long-term trajectory change? If you discount small enough probabilities of making a difference, or are otherwise difference-making risk averse (as an individual), would one come out ahead as a result?

Some thoughts: extinction is a binary event. But there's a continuum of possible values that future agents could have, including under value lock-in. A small tweak in locked-in values seems more achievable counterfactually than being the difference for whether we go extinct. And a small tweak in locked-in values would still have astronomical impact if they persist into the far future. It seems like value change might depend less on very small probabilities of making a difference.

Since others have discussed the implications, I want to push a bit on the assumptions.

I worry that non-linear axiologies[1] end up endorsing egoism, helping only those whose moral patienthood you are most confident in or otherwise prioritizing them far too much over those of less certain moral patienthood. See Oesterheld, 2017 and Tarsney, 2023.

(I also think average utilitarianism in particular is pretty bad, because it would imply that if the average welfare is negative (even torturous), adding bad lives can be good, as long as they're even slightly less bad than average.)
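The average-utilitarian implication above is simple arithmetic, and a minimal sketch (illustrative welfare numbers of my own, not from any source) makes it concrete:

```python
def average_welfare(population):
    """Average utilitarianism ranks outcomes by mean welfare."""
    return sum(population) / len(population)

base = [-10.0, -10.0, -10.0]   # three torturous lives; average = -10
added = base + [-9.0]          # add a bad life, slightly less bad than average

# The mean rises from -10 to -9.75, so average utilitarianism counts
# adding this bad life as an improvement.
assert average_welfare(added) > average_welfare(base)
```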

Maybe you can get around this with non-aggregative or partially aggregative views. EDIT: Or, if you're worried about fanaticism, difference-making views.

  1. ^

    Assuming completeness, transitivity and the independence of irrelevant alternatives, and that each marginal moral patient matters less.
