
Michael St Jules 🔸

Animal welfare grantmaking and advising
12710 karma · Joined · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments: 2625

Topic contributions: 15

From 2000 to 2023, the number of species comprising 85% of aquaculture production grew from 14 to 22.

Does this account for the >1 trillion fish fry artificially propagated in China per year, a large share of which are probably fed live to mandarin fish? See my post here, and some (higher) estimates here. My sense is that these fish aren't counted in the FAO stats, because they're not slaughtered for food, and fish fed to mandarin fish are from a smaller number of species. From my post:

Li and Xia (2018) wrote “Almost all prey for mandarin fish is provided through artificial propagation”, and single out mud carp as the favourite feed fish, although others are reported elsewhere, e.g. FAO:

Common live foods for mandarin fish include mud carp (Cirrhinus molitorella), Wuchang fish (also called Chinese bream, Megalobrama amblycephala), silver carp (Hypophthalmichthys molitrix), bighead carp (H. nobilis), grass carp (Ctenopharyngodon idellus), crucian carp (Carassius carassius), common carp (Cyprinus carpio), stone moroko (Pseudorasbora parva) and other wild and trash fish. Wuchang fish fry is preferred at the start of food intake, then feeding bighead and silver carp follows. When body length reaches 25 cm, common and crucian carps are fed.

And “silver carp, bighead, grass carp, Wuchang fish or tilapia fry” (Kuanhong/FAO, 2009).

I agree with those benefits, but there's no mention here of potential costs? Maybe you don't think those are significant?

If we're assuming the post would be good quality, then I don't expect the costs (to me) to be significant, but I'm open to reasons otherwise. If the posts are sometimes low quality or repetitive, then AI could enable more of them, and that would be bad. I'd lean towards allowing 100% AI written posts and seeing what happens to the EA Forum, i.e. tracking the results and reassessing. 

Maybe the voting system, minimum karma to post, and throttling based on recent net negative karma posts/comments are enough to handle this without negatively affecting engagement. Banning 100% AI-written posts is a blunt tool, and it seems worth trying other things.

It's a completely different question, but are you happy to receive 100% AI-written grant applications as well?

I'd prefer human-written applications, because it can be hard to distinguish an application that's ~100% AI-written but primarily based on the applicant's own ideas and reasoning from one that's ~100% AI-generated, including the writing, ideas and reasoning.[1] Grants are bets on the grantees' abilities, not just the project idea. However, I tend to also talk to applicants over calls or in person, and see their work in other ways.

For a project where communication by the applicant is an important part of its path to impact, I can imagine asking them to resubmit, or rejecting them, if the application looks AI-written, if and because the people the applicant would be communicating with dislike AI writing. This hasn't come up yet, though.

 

And would you be happy, on your grantmaker end, to allow your own AI to review that application, or would you insist on reading it yourself?

At this point, I'd insist on at least personally reading parts that are enough to be decisive one way or the other.

  1. ^

    Of course, this leaves another possibility (and others in between the different possibilities outlined so far, including no AI use): 100% of the ideas and reasoning come from AI, but the application is 100% written by the applicant. Hopefully by writing it themself, they've taken the time to understand what they're submitting, but it would still be better if the ideas came from the applicant.

How much of a post are you comfortable letting AI write?

The main benefits I have in mind from allowing an AI to write ~the whole post (over just helping in other ways):

  1. Helping someone who isn't confident in their (English) writing ability.
  2. Saving time or reducing other opportunity costs for the author.
  3. Getting the post published at all, or without substantial delay, when for whatever reason it otherwise wouldn't be. This is usually because of 1 or 2, but there could be other reasons.

These are all context- and author-specific considerations. I can imagine preferring that a post be AI-written over some possible counterfactuals: not posted at all, posted at much larger opportunity cost to the author, or posted but worse to read than what an AI could have done. These point me towards permissibility and letting each author decide.

I think the author or another human should generally look over the post before the author posts it. I don't think it's necessary for them to insert their own voice.

Here are a few things you might need to address to convince a skeptic:

  1. Humans currently have access to, maintain and can shut down or destroy the hardware and infrastructure AI depends on. This is an important advantage.
  2. Ending us all can be risky from an AI's perspective, because of the risk of shutdown (or of losing the humans needed to maintain, extract resources for, and build the infrastructure AI depends on, without an adequate replacement).
  3. I'd guess we can make AIs risk-averse (or difference-making risk averse) for whatever goals they do end up with, even if we can't align them.
  4. Ending us all sounds hard and unlikely. There are many ways we are resilient and ways governments and militaries could respond to a threat of this level.

Thanks for sharing!

We could identify the most severe diseases through research and target them with vaccination campaigns. Yes, eliminating a disease would lead to population growth until another factor limits the population - but since it was such an unusually severe limiting factor, the net effect is likely to be positive.

This is interesting and seems possible to me, but I'd probably want to look more into any particular case and see population modelling to verify the logic more generally.
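
To make that concrete, here's a minimal toy sketch, in Python, of the kind of check I have in mind. Everything in it is my own assumption for illustration (the logistic dynamics, the per-cause mortality rates and suffering-per-death numbers, and helper names like equilibrium_population and total_welfare); none of it is from the paper. The idea is just to compare total welfare at equilibrium with and without the severe disease, letting the population grow until another factor limits it.

```python
# Toy sketch only: all numbers and functional forms are illustrative assumptions,
# not from the paper. Density dependence acts through the birth rate (logistic),
# so the population grows to a new equilibrium after a cause of death is removed,
# and we compare total welfare before and after.

def equilibrium_population(carrying_capacity, mortality_rate, birth_rate=1.0):
    """Equilibrium of dN/dt = b*N*(1 - N/K) - m*N, i.e. N* = K*(1 - m/b)."""
    return max(0.0, carrying_capacity * (1.0 - mortality_rate / birth_rate))

def total_welfare(population, causes, baseline=1.0):
    """Welfare per year: baseline welfare per individual-year while alive,
    minus the suffering of each death, weighted by how many each cause kills."""
    welfare = population * baseline
    for mortality_rate, suffering_per_death in causes.values():
        welfare -= population * mortality_rate * suffering_per_death
    return welfare

# Cause of death -> (annual per-capita mortality rate, suffering per death).
# The disease is assumed to be the most severe cause conditional on its occurrence.
causes = {
    "severe_disease": (0.30, 50.0),
    "predation": (0.20, 10.0),
    "starvation": (0.10, 20.0),
}
K = 10_000  # carrying capacity (arbitrary)

def scenario(causes):
    m = sum(rate for rate, _ in causes.values())
    n = equilibrium_population(K, m)
    return n, total_welfare(n, causes)

n_before, w_before = scenario(causes)
n_after, w_after = scenario({k: v for k, v in causes.items() if k != "severe_disease"})
print(f"Before: N = {n_before:.0f}, total welfare = {w_before:.0f}")
print(f"After eliminating the disease: N = {n_after:.0f}, total welfare = {w_after:.0f}")
```

Under these made-up numbers, the population grows from 4,000 to 7,000 after the disease is removed, but total welfare still improves, because the remaining limiting factors are much milder conditional on their occurrence. A more serious model would also let the remaining causes (e.g. starvation) intensify with density rather than stay fixed, which is exactly the part I'd want checked case by case.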

If this does work, I wonder if we'd have a general and reliable path forward for reducing wild animal suffering (whether disease or another cause, it sounds like you're more agnostic about it being diseases in the paper): just iteratively and incrementally reduce the causes of suffering in a population, roughly in order from the most severe (worst conditional on their occurrence[1]) to the mildest.

  1. ^

    However, the worst conditional on their occurrence could be rare enough that this wouldn't look cost-effective. In that case, we might look for another animal population where it does look cost-effective to reduce the most severe cause of suffering.

Of interest, see the comments on Thornley's EA Forum post. I and others have left multiple responses to his arguments.

Here's how I'd think about "4 My argument", in actualist preference-affecting terms[1]:

My preferences will differ between pressing button 1 and not pressing button 1, because my preferences track the world's[2] preferences[3], and the world's preferences will differ by Frank's. Then:

  1. If I know ahead of time that if I press button 1, I will definitely press button 2 (because I will take Frank's interests into account), then, by backward induction, my options at the start are actually just pressing both and pressing neither. If I press neither button, it's not worse to Bob, and Frank won't exist, so it won't actually be worse to him either. So pressing button 1 is not better or obligatory, because if I don't press it, it's not worse to anyone. This matches the permissibility of not pressing a button that has the net effects of 1 and 2 together.
  2. If I know ahead of time there's some chance I won't press button 2 even if I press button 1, then pressing button 1 is better, because it would be worse in expectation to Bob if I didn't.

 

  1. ^

    See this piece for more on actualist preference-affecting views.

  2. ^

    Past, current and future, or just current and future.

  3. ^

    Or also desires, likes, dislikes, approval, disapproval, pleasures in things, displeasures in things, evaluative attitudes, etc., as in Really radical empathy.

I'd guess this is pretty illustrative of differences in how we think about person-affecting views, and why I think violations of "the independence of irrelevant alternatives" and "Losers Can Dislodge Winners" are not a big deal:

Narrow views imply that in the choice between 1 and 2, you can choose either. But why in the world would adding 3, which is itself impermissible to take, affect that?

Run through the reasoning on the narrow view with and without 3 available and compare them. The differences in reasoning, ultimately following from narrow person-affecting intuitions, are why. So, the narrow person-affecting intuitions explain why this happens. You're asking as if there's no reason, or no good reason. But if you were sufficiently sympathetic to narrow person-affecting intuitions, then you'd have good reasons: those narrow person-affecting intuitions and how you reason with them.

(Not that you referred directly to "the independence of irrelevant alternatives" here, but violation of it is a common complaint against person-affecting views, so I want to respond to that directly here.) 3 is not an "irrelevant" alternative, because when it's available, we see exactly how it's relevant when it shows up in the reasoning that leads us to 2. I think "the independence of irrelevant alternatives" has a misleading name.

Adding some other choice you’re not allowed to take to an option set shouldn’t make you no longer allowed to choose a previously permissible option. This would be like if you had two permissible options: saving a child at personal risk, or doing nothing, and then after being offered an extra impermissible option (shooting a different child), it was no longer permissible to do nothing. WTF?

And this to me seems disanalogous, because you don't provide any reason at all for why the third option would change the logic. We have reasons in the earlier hypothetical.

I think we have very different intuitions.

I don't think giving up axiology is much or any bullet to bite, and I find the frameworks I linked

  1. to be better motivated than axiology, in particular by empathy, and to better respect what individuals (would) actually care about,[1] which I take to be pretty fundamental and pretty much the point of "ethics", and
  2. to fit better with subjectivism/moral antirealism.[2]

The problems with axiology also seem worse to me, often as a consequence of failing to respect what individuals (would) actually care about and so failing at empathy, one way or another, as I illustrate in my sequence.

Giving up axiology to hold on to a not even very widely shared intuition?

What do you mean to imply here? Why would I force myself to accept axiology, which I don't find compelling, at the cost of giving up my own stronger intuitions?

And is axiology (or the disjunction of conjunctions of intuitions from which it would follow) much more popular than person-affecting intuitions like the Procreation Asymmetry?

 

Giving up the idea that the world would be better if it had lots of extra happy people and every existing person was a million times better off?

I think whether or not a given person-affecting view has to give that up can depend on the view and/or the details of the hypothetical.

  1. ^

    What they care about at a basic level, not necessarily the things they care about by derivation from other things they care about, because they can be mistaken in their derivations.

  2. ^

    Moral realism, i.e. that there's good or bad independently of individuals' stances (or evaluative attitudes, as in my first post), seems to me to be a non-starter. I've never seen anything close to a good argument for moral realism, maybe other than epistemic humility and wagers.
