
MichaelStJules

Independent researcher
11497 karma · Joined · Working (6-15 years) · Vancouver, BC, Canada

Bio

I mostly do philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty and cluelessness, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments (2476)

Topic contributions (12)

Why would consciousness (or moral patienthood) require having a self-model?

From my comment above:

More on this kind of view here and here.

But to elaborate, the answer is illusionism about phenomenal consciousness, the only (physicalist) account of consciousness that seems to me to be on track to address the hard problem (by dissolving it and saying there are no phenomenal properties) and the meta-problem of consciousness. EDIT: To have an illusion of phenomenal properties, you have to model those phenomenal properties. The illusion is just the model, aspects of it, or certain things that depend on it. That model is (probably) some kind of model of yourself, or aspects of your own internal processing, e.g. an attention schema.

To prevent any misunderstanding, illusionism doesn't deny that consciousness exists in some form; it just denies that consciousness is phenomenal, or that there are phenomenal properties. It also denies the classical account of qualia, i.e. qualia as ineffable and so on.

Rather than being convinced that cage-free is worse, I'm just not convinced it's better, so why support it?

I'm not convinced nest deprivation reaches the disabling intensity. It's definitely possible, and I'd say not very unlikely, but it's hard to say either way based on the current evidence. And whether or not it does, keel bone fracture inflammation pain could still be at least a few times more intense anyway.

Maybe it'll help for me to rephrase: if a being has more things it can attend to (be aware of, have in its attention) simultaneously, then it has more attention to pull. It can attend to more, all else equal, if, for example, it has a richer/more detailed visual field, similar to a screen with more pixels.

We could imagine a very simple creature experiencing very little pain but being totally focused on it.

If it's very simple, it would probably have very little attention to pull (relatively), so the pain would not be intense under the hypothesis I'm putting forward.

But if I did, I think this would make me think animal consciousness is even more serious.  For simple creatures, pain takes up their whole world. 

I also give some weight to this possibility, i.e. that we should measure attention in individual-relative terms, and it's something more like the proportion of attention pulled that matters.

RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing. 

I might have written some of them! I still have some sympathy for the hypothesis, and I think it can matter when you reason using expected values, even after taking the arguments into account, if you assign the hypothesis something like 1% probability. The probabilities can matter here.
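
To make the expected value point concrete with made-up numbers (purely illustrative, not anyone's estimates): if conscious subsystems, were the hypothesis true, would multiply an animal's moral weight by 1,000, then even a 1% credence in it contributes 0.01 × 1,000 = 10 to the expected moral weight, compared to roughly 0.99 × 1 ≈ 1 from the rest of your credence. The small probability can end up dominating.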

 

In regards to your first point, I don't see either why we'd think that degree of attention correlates with neuron counts or determines the intensity of consciousness

I believe the intensity of suffering consists largely (maybe not exclusively) in how much it pulls your attention, specifically its motivational salience. Intense suffering that's easy to ignore seems like an oxymoron. I discuss this a bit more here.

Welfare Footprint Project's pain definitions also refer to attention as one of the criteria (along with other behaviours):

Annoying pain:

(...) Sufferers can ignore this sensation most of the time. Performance of cognitive tasks demanding attention is either not affected or only mildly affected. (...)

Hurtful pain:

(...) Different from Annoying pain, the ability to draw attention away from the sensation of pain is reduced: awareness of pain is likely to be present most of the time, interspersed by brief periods during which pain can be ignored depending on the level of distraction provided by other activities. (...)

Disabling pain:

(...) Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. (...)

Excruciating pain seems entirely behaviourally defined, but I would assume effects on attention like those of Disabling pain, or (much) stronger.

 

Then, we can ask "how much attention can be pulled?" And we might think:

  1. having more things you're aware of simultaneously (e.g. more details in your visual field) means you have more attention to pull, and
  2. more neurons allows you to be aware of more things simultaneously,

so brains with more neurons can have more attention to pull.
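
To illustrate the contrast with made-up numbers (purely hypothetical, not estimates): suppose a human can attend to something like 10,000 "details" at once and a bee 100. A pain that pulls half of each one's attention would then measure 5,000 vs. 50 on the absolute view in points 1 and 2, but would be equally intense on the individual-relative (proportional) view above.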

I also find RP's arguments against neuron counts completely devastating.

I worked on some of them with RP myself here.

FWIW, I found Adam's arguments convincing against the kinds of views he argued against, but I don't think they covered the cases in point 2 here.

though unlike Eliezer, I don’t come to my conclusions about animal consciousness from the armchair without reviewing any evidence

A bit of a nitpick, but I think Eliezer has a very high bar for attributing consciousness and is aware of relevant evidence for that bar, e.g. evidence for theory of mind or a robust self-model.

And this gets into the kind of views to which I'm sympathetic.

 

I am quite sympathetic to the kind of view Eliezer seems to endorse and to the importance of something like a self-model, but my bar for self-models is probably much lower: I think many animals have at least modest self-models, probably including all mammals and birds, though I'm not confident about others. More on this kind of view here and here.

 

On the other hand, I am also sympathetic to counting anything that looks like pleasure, unpleasantness, desire as motivational salience (an attentional mechanism), or beliefs about betterness/worseness/good/bad: basically, any kind of evaluative attitude about anything, or any way of caring about anything. If some system cares about something, I want to empathize and try to care about that in the same way on their behalf.[1] I discuss such attitudes more here and this view more in a draft (which I'm happy to share).

And I'm inclined to count these attitudes whether they're "conscious" or not, however we characterize consciousness. Or, these processes may just ground something worth recognizing as conscious anyway.

Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states. 

However, there could still be important differences in degree if and because they meet different bars, and I have some sympathy for some neuron count-related arguments that favour brains with more neurons (point 2 here). I also give substantial weight to the possibilities that:

  1. maximum intensities for desires as motivational salience (and maybe hedonic states like pleasure and unpleasantness) are similar,
  2. there's (often) no fact of the matter about how to compare them.

 

  1. ^

    Focusing on what they care about intrinsically or terminally, not on instrumental or derived concerns. And, of course, I have to deal with intrapersonal and interpersonal trade-offs.

Note, from this post:

[The Navigation Fund] has made a non-binding commitment to sustain funding through 2026 for all of the recurring grantees that the OP Farm Animal Welfare Program will no longer be allowed to fund

I suppose this wouldn't include new orgs in wild animal welfare and invertebrate welfare, though.

THL UK is focusing on meat/broiler chicken welfare, while I'd guess THL is doing a lot of cage-free egg work, which I want to avoid.

Hey @Toby Tremlett🔹, when people leave their rationales with their votes and they end up as comments here, they often don't say what they voted for, and it doesn't show in this thread. So, I don't know which orgs they're talking about. Is that intended?

I voted for The Humane League UK (meat/broiler chicken welfare), Fish Welfare Initiative, Shrimp Welfare Project and Arthropoda Foundation for cost-effective programs for animal welfare with low risk of backfire. I'm specifically concerned with backfire through effects on wild animals (also here), or through increasing keel bone fractures for cage-free hens, so I avoid work on reducing animal product consumption/production and cage-free work.
