
Michael Tye is a prominent philosopher focusing on philosophy of mind. His interests include the philosophy of animal minds and consciousness, and in 2016 he published the book *Tense Bees and Shell-Shocked Crabs: Are Animals Conscious?*, which features one of the most in-depth discussions of invertebrate consciousness available.

This interview is about potential phenomenal consciousness (especially conscious pain) in invertebrates (especially insects). It is part of an interview series in which I interview leading experts on invertebrate consciousness to try to make progress on the question. You can find my previous interview with Jon Mallatt here, my previous interview with Shelley Adamo here, and a post where I justify my engagement with this question here.

1. You write that we are entitled to prefer the proposition that many animals including bees and crabs are phenomenally conscious because they appear to be conscious, and because supposing that they go through what looks like pain, but isn’t actually pain, is ad hoc. What about the counterargument that honeybees and crabs just have many fewer neurons than we do and must economize on space, and so it seems reasonable to imagine that they do not have some necessary component of what makes these experiences conscious in us?

Humans and mammals that are in pain behave in characteristic ways. This behavior is complex, involving much, much more than simply withdrawing the body from the damaging or noxious stimulus, and it is caused by the feeling of pain. (In my 2016, I list various components of this behavior.) If we find a very similar pattern of behavior in other animals, we are entitled to infer that the same cause is operative unless we have good reason to think that their case is different. This was a point that Sir Isaac Newton made long ago with respect to effects in nature generally and their causes. In the case of hermit crabs, we find the relevant behavioral pattern. So, we may infer that, like us, they feel pain. To be sure, they have many fewer neurons. But why should we think that makes a difference to the presence of pain? It didn’t make any difference with respect to the complex pattern of behavior the crabs display in response to noxious stimuli. Why should it make any difference with respect to the cause of that behavior? It might, of course. There is no question of proof here. But that isn’t enough to overturn the inference. One other minor point: pain is a feeling. As such it is inherently a conscious state. Necessarily, there is something it is like to undergo pain. So, the question: “What makes the experience of pain conscious?” is really not coherent. If the experience of pain is present, it is automatically conscious.

2. Do you expect that there will be clean determinate answers about exactly which entities are conscious now or at some point in the future? Or do you think there will always be a grey area with a judgement call involved?

Consciousness itself is not a grey matter. There may be greyness with respect to the content of consciousness (for example, am I feeling pain or pressure, as my tooth is being filled?) but not with respect to consciousness itself. Consciousness does not have borderline cases in the way that life does. Still, confronted with a much simpler organism, we may not be able to tell from its behavior whether it has a faint glimmer of consciousness. I see no reason to suppose that in each and every case, we will be able to know with any strong degree of certainty whether consciousness is present.

3. Do you think it is likely that *C. elegans* (the nematode worm with around 300 neurons) is conscious?

Unlikely. There is nothing in the behavior of the nematode worm that indicates the presence of consciousness. It is a simple stimulus-response system without any flexibility in its behavior. The same is true of the leech.

4. Have you updated your position at all since your 2016 book?

I have finished the draft of a new book on vagueness and the evolution of consciousness. In it, I say something about whether there is a trigger for consciousness in mammal brains and I also say some additional things about the nature of consciousness from my perspective. This develops further claims I have made in the past and connects them to global workspace theory. The book will likely be published in 2020 or early 2021.

5. Do you have any ideas about what the next best steps could be to get to a more certain conclusion about invertebrate consciousness?

We need to look more closely at invertebrate behavior and see whether and how much it matches ours with respect to a range of experiences—bodily, perceptual and emotional.

6. Concerning digital minds, do you think that any mind that satisfied some general high-level conditions, such as behaving similarly to an animal we believe to be conscious, would also be conscious? Or do you think it would require a quite similar process or architecture to what we find in human brains?

Behavior is obviously not the same as mental states. But behavior is evidence for mental states, whether experiential states or not. If we manage to build highly complex systems whose behavior mirrors ours or at least is close to it for a range of mental states, we are entitled to infer that they are subject to mental states too, unless, as noted above, we have good reason to think that their case is different. Merely noting that they are made of silicon is not enough. After all, what reason is there to suppose that crucially makes a difference? Of course, if one endorsed a type identity theory for conscious mental states, according to which experiences are one and the same as specific physico-chemical brain states, that would give one a reason to deny that digital beings lacked consciousness. But why accept the type identity theory? Given the diversity of sentient organisms in nature, it is extremely implausible to hold that for each type of experience, there is a single type of brain state with which it is identical. The most plausible view is that experiences are multiply physically realized.

7. What do you think of the evidence that complex cognition can sometimes happen unconsciously in humans? This should arguably make us conclude that consciousness is at least not strictly required to produce many of the sorts of indicators of consciousness that we see in honeybees and crabs. Do you think this presents a challenge to claims that invertebrates might be conscious?

It is certainly true that cognition can occur without consciousness. Consider, for example, stimuli that are briefly presented to subjects, and that are then backwardly masked so as to make them unconscious. They may still be processed deeply, with the result that they have high-level content that can prime subsequent behavior. In some of these cases, with slightly different timing and intensity settings, the backwardly masked stimuli may still be visible. Where this happens, the immediate behavior of subjects is very different. Why? The obvious answer is that it is the fact that the subjects are conscious of the stimuli in these cases that makes their immediate behavior different. So, the issue again then is whether the behavior we see in honeybees and crabs is of the former sort or whether it is more like the behavior mammals undergo in response to their conscious states. The answer, I think, is that it is more like the latter. It is also worth pointing out that complex unconscious cognition in humans goes along with conscious activity too. Why think that if there is complex unconscious cognition in some cases in the invertebrate realm, it occurs there without consciousness being present in other cases?

8. We can distinguish between two aspects of pain, and they can occur independently: the sensory aspect (including qualities such as burning, stabbing, and aching) and the affective aspect (the intensity or unpleasantness). If insects can feel conscious pain, do you think it is likely that they would feel a lower degree of affective pain than humans? In other words, would it make sense to say that they feel only a fraction of the affective pain that a human would feel in similar circumstances?

We know that patients who suffer intractable pain and who have undergone prefrontal leukotomies to reduce their pain level report that they still feel pain but they no longer mind it. For these patients, the affective component of pain has been removed. This is indicated in their behavior. The question for other organisms is again how much their ‘pain’ behavior is like ours. To the extent that they respond as we do, that is evidence that they feel what we do. If their behavior is more muted in various ways, that would be evidence that their pains are not as intense or unpleasant. In this regard, I might note that it makes sense to suppose that their pains are actually more intense than ours! After all, they are much less intelligent than we are, so it would not be unreasonable to suppose that Mother Nature would give them a bigger jolt of pain than we receive in response to noxious stimuli in order to get them to behave in ways most conducive to their survival.





Many thanks to an anonymous donor and the EA hotel for funding me to conduct this interview. Thanks also to Rhys Southan for providing suggestions and feedback.

Comments



Congrats on all these great interviews!

There is nothing in the behavior of the nematode worm that indicates the presence of consciousness. It is a simple stimulus-response system without any flexibility in its behavior.

There are numerous papers on learning in C. elegans. Rankin (2004):

Until 1990, no one investigated the possibility that C. elegans might show behavioral plasticity and be able to learn from experience. This has changed dramatically over the last 14 years! Now, instead of asking “what can a worm learn?” it might be better to ask “what cannot a worm learn?” [...]

C. elegans has a remarkable ability to learn about its environment and to alter its behavior as a result of its experience. In every area where people have looked for plasticity they have found it.

Thank you! :)

Thanks for mentioning C. elegans behavioural flexibility. I had meant to comment about that, but forgot to. That's a great paper on the subject.

I think people sometimes unfairly minimize the cognitive abilities of some invertebrates because it gives them cleaner and more straightforward answers about which organisms are conscious, according to their preferred theory.

However, there do appear to be very clear behavioral capabilities differences between C. elegans and other invertebrates (e.g., honeybees) as can be seen in our invertebrate sentience table.

Thank you for doing this, Max (and the supporters). These are good questions that warrant their own book =)

This passage makes a particularly good point, so I quote it below for those who skipped that part:

In the case of hermit crabs, we find the relevant behavioral pattern. So, we may infer that, like us, they feel pain. To be sure, they have many fewer neurons. But why should we think that makes a difference to the presence of pain? It didn’t make any difference with respect to the complex pattern of behavior the crabs display in response to noxious stimuli. Why should it make any difference with respect to the cause of that behavior? It might, of course. There is no question of proof here. But that isn’t enough to overturn the inference.


We need to look more closely at invertebrate behavior and see whether and how much it matches ours with respect to a range of experiences—bodily, perceptual and emotional.

Comparisons with humans should, I suppose, come with many caveats. Still, for ancient(?) feelings like fear and pain, the approach seems valid from my layman's perspective.

Of course, if one endorsed a type identity theory for conscious mental states, according to which experiences are one and the same as specific physico-chemical brain states, that would give one a reason to deny that digital beings lacked consciousness. But why accept the type identity theory? Given the diversity of sentient organisms in nature, it is extremely implausible to hold that for each type of experience, there is a single type of brain state with which it is identical.

If (globally bound) consciousness is "implemented" on a lower level, then it still may be possible for different physico-chemical brain states for the same qualia to be relevantly identical on that lower level. I mention this because IMO there are good reasons to be sceptical about digital consciousness.

[...] it is is extremely implausible to hold that [...]

A typo

You are very welcome! :)

That passage is also one of my favourite parts of his answers, thanks for highlighting it.

I'll take a look at that David Pearce post, thanks for the link.

Thanks for pointing at the typo, fixed it now.
