
Artificial intelligence doesn’t meet the test.

This is a crosspost of “Only What Is Alive Can Be Conscious” by Nathan Gardels, originally published in Noema Magazine on 28 January 2026. Relatedly, you may be interested in listening to Anil Seth on The 80,000 Hours Podcast.

Anil Seth has been named the winner of the 2025 English-language Berggruen essay prize. Well-known as a leading proponent of the materialist theory of consciousness, Seth is a British neuroscientist and professor of cognitive and computational neuroscience at the University of Sussex.

His essay, published in Noema, “The Mythology Of Conscious AI,” is a rigorous and compelling challenge to the notion that complex computation can give rise to consciousness, which Seth argues is inseparable from biological life.

He offers a contrasting perspective to an earlier Noema essay by Google’s Blaise Agüera y Arcas and James Manyika, who argue that “life is inherently computational.” While they do not claim that AI can achieve consciousness, they posit a path in that direction since they see organic and inorganic intelligence following the same set of rules for self-organizing development in order to reproduce, grow and heal.

In many ways, Seth’s argument refines and updates, for the age of AI, the case made by Nobel laureate Gerald Edelman.

As Edelman put it in a conversation back in 2004, well before the notion took hold that artificial neural networks and large language models could one day produce consciousness:

“The brain is embodied, and the body is embedded in its environment. That trio must operate in an integrated way. You can’t separate the activity and development of the brain from the environment or the body. There is a constant interplay between what is remembered and envisioned — an image — and what is actually happening in the senses.

“The brain can speak to itself and the conscious brain can use its discriminations to plan the future, narrate the past and develop a social self.

“The most important thing to understand is that the brain is ‘context-bound’. It is not a logical system like a computer that processes only programmed information; it does not produce preordained outcomes like a clock.”

For Edelman, consciousness arises through a “selectional repertoire” forged by manifold recursive interactions of the body’s biological apparatus with the environment. “There is no singular mapping to create the mind; there is, rather, an unforetold plurality of possibilities,” he once told me.

What is noise to logical computation is what accounts for variation in humans and the ability to innovate, write poems, compose music, paint masterpieces and feel moods.

The Power Of The Wrong Metaphor

Seth sees the propensity to bundle intelligence and consciousness together as a result of three “baked-in psychological biases.”

“The first is anthropocentrism. This is the tendency to see things through the lens of being human: to take the human example as definitional, rather than as one example of how different properties might come together.

The second is human exceptionalism: our unfortunate habit of putting the human species at the top of every pile, and sometimes in a different pile altogether (perhaps closer to angels and Gods than to other animals, as in the medieval Scala naturae). And the third is anthropomorphism. This is the tendency to project humanlike qualities onto nonhuman things based on what may be only superficial similarities.”

Once we get beyond the temptation of these mistaken metaphors, it is possible to demarcate more clearly where algorithm-driven intelligence processing through an inorganic substrate differs fundamentally from the biological symbiosis that has evolved into awesome efficiency over millennia.

Incomparable Wetware

“Inside a brain,” Seth writes, “there’s no sharp separation between ‘mindware’ and ‘wetware’ as there is between software and hardware in a computer. The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon.

Brain activity patterns evolve across multiple scales of space and time, ranging from large-scale cortical territories down to the fine-grained details of neurotransmitters and neural circuits, all deeply interwoven with a molecular storm of metabolic activity. Even a single neuron is a spectacularly complicated biological machine, busy maintaining its own integrity and regenerating the conditions and material basis for its own continued existence. (This process is called autopoiesis, from the Greek for ‘self-production.’ Autopoiesis is arguably a defining and distinctive characteristic of living systems.)

Unlike computers, even computers running neural network algorithms, brains are the kinds of things for which it is difficult, and likely impossible, to separate what they do from what they are.

Nor is there any good reason to expect such a clean separation. The sharp division between software and hardware in modern computers is imposed by human design. Biological evolution operates under different constraints and with different goals. From the perspective of evolution, there’s no obvious selection pressure for the kind of full separation that would allow the perfect interoperability between different brains as we enjoy between different computers. In fact, the opposite is likely true: Maintaining a sharp software/hardware division is energetically expensive, as is all too apparent these days in the vast energy budgets of modern server farms.”

Biological Time Vs. Computational Time

Seth also offers a fascinating insight into the difference between context-bound biological time and computational time.

In computational processing, he writes, “only sequence matters: A to B, 0 to 1. There could be a microsecond or a million years between any state transition, and it would still be the same algorithm, the same computation.

By contrast, for brains and for biological systems in general, time is physical, continuous and inescapable. Living systems must continuously resist the decay and disorder that lies along the trajectory to entropic sameness mandated by the inviolable second law of thermodynamics. This means that neurobiological activity is anchored in continuous time in ways that algorithms, by design, are not.

What’s more, many researchers — especially those in the phenomenological tradition — have long emphasized that conscious experience itself is richly dynamic and inherently temporal. It does not stutter from one state to another; it flows. Abstracting the brain into the arid sequence space of algorithms does justice neither to our biology nor to the phenomenology of the stream of consciousness.”
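Seth’s point that “only sequence matters” in computation can be made concrete with a toy state machine (a hypothetical sketch, not from the essay): inserting arbitrary wall-clock pauses between state transitions leaves the computed result unchanged, which is exactly the sense in which algorithms are abstracted away from physical time.

```python
import time

def run_automaton(state, steps, delay=0.0):
    """Advance a toy two-state machine. An optional pause between
    transitions has no effect on the computed result."""
    for _ in range(steps):
        state = 1 - state  # transition rule: 0 -> 1, 1 -> 0
        if delay:
            time.sleep(delay)  # physical time passes; the algorithm is unchanged
    return state

# The same computation, whether transitions are microseconds
# or (in principle) millennia apart.
fast = run_automaton(0, steps=5)
slow = run_automaton(0, steps=5, delay=0.01)
assert fast == slow
```

A brain, on Seth’s account, affords no such separation: the pace of metabolic and electrochemical activity is part of what the system is, not an implementation detail.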

In short, all of this points to the understanding that consciousness is rooted in the imperative of living organisms to hone perceptions of where and how they are situated in the world, and then select behavior that favors their biological survival. As the neuroscientist Antonio Damasio has similarly pointed out, the positive and negative feedback signals of success or failure in this endeavor to survive and flourish — “feelings” — are the origin of emotions that, in higher-order beings, evolve into culture.

As Seth summarizes the conclusions of his research:

“First, we have the glimmers of an explanatory connection between life and consciousness. Conscious experiences of emotion, mood and even the basal feeling of being alive all map neatly onto perceptual predictions involved in the control and regulation of bodily condition.

Second, the processes underpinning these perceptual predictions are deeply, and perhaps inextricably, rooted in our nature as biological systems, as self-regenerating storms of life resisting the pull of entropic sameness.

And third, all of this is non-computational, or at least non-algorithmic. The minimization of prediction error in real brains and real bodies is a continuous dynamical process that is likely inseparable from its material basis, rather than a meat-implemented algorithm existing in a pristine universe of symbol and sequence.”

The Breath Of Life

At the end of the day, Seth sees something essential at work: “We experience the world around us and ourselves within it — with, through and because of our living bodies. Perhaps it is life, rather than information processing, that breathes fire into the equations of experience.

If we conflate the richness of biological brains and human experience with the information-processing machinations of deepfake-boosted chatbots, or whatever the latest AI wizardry might be, we do our minds, brains and bodies a grave injustice. If we sell ourselves too cheaply to our machine creations, we overestimate them, and we underestimate ourselves.”

Perhaps what makes us us, Seth muses, “harks back to Ancient Greece and to the plains of India, where our innermost essence arises as an inchoate feeling of just being alive — more breath than thought and more meat than machine.”


Editor’s Note on the 2025 Berggruen Essay Prize for the Chinese language:

The 2025 Chinese-language Berggruen Essay prize was awarded to Xin Huang for “Language, Consciousness, and Computation: A Philosophical Analysis of the Token Concept in the Age of Intelligence,” and to Xiaoben Liu for “The First Paradigm of Consciousness Uploading: Mechanisms of Consciousness Evolution in the AI Axial Age and a Prospect toward Web4.”

Huang examines how experience and intention are translated into computable “tokens,” reframing the problem of consciousness in the age of intelligent systems and proposing new avenues for human-machine interaction. Liu advances a framework for consciousness evolution, arguing that language is the basic unit of consciousness and outlining a roadmap toward consciousness uploading, digital immortality and a future “Internet of Consciousness.”

These essays were published in Cuiling, Noema’s counterpart in China.

Comments

I quickly read this post and Anil Seth's essay and I don't see the part where they argue for the thesis. I see various statements about how human brains work and about how computers work, but I don't see how they connect the dots to "...and therefore computers can't be conscious."

For example, the articles make the claim that brains make no clear separation between hardware and software. Okay, that seems to be true. But so what? Why should I believe that a lack of hardware/software distinction is a necessary property for consciousness to arise?

I feel like I'm missing a lot of what they're trying to say, but I also feel like that's the authors' fault, not mine, because the pieces (especially Seth's original essay) are structured in a way that makes it really hard for me to identify the central arguments.

Thanks for the comment, Michael. I read the post and Seth's original essay, and listened to the episode of The 80,000 Hours Podcast with Seth. I would agree the title of the post is a bit of a misnomer. I think one may update towards a lower chance of digital systems being conscious as a result of Seth's arguments, but they are far from conclusive. I only know I am conscious right now (and I am very confident I was conscious moments ago). So I think a system which is more similar to me at a fundamental physical level should have a higher chance of being conscious. However, I have no idea about what this implies in terms of concrete probabilities of consciousness. As far as I can tell, the available evidence is compatible with frontier large language models (LLMs) having a probability of consciousness of 10^-6, but also 99.999 %.

I'm not convinced by Anil Seth's narrative about our biases in mind attribution.

I've been to his talk where he summarized these points. He talked about our inherent tendency to emotionally relate to entities that can use language. Later, he presented a picture of a transistor and a picture of a monkey and asked which seems more conscious on priors.

The prime mechanism by which humans decide whether an entity is valuable and conscious is empathy. We evolved to feel empathy - that is, modelling "what it is like to be them" - towards entities that have faces, limbs, fur and a squishy body. We feel a lot of empathy for pets and babies - entities that don't use language. And we feel zero empathy for the Chinese room.

The argument relies a lot on trying to depict computers as something rigid, cold and dead and life as something interesting, warm and energetic. This works well for our empathy module but does not convince me as a philosophical argument.


I'm curious whether there's any definition of the brain's processes as "non-algorithmic" that doesn't end up in Russellian monism (which I'm inclined to support but suspect Seth isn't). Aren't the laws of physics themselves an algorithm? I see autopoiesis as the most interesting connection between consciousness and life, but precisely when you find a clear conceptualization like this, it becomes unclear:

  • why it couldn't be implemented digitally - e.g. aren't LLMs autopoietic systems, where each token determines the next one?
  • what predictions it makes about the variation in human consciousness (in terms of modalities, intensity and reportability)? E.g. if consciousness is dependent on the degree of embodiment, does it predict Stephen Hawking had a low intensity of consciousness? Is the variance found in human consciousness better explained by the computational differences or differences in the mentioned random biological interactions?
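The "each token determines the next one" structure in the first bullet can be sketched as a minimal autoregressive loop (a toy stand-in for an LLM, with a made-up deterministic next-token rule, not a claim about any real model):

```python
def next_token(context):
    """A deterministic toy 'model': the next token is a fixed
    function of the tokens generated so far (a running sum mod 7)."""
    return sum(context) % 7

def generate(prompt, n_tokens):
    """Autoregressive loop: each output token is appended to the
    context and conditions every subsequent step."""
    tokens = list(prompt)
    for _ in range(n_tokens):
        tokens.append(next_token(tokens))
    return tokens

print(generate([1, 2], 4))  # -> [1, 2, 3, 6, 5, 3]
```

Whether such a feedback loop counts as autopoiesis in Seth's sense - a system regenerating its own material basis, not merely its own outputs - is precisely what the disagreement turns on.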

Disclaimer: I am not too well-versed in the philosophy here, so I could be saying dumb things; feel free to correct me.

From my computational physics experience I know that it is physically impossible to simulate the exact electrical properties of a system of a couple hundred atoms on a classical digital computer, due to a blowup in computational complexity. 

The laws of physics could be described as an algorithm, but the algorithm in question is on a level of complexity that is impossible for digital simulations to match. I think it's generally agreed that some degree of complexity is required for consciousness: it doesn't seem insane to say that that complexity might lie past what is digitally simulatable in practice. 
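The blowup mentioned above can be put in back-of-the-envelope terms (assuming exact state-vector simulation of n coupled two-level quantum systems, which requires storing 2**n complex amplitudes; the function name and constants are illustrative):

```python
def statevector_bytes(n, bytes_per_amplitude=16):
    """Memory for an exact state vector of n two-level systems:
    2**n complex amplitudes at 16 bytes each (two 64-bit floats)."""
    return (2 ** n) * bytes_per_amplitude

# 30 two-level systems already need ~17 GB; the electronic degrees
# of freedom of a few hundred atoms are far beyond any classical machine.
for n in (10, 30, 300):
    print(n, statevector_bytes(n))
```

This is why exact simulation is intractable in practice; whether the coarse-grained approximations that are tractable preserve whatever matters for consciousness is the open question.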

The question of digital consciousness seems to depend on whether simulated abstracted approximations to the physical process of thinking are close enough to produce the same effect. 

Asking whether a process is "close enough [to the brain] to produce the same effect" implicitly begs the question - i.e. assumes consciousness is biological.

P-zombies who wouldn't describe their sensations in terms like "qualia" would likely have evolutionary fitness equal to that of humans. I don't know if they're possible, but I think this demonstrates evolution wasn't optimizing for consciousness. Therefore, we shouldn't ask "is such a system sufficiently close to the brain" but "is it sufficiently close to the processes that happen to make the brain (phenomenally) conscious".

In general, there isn't agreement about any correlate of consciousness within philosophy of mind - there are well-regarded thinkers who claim it's not real (Frankish) or that it's the basic substance of the universe (Goff). I think it's possible consciousness is similar to, say, intelligence or humor, which means you need a complex system to meaningfully implement it. However, I think it's unlikely that "complexity itself" is what gives rise to consciousness; e.g. sunspots are very complex (~unpredictable interaction of many elements).

Hi Daniel and titotal. Thanks for the discussion.

I only know I am conscious right now (and I am very confident I was conscious moments ago). So I think a system which is more similar to me at a fundamental physical level should have a higher chance of being conscious. I have no idea about what this implies in terms of concrete probabilities of consciousness. As far as I can tell, the available evidence is compatible with frontier large language models (LLMs) having a probability of consciousness of 10^-6, but also 99.999 %.

As a side note, I would take for granted that all animals and digital systems are sentient, and focus on assessing the distribution of the intensity of subjective experiences. I think asking about the probability of sentience of an animal or digital system shares some of the issues of asking about the probability that an object is hot. People have different concepts about what "hot" means, and they do not depend just on temperature (for example, the minimum temperature for hot wood is higher than that for hot metal because metal transfers heat more efficiently). I understand sentience as having subjective experiences whose intensity is not exactly 0. However, I suspect some people understand it as having subjective experiences which are sufficiently intense. Different bars for this will lead to different probabilities. Asking about the distribution of the intensity of subjective experiences mitigates this. For example, one could ask about the probability of the mean intensity of what an LLM experienced writing a message exceeding the mean intensity of human experiences. It still seems super hard to get numbers for this, but what they refer to may be more concrete than a vague concept like sentience.

I do not see how philosophical zombies (p-zombies) could be physically possible. If they were just like humans at a fundamental physical level, they would in fact be humans. So they would be as conscious as humans, which I assume are conscious (because I am a conscious human right now, and other humans do not seem relevantly different).

I endorse the temperature approach. I'm not sure illusionists would accept the question "What's the % probability that an entity is conscious?" as meaningful but maybe a similar question could indeed be universally accepted, like "Compared to your pain intensity 1 (being poked by a needle), what's your central estimate for the intensity of suffering experienced in scenario X?"

Just to clarify, my argument didn't concern classical p-zombies but what I call "honest p-zombies" - intelligent humanoid entities capable of metacognition but without any intuition similar to our phenomenal intuitions.
