The idea that we can replace carbon-based neurons with functionally equivalent artificial neurons (made of, say, silicon) is at the heart of many discussions about consciousness. For me, this used to be a load-bearing argument for caring about digital consciousness, and I think it still is for many people, including at Anthropic. However, it doesn’t withstand closer scrutiny, so I think it’s time to let go of this dearly beloved intuition.
This post was inspired by the video “Consciousness Isn't Substrate-Neutral: From Dancing Qualia & Epiphenomena to Topology & Accelerators” by Andrés Gómez-Emilsson, but it’s written in my personal capacity.
Cartoon neurons
Anesthesiologist Stuart Hameroff likes to speak of “cartoon neurons”—the overly simplified abstraction of neurons as switches that take in some inputs and fire (or not) depending on some activation function.
He calls this abstraction “an insult to neurons”:
Why are they an insult? Well, if you think, they're cells. Neurons are cells, right? And a single cell paramecium can swim, learn, avoid predators, find food, mates and have sex.
I think intra-neuron sex has not yet been documented, but the point still stands. As I wrote in a previous post:
Single-cell organisms are thought to be capable of learning, integrating spatial information and adapting their behavior accordingly, deciding whether to approach or escape external signals, storing and processing information, and exhibiting predator-prey behaviors, among others. Attempts to use artificial neural networks have been shown to be “inefficient at modelling the single-celled ciliate’s avoidance reactions.” (Trinh, Wayland, Prabakara, 2019). “Given these facts, simple ‘summation-and-threshold’ models of neurons that treat cells as mere carriers of adjustable synapses vastly underestimate the processing power of single neurons, and thus of the brain itself.” (Tecumseh Fitch, 2021)
Other processes happening at the neuronal level include dendritic computation, cytoskeletal dynamics, ephaptic coupling, oscillatory behaviors, possible quantum effects, ultraweak photon emissions, and many more.[1]
But does it matter that neurons are actually more complex than the cartoon model suggests? For instance, one could argue that:
1. All the additional complexity is not relevant, so it’s fine to ignore it; or
2. With sufficiently advanced technology, time, and dedication, we should be able to make artificial neurons (or simulations thereof) with an arbitrary level of detail.
I don’t think either of these arguments holds up.
Abstracting away complexity
Regarding (1):
Given the neuronal processes listed above, the burden of proof should be on whoever argues that all of them can be ignored. For example, ephaptic coupling shows us that the electric field in the brain plays a causal role, and this type of interaction is not modeled in neural networks.
One could respond that such field interactions could, in principle, be modeled within existing neural network frameworks (e.g. through careful parameterization or new connection topologies). However, even if it were possible to reproduce the input/output behavior correctly (which I doubt, for reasons outlined later), also getting the runtime right adds an entirely new dimension of computational requirements, one that I believe makes “functional equivalence” impossible. I made this point in a previous post, but the basic idea is that the physical substrate used for computation (e.g. the electromagnetic field) determines how quickly and efficiently computation happens, in some cases allowing for near-instantaneous information processing, which matters for the web of causality. Imagine a self-driving car that can correctly recognize pedestrians but does so in seconds instead of milliseconds. This, I believe, goes all the way down.
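To make the latency point concrete, here is a minimal, purely illustrative sketch (the detector names, timings, and deadline are hypothetical, not the author’s): two detectors compute exactly the same input/output mapping, yet only one of them satisfies the real-time constraint that the surrounding causal web imposes.

```python
import time

# Purely illustrative; detector names, frames, and timings are hypothetical.
def fast_detector(frame):
    time.sleep(0.01)                  # responds in ~10 ms
    return "pedestrian" in frame

def slow_detector(frame):
    time.sleep(2.0)                   # same answer, but ~2 s later
    return "pedestrian" in frame

def brakes_in_time(detector, frame, deadline_s=0.1):
    """The car must decide within `deadline_s`: a correct-but-late answer
    still fails the system-level (causal) requirement."""
    start = time.monotonic()
    decision = detector(frame)
    elapsed = time.monotonic() - start
    return decision and elapsed <= deadline_s

frame = "road scene with a pedestrian crossing"
print(brakes_in_time(fast_detector, frame))   # True: correct and on time
print(brakes_in_time(slow_detector, frame))   # False: correct but too late
```

Identical input/output mappings, different causal roles: that is the sense in which “functional equivalence” cannot ignore the substrate’s speed.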
The presence of electromagnetic phenomena in the brain (which can allow for near-instantaneous information processing) should be enough to make you at the very least highly skeptical of the notion of functionally equivalent artificial neurons. But does it get more complicated? For example, are quantum effects also present in the brain?
In a sense, obviously yes:
I don’t think it’s even necessary to debate whether quantum phenomena manifest somehow at the macro level of the brain (whatever that even means), e.g. as suggested by Penrose but disputed by Tegmark. This is because quantum mechanics underlies all brain phenomena, so it necessarily partakes in the causal chain. Ignoring this when discussing functional equivalence is question-begging. And the debate over whether quantum phenomena manifest at the macro level is far from settled.[2] In fact, one could even argue that we already have beautiful proof of such macro-level effects from the field of anesthetics: Xenon isotopes with nonzero nuclear spin are much less effective anesthetics than isotopes without nuclear spin. And spin is, of course, a quintessential quantum property.[3][4]
A strong intuition behind wanting to ignore the added complexity is that neural networks (based on the cartoon model) are universal function approximators, and they seem to be working exceedingly well in machine learning. But just because a neural network is a universal function approximator doesn’t automatically mean that it is an appropriate substrate for consciousness. These are completely different problems. A priori, there should be no reason to abstract away all the complexity (and, in fact, I suspect some of the things LLMs struggle with, such as visual reasoning, are precisely the result of ignoring that extra complexity).[5]
Simulating the brain
Chalmers’ “dancing qualia” thought experiment (which relies on the notion of functionally equivalent artificial neurons) is often used to argue that consciousness is substrate-independent, which in turn implies that sufficiently detailed brain simulations could be conscious. Many people working on AI consciousness find this view compelling. But what counts as “sufficiently detailed”? What’s the appropriate level of abstraction at which we can ignore additional, underlying complexity?
Consider a typical fluid dynamics simulation.
These sorts of fluid simulations can be extremely compute-intensive but, if done properly, can model the underlying physical phenomena (flows, turbulence, etc.) really well. And, importantly, it’s totally fine to abstract away a lot of complexity! For example, many fluid models don’t require simulating the individual atoms, let alone the subatomic particles within those atoms. Can’t we do something similar with the brain?
Well, yes and no. Such a simulation, while useful, will have (by design) a narrow domain of validity, dictated by the underlying mathematical model and the granularity of the simulation. Introduce additional effects or constraints and the simulation stops being adequate. For instance, this kind of simulation won’t tell you what happens to the fluid at super high temperatures, or when it interacts with certain chemicals, or when you shoot a beam of electrons through it. You won’t get the full picture unless you simulate the system at the deepest level.
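For concreteness, here is a minimal sketch of the kind of coarse-grained model being discussed, a 1D heat-diffusion solver (all parameter values are hypothetical): the continuum equations never mention individual atoms, which is exactly why the model works well inside its domain of validity and says nothing outside it.

```python
import numpy as np

# Minimal sketch of a coarse-grained continuum model: 1D heat diffusion via
# explicit finite differences. Parameter values are hypothetical.
# Note what is *not* here: no molecules, no chemistry, no electron beams.
nx, dx, dt, alpha = 100, 0.01, 1e-5, 0.1
assert alpha * dt / dx**2 <= 0.5          # stability condition of this scheme

u = np.zeros(nx)
u[nx // 2] = 1.0                          # a hot spot in the middle

for _ in range(1000):
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# The hot spot spreads out plausibly, but only within the model's domain of
# validity; push the system outside those assumptions and the model is silent.
print(u.max())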
Sufficiently detailed replicas/simulations
Regarding (2) (i.e., that we should be able to make artificial neurons or simulations thereof with an arbitrary level of detail with sufficiently advanced technology, time, and dedication):
One could argue that, within a simulation, all that matters is the relative timing of events rather than absolute speed. So even if certain physical substrates can accelerate computation, we could just take our time until we get the relative timing of all the events right. However, past a certain level of detail, given that some physical fields propagate at the speed of light and interact with each other in such intricate ways, you’ll eventually reach a point where you’d have to wait longer than the age of the universe to simulate even certain simple systems. If your argument requires being granted more time than the age of the universe, I think you should reconsider your position.
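To get a feel for the timescales, here is a rough back-of-envelope calculation (the exascale-computer and one-operation-per-step assumptions are mine, purely for illustration): even tracking a single degree of freedom at Planck-time resolution, one simulated second already costs tens of millions of times the age of the universe.

```python
# Rough back-of-envelope arithmetic, purely illustrative; the "one operation
# per time step" and exascale-computer assumptions are not the author's.
PLANCK_TIME_S = 5.39e-44        # seconds per Planck-time step
EXASCALE_OPS_PER_S = 1e18       # roughly an exascale supercomputer
AGE_OF_UNIVERSE_S = 4.35e17     # ~13.8 billion years in seconds

steps = 1.0 / PLANCK_TIME_S                 # steps to cover one simulated second
wall_clock_s = steps / EXASCALE_OPS_PER_S   # even at just one op per step
print(f"steps per simulated second: {steps:.2e}")
print(f"wall-clock time: {wall_clock_s:.2e} s, "
      f"or ~{wall_clock_s / AGE_OF_UNIVERSE_S:.0e} times the age of the universe")
```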
The final attempt to save this line of thinking is to claim that reality is already binary at the deepest level, and that maybe one day we’ll find a way to use that deepest substrate for computation. Andrés has already addressed this point eloquently, so I’ll just paraphrase.
Here is how the argument usually goes: “OK, sure, assume that you need quantum mechanics or topological fields to explain the behavior of a neural network in the brain. Why can't we just simulate that? Fundamentally, reality is already binary, made of tiny Planck-length ones and zeros in this massive network of causality, and that is deep reality. And so if that deep reality is capable of simulating our brains at an emergent level, and that creates our consciousness at an emergent level, then clearly some kind of fundamental non-binary unity at the lowest levels of reality is not necessary, right?” This is question-begging: it is something you're assuming, and it is open to interpretation. Actually, physics—the facts—do not imply that. Quite the contrary. For example, string theory postulates that the building blocks of reality are topological in nature, meaning that topological computing could be at the very base layer of reality.[6] And in that sense, no, reality would not be made of points interacting with each other at the Planck length.
Takeaways
In summary: The physical substrate used for computation matters for speed of computation. The brain’s substrate is no exception. Arguments that brush away this fact end up being question-begging. If you want to reproduce the full causal behavior of, say, a laser beam, the only way to do so is by using an actual laser beam.[7]
The position I’m trying to defend here is often portrayed very uncharitably: “Some people believe consciousness requires biological neurons!” The pejorative term “carbon chauvinist” further cements this narrative.[8] But I think such use of language oversimplifies the complexity of our physical reality.[9]
Rejecting substrate independence strongly challenges consciousness claims regarding:
I hope the ideas in this post make you at least question your assumptions about the validity of computational functionalism as an approach to consciousness. If not, then perhaps the binding / boundary / integrated information problem or the slicing problem will. To me, the evidence and arguments against computational functionalism are just overwhelming.
So what’s the alternative? I currently think it involves endorsing the notion that the fundamental fields of physics are fields of qualia. While this is an unintuitive idea with other unintuitive implications, such an ontology is, I think, much better suited to tackle the problems that a successful theory of consciousness must solve.[10]
- ^
Claude adds: Synaptic plasticity, retrograde signaling, gene expression regulation, protein synthesis at synapses, astrocyte interactions, extracellular matrix interactions, molecular memory, metabolic signaling, and intrinsic excitability changes.
- ^
I personally give some credence to the idea that quantum entanglement may be the underlying mechanism behind phenomenal binding, as argued by Atai Barkai.
- ^
See also the talk “Spintronics in Neuroscience” by Luca Turin at The Science of Consciousness conference 2023.
- ^
Even more trivially, neurotransmitters are just molecules, so their behavior is at least in part dictated by the laws of quantum mechanics.
- ^
For example, the brain might be using the electromagnetic field for nonlinear wave computing. LLMs might still be able to brute force visual reasoning to some extent as they become more advanced, but they still won’t be conscious.
- ^
String theory may turn out not to be correct, but the argument doesn’t depend on string theory in particular. There are other physics ontologies that challenge the “it from bit” assumption.
- ^
I was tempted to include a midwit meme here, with the left and right extremes showing the text “A simulation of a thing is not the thing itself,” but I don’t endorse this form of antagonism.
- ^
Personally, I much prefer “non-materialist physicalist”.
- ^
It’s kind of like saying “I don’t need a laser pointer! I can point at things just fine with this stick.” But maybe “ability to point at things” was never the point to begin with (and even if it was, using a laser would let you point at many more things and much faster).
I concluded around the age of 20 that something more than pure physicalism was needed in order to account for consciousness. I don't remember all the details, but I think even then I was struck by the idea that the conscious self has a kind of holistic unity that assemblages of particles don't possess. One of my first ideas was a property dualism in which the physical substrate would be knots of electromagnetic or gravitational flux, related systematically by some psychophysical law to the intentional states which I saw as being the real "substance" of consciousness.
I mention this to convey my sympathy for Andrés's idea that nontrivial topological structures provide the physical substrate of consciousness - that was my first idea too.
Years later, I had learned quantum mechanics and wondered if quantum entanglement could provide the complex ontological unities that consciousness seems to require. I worked for Stuart Hameroff for a year, and got to know his and Penrose's ideas quite well. The problem with entanglement is that it potentially gives you too much unity - you need an ontology in which the parts of the conscious self are objectively tied together, but are also objectively disjoint from other selves. In principle, you can have that in a quantum theory, but it implies a dynamics or an ontology that is a little unusual, from the perspective of the usual ontological options.
In the end I took up the study of truly fundamental physics (quantum field theory, string theory), because that seemed like the surest path to the correct quantum ontology, and it looked like I would need that for the correct ontology of mind. Also by that time, AI had advanced far enough that I wanted to know the correct ontology of mind, not just from a desire to know the truth, but because it would be needed for AI alignment. An AI might have the right values but the wrong ontology of personhood.
What would I say these days? First of all, the nature of the structures that hypothetically bridge fundamental physics and conscious states is still wide open, because the mathematics of fundamental physics is still wide open. Topological objects, Hilbert space objects, they are definitely contenders to be involved, but so are many other kinds of structure. One really needs to look for a convergence between the mathematical ontology of fundamental physics, the phenomenological ontology of consciousness, and the biology and biophysics of the brain.
Of these, I think the second is somewhat neglected by scientifically minded philosophers. Thanks to David Chalmers, qualia are taken seriously, but they seem inhibited about going beyond that, e.g. to talking of the self as something real. I suppose that sounds too much like a soul-substance; and also their habits of thought reduce everything to particles or to bits in atomistic interaction. Simple qualia, like points of color, are OK from this perspective, but larger wholes or Gestalts or complex unities run against their reductionist instincts. (Philosophical schools that explore consciousness without a materialistic prior, like Husserl's transcendental phenomenology, are much less inhibited about noticing complex ontology of mind, and taking it seriously.)
On the other hand, many people think they can get ontological wholes through systems or bound structures, made of parts that interact persistently. To give a contemporary example, many people in the schools of thought associated with Michael Levin and Karl Friston seem to think this way. Given the absence of clear evidence of e.g. quantum biology playing a role in consciousness (more on this in a moment), a critique of the systems approach to consciousness would be useful for people like Andrés and myself, who want to argue strongly for a "substance" theory of mind (in which fundamental substrate matters), rather than an "information" theory of mind (in which consciousness has to supervene on coarse-grained state machines).
For me, the core arguments against substrate-indifferent, information-based theories of consciousness revolve around vagueness. There is a kind of ontological exactness that states of consciousness must possess, whereas coarse-grained informational or computational states inherently have some vagueness from a microphysical perspective (or else must be made exact in arbitrary ways). But there are a number of challenges to this argument - aren't states of mind vague too? doesn't functional output provide an exact criterion for categorizing a state? - which require precision to be countered. Perhaps it's a shame that I never set out this argument as forcefully and successfully as Chalmers made the case for the importance of the "hard problem".
This is relevant to the present article, in regards to the hidden complexities of the neuronal state, which make biological neurons so much more complicated than their virtual artificial counterparts. If we're talking about replacing neurons in a living organism with digital emulators, then all those processes that take place alongside the action potential may be pragmatically relevant - they may also need to be represented in your emulator - but they do not actually challenge the computational theory of mind. They only require that your simulation is a little more fine-grained than we used to believe necessary.
In any case, at least for quantum theories of mind to become widely convincing, there needs to be some evidence that quantum biology is playing a role in conscious cognition, evidence which I believe is still quite lacking. Hameroff's microtubules are still by far the best candidate I have, for a biological locus of persistent coherent quantum states, but it seems difficult to get decisive evidence of coherence. The length of the debate about whether quantum coherence occurs in photosynthesis shows how difficult it can be.
Thanks so much for your thoughtful and detailed comment, Mitchell! It seems like we're roughly on the same page regarding the various constraints that a successful theory of consciousness should meet, as well as the class of approaches that seem most promising. Let me just share some immediate reactions I had while reading your comment. :)
Potentially, yes (though my understanding of entanglement is limited). On the other hand, as Atai has pointed out, "most binding-appreciators strongly, strongly underestimate just how 'insane' it is that we can have any candidate solution to the binding problem *at all* [entanglement] in a universe that remotely resembles the universe described by classical physics." (Here's his full writeup, which I find very compelling.) This makes me think that maybe we will find that entanglement gives us just the right amount of unity (though the specific mechanism might turn out to be pretty elaborate). Do you have any resources on the point about "too much unity"? I'd love to learn more.
Agree, and this is part of what motivates the argument outlined in the last paragraph of the section "Sufficiently detailed replicas/simulations" above.
Same for me. The paper "Are algorithms always arbitrary?" makes this case nicely.
Maybe, yeah, depending on how we define a state of mind. But as you pointed out, "there is a kind of ontological exactness that states of consciousness must possess," which I also agree with—namely, that at least some moments of experience seem to exhibit some amount of fundamentally integrated information / binding. So if an ontology can't accommodate that, it's doomed. I believe that's the case for information-based theories, since any unity is interpreted by us arbitrarily, i.e. it's epiphenomenal.
I think "a little more" is doing a lot of work here. If consciousness is a thing/substrate, then any emulation that abstracts away finer levels of granularity will, by definition, not be that substrate, and therefore not be conscious (unless maybe one commits to the claim that the deepest layer of reality is binary/bits, as pointed out above).
I confess I still don't fully understand why we need to definitively prove that coherence has to be sustained. QM plays a causal role in the brain because it plays a causal role in everything, as I was hoping to convey with my xenon example. But I'll keep thinking!
I'll add another candidate for quantum biology into the mix: the Posner molecule (also mentioned by Atai here).
Thanks again! :)
You might think it is important that the facts about consciousness contribute to our beliefs about them in some way. Our beliefs about consciousness are surely a phenomenon of the macro level. So if our beliefs are somehow sensitive to the facts, and the facts consist of quantum effects, we should expect those quantum effects to generate some macroscopic changes.
This is the sticking point for me with quantum theories: there doesn't seem to be any obvious mechanism for general quantum level truths to exert the kinds of very targeted influences that would be necessary for them to explain our beliefs about consciousness. And if they don't, then it seems like we're left with beliefs insensitive to the truth, and that is deeply unintuitive. What do you think?
Thanks for reading and for your comment, Derek!
I think it will turn out that the mechanism will not be obvious, mainly because quantum mechanics and fundamental physics more broadly are extraordinarily complex (and I expect that understanding consciousness will be just as difficult as understanding, say, quantum field theory). But, that being said, I do think there exist candidate quantum mechanisms that might explain the macro-level phenomenon of binding, such as entanglement.
Another assumption behind my position (which I also outlined in Indirect realism illustrated (and why it matters so much for consciousness debates)) is that, since I believe consciousness/qualia are real (and a thing, not a process), the only sense in which they can be really real is for their 3rd-person physical correlates to be found at the deepest level of reality/physics. Any correlates that are not at the deepest level—however elaborate—are just useful fictions, and thus (IMO) no different than what e.g. computational functionalists claim.
Hope that makes my views a bit clearer.
I really don't see any issue with abstracting away complexity. My view is strictly binary: either there is something about the biological substrate which sustains consciousness in a way silicon cannot, or a silicon-based system can, in theory, simulate consciousness. It seems to be "relatively minor" to me whether silicon-based consciousness would be practical or not in terms of resources required, e.g. needing more silicon than can be found on earth in order to simulate a human mind. I say this in the sense that I'd love to know whether it's possible, before worrying about how practical it is. Suppose it really takes prohibitively vast computers to simulate consciousness, in which case there might be ways to combine the use of silicon and biological substrates that solve the material resources problem.
But you don't have data on that, I don't have data on that, and neither of us knows whether it's even meaningful to debate complexity. If anything, I would be tempted to interpret debate about complexity as implying weak faith in rebuttals of the substrate argument.
Executive summary: In a personal reflection inspired by recent critiques, the author argues that the widely held belief in "functionally equivalent artificial neurons"—central to digital consciousness discussions—is untenable, because real neurons' complexity and substrate-specific phenomena (like electromagnetic and quantum effects) cannot be abstracted away without losing essential causal properties critical to consciousness.
This comment was auto-generated by the EA Forum Team.