I recently came across this video clip where Michael Pollan argues that artificial systems cannot be conscious. His argument touches on several themes relevant to this post—specifically the conditions for the origin of sentience—but I believe it rests on a fundamental logical error.
Pollan’s core claim is that because feelings originate in the brainstem (a point that is scientifically sound) and are tied to biological vulnerability, they are inherently biological and cannot arise in artificial systems. His logic follows this structure:
This reasoning confuses evolutionary origin with functional requirement. Evolutionary history explains how a trait first appeared given the constraints of carbon-based nervous systems; it does not dictate the physical substrates capable of implementing that functional organization.
The "Feather Analogy" illustrates the flaw perfectly:
Clearly, the conclusion is false. If the brainstem's "feelings" are essentially the integration, valuation, and prioritization of internal states—all of which are computational processes—then the relevant question is whether that functional architecture can be implemented in non-biological substrates.
Pollan’s second argument—that sentience begins with "feelings" rather than "thoughts"—is again backed by solid science. However, he then lapses into what I can only describe as a "word salad" regarding biological vulnerability. He claims feelings "have no weight" and require a mortal, sensible body. This ignores a crucial neurological fact: affective states do not require peripheral sensory input. For example:
These examples suggest that "feeling" is a representational state within a processing system. Crucially, we should not assume that affective states emerge only when there is a functional need for self-monitoring or goal-valuation. Instead, it is highly plausible that valence and sentience are emergent properties of the information-processing itself.
If a system architecture reaches a certain level of complexity and integration, the resulting "feelings" are ontologically real. To dismiss these states as "less real" because they lack a biological anchor or a "vulnerable body" is a category error; the reality of the experience is a property of the system's internal organization, not its hardware’s chemistry.
Bottom line: Pollan mistakes the "wetware" of our specific evolutionary path for the universal requirements of consciousness. From a welfare perspective, the possibility of sentience in digital minds remains a robust—and high-stakes—concern.
Thanks for organizing this debate week — it looks very valuable.
We’d like to suggest a piece we recently wrote that may be relevant to the discussion: AI Can Help Animal Advocacy More Than It Can Help Industrial Farming. It examines how the structural limits of intensive animal farming may constrain the gains AI can bring to that sector, while AI could be far more transformative for animal welfare through advocacy, transparency, welfare measurement, and alternatives.
While the piece focuses on near-term AI rather than AGI directly, we think it complements the debate well, especially around questions about whether AI might prolong factory farming or instead help expose and reduce animal suffering.
If helpful, we’d be glad for it to be considered for the reading list.
In our upcoming post, we introduce human-anchored reference categories (Annoying(h), Hurtful(h), Disabling(h), Excruciating(h)) to provide a pragmatic shared coordinate system for cross-species discussion. So if one wants to talk about “acuity/resolution between A and B,” it’s reasonable to treat A and B as positions (or intervals) on that human-anchored scale.
But no — we’re not defining acuity as #levels/(B−A), because that requires meaningful distances between A and B. At this stage the (h) scale is best treated as ordinal: it supports “higher/lower ceiling” comparisons, not subtraction or ratios.
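To make the ordinal-only commitment concrete, here is a minimal sketch (the function names and the list-based encoding are illustrative, not part of the framework itself): order comparisons between the (h) categories are well-defined, while distances and ratios are deliberately left undefined.

```python
# Illustrative sketch: the human-anchored (h) categories as an ordinal scale.
# Comparisons of position are meaningful; subtraction and ratios are not.
H_SCALE = ["Annoying(h)", "Hurtful(h)", "Disabling(h)", "Excruciating(h)"]

def rank(level: str) -> int:
    """Position of a category on the ordinal (h) scale."""
    return H_SCALE.index(level)

def higher_ceiling(a: str, b: str) -> bool:
    """Ordinal comparison: does category a sit above category b?"""
    return rank(a) > rank(b)

def distance(a: str, b: str) -> float:
    """Deliberately undefined: an ordinal scale has no meaningful distances."""
    raise TypeError("ordinal scale: distances between categories are not defined")
```

So `higher_ceiling("Disabling(h)", "Hurtful(h)")` is a legitimate question on this scale, whereas anything of the form `distance(A, B)` is not, which is exactly why acuity cannot be defined as #levels/(B−A) at this stage.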
Thanks, Vasco — I see where the confusion is coming from.
The difficulty is that in our framework, resolution is not defined as a mathematical density of bins over a fixed, external intensity axis (e.g., dN/dI). That framing assumes there is already a continuous welfare scale “out there,” and resolution simply tells us how finely the organism partitions it.
In our usage, resolution refers to the organism’s internal discriminative granularity — how finely differences in affective magnitude can be distinguished and behaviorally prioritized within whatever range the organism has.
So resolution is not the derivative of category-count with respect to an external intensity variable. Rather, it is a property of the encoding architecture itself.
That’s why it is orthogonal to range. A system may:
• Have a narrow range but very fine discriminative structure within that range.
• Have a wide range but coarse internal discrimination.
• Increase both independently.
Your car analogy is actually helpful. A car can move very fast over a short distance — speed is not determined by total range. Likewise, high resolution does not require a wide affective range, and vice versa.
So resolution is better understood as internal discriminative power, not as bin density over a pre-specified global welfare scale.
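One way to see the orthogonality numerically (purely as an illustration, not as the framework's definition of resolution) is to count just-noticeable-difference steps under an assumed Weber-law encoding, where each discriminable step is a fixed proportional increment. Range and discriminative fineness then enter as independent parameters:

```python
import math

def discriminable_steps(i_min: float, i_max: float, weber_k: float) -> int:
    """Count of just-noticeable-difference steps within a system's own
    range [i_min, i_max], assuming Weber-law scaling: each step is a
    proportional increment of (1 + weber_k). Illustrative only."""
    return int(math.log(i_max / i_min) / math.log(1.0 + weber_k))

# Narrow range but fine discrimination (small Weber fraction):
narrow_fine = discriminable_steps(1.0, 10.0, 0.02)    # -> 116 steps
# Wide range but coarse discrimination (large Weber fraction):
wide_coarse = discriminable_steps(1.0, 1000.0, 0.5)   # -> 17 steps
```

Under these assumed parameters, the narrow-range system resolves far more internal gradations than the wide-range one, which is the sense in which resolution is a property of the encoding architecture rather than of the range it spans.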
Thanks. In our framework, resolution is not simply (i) the number of distinct welfare intensities, nor (ii) a strict ratio relative to the total range. It refers to the functional granularity with which differences in intensity can be discriminated and behaviorally prioritized.
The key point we raise in the post is that resolution is orthogonal to range: a system can evolve high resolution while maintaining a modest range, expand its range while keeping coarse resolution, or increase both simultaneously.
Matthew — I appreciate the engagement, but I don’t think your critique engages with the core argument.
On “energy vs energy services”: of course what ultimately matters are services like industrial heat, refrigeration, sterilization, and computation. The post does not deny that. The point is that scaling a replacement for industrial animal agriculture requires large quantities of reliable energy services. Efficiency reduces intensity per unit, but it does not eliminate capacity constraints when the objective is a global industrial transition. If you believe efficiency alone removes energy as a binding factor, it would help to specify which services and what magnitude of gains you expect — and on what timeline.
On primary energy efficiency: that varies by pathway. Many plant-based alternatives appear already competitive or favorable on energy efficiency compared to conventional animal products. Other approaches — especially cultivated meat and some fermentation systems — are widely described in published assessments as energy-uncertain and potentially energy-intensive depending on process assumptions, scale, and energy source. In several analyses, energy demand and energy mix emerge as key drivers of environmental performance and cost. If you have a source showing that cell-cultured meat is already more primary-energy efficient than conventional meat under commercially realistic conditions, I’d welcome it.
More importantly, per-kg primary energy is only one variable in scaling. Cost, reliability, capital intensity, throughput, and integration into existing industrial systems all shape whether alternatives can replace tens of billions of animals per year. Even if energy intensity is favorable on paper, energy price, grid reliability, or institutional bottlenecks can still affect siting decisions and scaling speed. If you think energy availability is not meaningfully binding in practice, pointing to industrial evidence would make that case stronger.
You also mention unsupported claims about advocate intentions and scaling drivers. I’m open to correction — but that requires specificity. Which claims are incorrect? What do you see as the dominant constraints on replacing factory farming at scale?
The practical claim here is not that all substitutes are more energy-intensive than animal products. It is that for at least some of the most ambitious and scalable replacement pathways—particularly cultivated meat and certain fermentation systems—energy availability, energy price, and reliability are likely to be decisive constraints as we move from pilot facilities to global production.
If energy becomes expensive, unreliable, or institutionally constrained, the path from promising prototype to mass adoption slows. And delays are paid for in animal suffering.
Hi Vasco — thanks for inviting me to comment on your post. I think we’ve already clarified this in an earlier exchange and found that we’re working from genuinely different aggregation frameworks, and your nematode vs. torture example makes that divergence especially explicit. Since we’d essentially reached an “agree to disagree” already, I’ll leave it here rather than reopening a long back-and-forth.
Happy to revisit once we have a better empirical handle on ceilings / affective capacity.
Great post, Aaron — I completely agree with your framing of why the lab-to-farm leap feels overdue. Most published welfare research is tied up in universities and controlled settings, which are not only expensive and slow but often miss how animals actually experience their environments on commercial operations.
I love your emphasis on starting with engaged farmers — that is a low-friction entry point, especially because so much welfare-relevant data is already being collected in everyday farm management but never shared or analyzed. If even a handful of farmers were willing to share anonymized data, we could begin to extrapolate welfare trends across operations and turn those insights into sector-wide benchmarks that signal progress in a measurable way.
Crucially, this model can offer upside for participating companies and farms: a feedback loop with welfare experts that helps them refine practices, capture wins, and communicate them externally — for example, “Our data-driven changes reduced mortality (or time in intense pain) by 15% — here’s how.” That kind of concrete progress, grounded in real farm data, can add a powerful real-world layer to academic research and help translate existing knowledge into practice at scale.
Good catch, Jim — and thanks for flagging the terminology. This field is already complicated enough that we really don’t need parallel vocabularies for the same underlying idea. One of the reasons we post publicly is exactly to get this kind of “conceptual linting” from the community.
I think Birch et al.’s acuity is basically what we mean by resolution: sensitivity to small differences — the ability to discriminate fine gradations (what psychophysics would call “just-noticeable differences”). Where we’re being careful is in separating that from range, which refers to the maximum intensity an organism can plausibly access.
So in short: acuity ≈ resolution in our usage; it’s distinct from range.
Thanks Vasco — that’s a reasonable concern, but I think it assumes a stronger claim than the framework is actually making.
We are not attempting to define a finely resolved ratio scale covering the entire possible range of pain intensities across taxa. The four intensities are intended as coarse phenomenological anchors, chosen as a practical balance between resolution and scientific tractability.
As we explain in an earlier paper:
So the goal is not to cover the entire theoretical intensity range with fine granularity, but to provide a small number of biologically interpretable categories that can be applied with reasonable consistency. Adding many more levels would only be an improvement if they could be assigned reliably; otherwise it would risk creating false precision.
And importantly, the framework is not committed to four categories as a final solution. If future work supports a better-validated scale with additional intermediate levels, those could be incorporated without difficulty. For now, four levels seem to provide a workable and defensible balance between usability and epistemic caution.