Thanks, Vasco — I see where the confusion is coming from.
The difficulty is that in our framework, resolution is not defined as a mathematical density of bins over a fixed, external intensity axis (e.g., dN/dI). That framing assumes there is already a continuous welfare scale “out there,” and resolution simply tells us how finely the organism partitions it.
In our usage, resolution refers to the organism’s internal discriminative granularity — how finely differences in affective magnitude can be distinguished and behaviorally prioritized within whatever range the organism has.
So resolution is not the derivative of category-count with respect to an external intensity variable. Rather, it is a property of the encoding architecture itself.
That’s why it is orthogonal to range. A system may:
• Have a narrow range but very fine discriminative structure within that range.
• Have a wide range but coarse internal discrimination.
• Increase both independently.
Your car analogy is actually helpful. A car can move very fast over a short distance — speed is not determined by total range. Likewise, high resolution does not require a wide affective range, and vice versa.
So resolution is better understood as internal discriminative power, not as bin density over a pre-specified global welfare scale.
Thanks. In our framework, resolution is not simply (i) the number of distinct welfare intensities, nor (ii) a strict ratio relative to the total range. It refers to the functional granularity with which differences in intensity can be discriminated and behaviorally prioritized.
The key point we raise in the post is that resolution is orthogonal to range: a system can evolve high resolution while maintaining a modest range, expand its range while keeping coarse resolution, or increase both simultaneously.
Matthew — I appreciate the engagement, but I don’t think your critique engages with the core argument.
On “energy vs energy services”: of course what ultimately matters are services like industrial heat, refrigeration, sterilization, and computation. The post does not deny that. The point is that scaling a replacement for industrial animal agriculture requires large quantities of reliable energy services. Efficiency reduces intensity per unit, but it does not eliminate capacity constraints when the objective is a global industrial transition. If you believe efficiency alone removes energy as a binding factor, it would help to specify which services and what magnitude of gains you expect — and on what timeline.
On primary energy efficiency: that varies by pathway. Many plant-based alternatives appear already competitive or favorable on energy efficiency compared to conventional animal products. Other approaches — especially cultivated meat and some fermentation systems — are widely described in published assessments as energy-uncertain and potentially energy-intensive depending on process assumptions, scale, and energy source. In several analyses, energy demand and energy mix emerge as key drivers of environmental performance and cost. If you have a source showing that cell-cultured meat is already more primary-energy efficient than conventional meat under commercially realistic conditions, I’d welcome it.
More importantly, per-kg primary energy is only one variable in scaling. Cost, reliability, capital intensity, throughput, and integration into existing industrial systems all shape whether alternatives can replace tens of billions of animals per year. Even if energy intensity is favorable on paper, energy price, grid reliability, or institutional bottlenecks can still affect siting decisions and scaling speed. If you think energy availability is not meaningfully binding in practice, pointing to industrial evidence would make that case stronger.
You also say the post makes unsupported claims about advocate intentions and scaling drivers. I’m open to correction — but that requires specificity. Which claims are incorrect? What do you see as the dominant constraints on replacing factory farming at scale?
The practical claim here is not that all substitutes are more energy-intensive than animal products. It is that for at least some of the most ambitious and scalable replacement pathways—particularly cultivated meat and certain fermentation systems—energy availability, energy price, and reliability are likely to be decisive constraints as we move from pilot facilities to global production.
If energy becomes expensive, unreliable, or institutionally constrained, the path from promising prototype to mass adoption slows. And delays are paid for in animal suffering.
Hi Vasco — thanks for inviting me to comment on your post. I think we’ve already clarified this in an earlier exchange and found that we’re working from genuinely different aggregation frameworks, and your nematode vs. torture example makes that divergence especially explicit. Since we’d essentially reached an “agree to disagree” point already, I’ll leave it here rather than reopening a long back-and-forth.
Happy to revisit once we have a better empirical handle on ceilings / affective capacity.
Great post, Aaron — I completely agree with your framing of why the lab-to-farm leap feels overdue. Most published welfare research is tied up in universities and controlled settings, which are not only expensive and slow but often miss how animals actually experience their environments on commercial operations.
I love your emphasis on starting with engaged farmers — that is a low-friction entry point, especially because so much welfare-relevant data is already being collected in everyday farm management but never shared or analyzed. If even a handful of farmers were willing to share anonymized data, we could begin to extrapolate welfare trends across operations and turn those insights into sector-wide benchmarks that signal progress in a measurable way.
Crucially, this model can offer upside for participating companies and farms: a feedback loop with welfare experts that helps them refine practices, capture wins, and communicate them externally — for example, “Our data-driven changes reduced mortality (or time in intense pain) by 15% — here’s how.” That kind of concrete progress, grounded in real farm data, can add a powerful real-world layer to academic research and help translate existing knowledge into practice at scale.
Good catch, Jim — and thanks for flagging the terminology. This field is already complicated enough that we really don’t need parallel vocabularies for the same underlying idea. One of the reasons we post publicly is exactly to get this kind of “conceptual linting” from the community.
I think Birch et al.’s acuity is basically what we mean by resolution: sensitivity to small differences — the ability to discriminate fine gradations (what psychophysics would call “just-noticeable differences”). Where we’re being careful is in separating that from range, which refers to the maximum intensity an organism can plausibly access.
So in short: acuity ≈ resolution in our usage; it’s distinct from range.
Thanks, Jim — that’s a thoughtful attempt to restate our terms, and it touches on something important you asked in the other thread about bandwidth–acuity vs range–resolution.
Our range concept maps fairly closely onto the welfare range used in Rethink Priorities’ Moral Weight project — it refers to the upper bound of affective intensity an organism can plausibly access.
Where we’d adjust is resolution. As drafted, your summary makes it sound like resolution is just “precision within a bounded range,” but that framing risks suggesting that resolution is always subordinate to range in motivational function. In fact, from a pure information-encoding perspective, resolution is as versatile as range for enabling intensity-based prioritization, because in principle both could be increased indefinitely: range by extending the extreme ends of the scale, and resolution by subdividing any given range into arbitrarily fine gradations. We develop this point in the “The Function and Evolution of Affective Scales” section of the Do primitive sentient organisms feel extreme pain? paper.
So the distinction isn’t “range = strength, resolution = detail”; it’s that range and resolution are two orthogonal axes along which affective systems can vary, each capable of supporting graded prioritization. A system with high resolution but modest range could still distinguish and act on nuanced motivational differences without accessing extreme affective intensities at all.
Thanks, Jim.
On the cost point you raised — “extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right?” — selection indeed acts on net fitness. Still, it’s both useful and standard to keep costs and benefits analytically separate before recombining them. A trait can be costly in terms of resources or architecture even when it increases fitness overall; brains and immune systems are classic examples.
On your footnote #3 — “the question of whether organisms with narrower welfare ranges could feel extreme pain” — I think there may be a bit of a contradiction in terms. If an organism has a genuinely narrower welfare range, then by definition (or at least under the operational definitions I’m using), it does not reach disabling or excruciating levels of negative affect. In that framing, the relevant question is precisely where the negative-intensity ceiling lies.
Thanks, Jim — that does get to the crux.
I think your scenario is plausible in principle: once an alarm is “loud enough,” further increases in intensity could be selectively neutral, so unnecessarily loud alarms might persist by drift, much like neutral variants in molecular evolution.
My hesitation is about how often extreme felt intensity actually falls into that neutral regime. For neutrality to hold, extra intensity must add no benefit and impose no additional costs or constraints. If affective states are whole-organism control states rather than simple sensory readouts, then escalating intensity plausibly requires extra integration, valuation, or modulatory capacity. In that case, intensity beyond “loud enough” would not be strictly neutral, and drift would be limited.
So I see neutral drift as a live alternative, but not the default. The framework is meant to clarify when neutrality is plausible versus when selection should instead cap, reshape, or avoid extreme intensity altogether.
In our upcoming post, we introduce human-anchored reference categories (Annoying(h), Hurtful(h), Disabling(h), Excruciating(h)) to provide a pragmatic shared coordinate system for cross-species discussion. So if one wants to talk about “acuity/resolution between A and B,” it’s reasonable to treat A and B as positions (or intervals) on that human-anchored scale.
But no — we’re not defining acuity as #levels/(B−A), because that requires meaningful distances between A and B. At this stage the (h) scale is best treated as ordinal: it supports “higher/lower ceiling” comparisons, not subtraction or ratios.
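To illustrate the ordinal reading (a hypothetical sketch; the category names are from the post, everything else is mine), here is what the (h) scale does and does not license:

```python
# Hypothetical sketch: the human-anchored categories as an ordinal scale.
# Order comparisons are meaningful; distances and ratios are not defined.

ORDER = ["Annoying(h)", "Hurtful(h)", "Disabling(h)", "Excruciating(h)"]

def higher_ceiling(a, b):
    """Return True if category a sits above category b on the ordinal scale."""
    return ORDER.index(a) > ORDER.index(b)

# Licensed: "Disabling(h) is a higher ceiling than Hurtful(h)."
assert higher_ceiling("Disabling(h)", "Hurtful(h)")

# Not licensed: anything of the form n_levels / (B - A), because the
# difference (B - A) presupposes a metric the ordinal scale does not supply.
```

So the scale supports ranking claims about ceilings while staying agnostic about how far apart the categories are.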