Notes
The following text explores, in a speculative manner, the evolutionary question: Did high-intensity affective states, specifically Pain, emerge early in evolutionary history, or did they develop gradually over time?
Note: We are not neuroscientists; our work draws on our evolutionary biology background and our efforts to develop welfare metrics that accurately reflect reality and effectively reduce suffering. We hope these ideas may interest researchers in neuroscience, comparative cognition, and animal welfare science.
This discussion is part of a broader manuscript in progress, focusing on interspecific comparisons of affective capacities—a critical question for advancing animal welfare science and estimating the Welfare Footprint of animal-sourced products.
Key points
Ultimate question: Do primitive sentient organisms experience extreme Pain intensities, fine-grained discrimination of Pain intensity, or both?
Scientific framing: Pain functions as a biological signalling system that guides behavior by encoding motivational importance. The evolution of Pain signalling, specifically its intensity range and its resolution (the granularity with which differences in Pain intensity can be perceived), can be viewed as an optimization problem in which neural architectures must balance computational efficiency, survival-driven signal prioritization, and adaptive flexibility.
Mathematical clarification: Resolution is a fundamental requirement for encoding and processing information. Pain varies not only in overall intensity but also in granularity—how finely intensity levels can be distinguished.
Hypothetical evolutionary pathways: By analysing affective intensity (low, high) and resolution (low, high) as independent dimensions, we describe four illustrative evolutionary scenarios that provide a structured framework for examining whether primitive sentient organisms can experience Pain of high intensity, nuanced affective intensities, both, or neither (a toy parameterisation of these two dimensions is sketched below).
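As a purely illustrative aid (not an empirical model), the sketch below treats a Pain signal as having two independent parameters: a maximum intensity (range) and a number of discriminable levels (resolution). The specific numbers used (ranges of 2 vs 10 arbitrary units, 4 vs 64 levels) are assumptions chosen only to show that the two dimensions can vary independently, which is what generates the four scenarios.

```python
# Toy parameterisation of a Pain signal (illustrative only, not an empirical model).
# Two independent parameters: maximum intensity (range) and number of
# discriminable levels (resolution, i.e. granularity). All values are assumptions.
from itertools import product

def perceived_intensity(stimulus: float, max_intensity: float, levels: int) -> float:
    """Map a normalised stimulus (0..1) onto one of `levels` discriminable steps
    of a signal whose range tops out at `max_intensity` (arbitrary units)."""
    step = min(int(stimulus * levels), levels - 1)   # which discriminable step fires
    return (step / (levels - 1)) * max_intensity

stimulus = 0.62  # an arbitrary noxious stimulus, for illustration
for intensity, resolution in product(["low", "high"], repeat=2):
    max_i = 2.0 if intensity == "low" else 10.0      # assumed range endpoints
    levels = 4 if resolution == "low" else 64        # assumed step counts
    print(f"intensity={intensity:4} resolution={resolution:4} -> "
          f"perceived {perceived_intensity(stimulus, max_i, levels):.2f} of {max_i}")
```

Under this framing, the "extreme intensity" and "fine-grained discrimination" of the ultimate question correspond to high values on the first and second parameter, respectively.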
Do I need a technical background to work on AI Governance? I think no, not really. This is a quick take, so I don't justify many of my claims.
Context: I have been a technical ML engineer and (briefly) a researcher, and I'm now trying to work on AI governance (and spending a lot of time speaking to people who do work on AI governance).
Examples of things that are useful to understand to do AI governance:
1. Knowing about the train, test, deploy cycle at industrial AI companies.
2. Knowing the psyche of ML engineers at those orgs.
3. Knowing which media channels machine learning engineers & researchers use to stay on top of news, including Twitter and the ML companies' own channels.
You don't get any of those insights by doing an ML Coursera course. It might be fun or gratifying to do such a course for other reasons, but I don't think it will make you better at governance. It's better to have a few friends who are ML engineers and to get them to sketch out what it's like at a lab some day (or, more costly but more thorough, to take a role at a lab, technical or non-technical).
Where I do think you need to engage technically is in not being afraid to read below the surface of technical memes; but not much below the surface, I think.
Concrete example: watermarking.
It's enough for policymakers to be able to read a few watermarking papers and understand:
a) Watermarking is a way of tagging your model's outputs to prove they were produced by AI.
b) There are no tried & tested, reliable watermarking methods at the moment.
Where I see non-technical folk fall down (less so in this community) is when they throw out the term 'watermarking' but couldn't tell you what methods can be used or how reliable those methods are. That can be read about, and you don't need direct experience of having tried to watermark something (I certainly haven't).
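To make "reading just below the surface" concrete, here is a toy sketch of the flavour of method the watermarking literature describes: a "green-list" style watermark, where generation is nudged towards a pseudo-randomly chosen subset of tokens and detection simply measures how often the text lands in that subset. Everything here (the ten-word vocabulary, the hashing scheme, using a plain green fraction instead of a proper statistical test) is an illustrative assumption rather than a real implementation, and the many ways a scheme like this can be evaded or misfire are part of why point b) above holds.

```python
# Toy sketch of a "green list" text watermark detector. Purely illustrative:
# real methods operate on model logits over a large vocabulary, and their
# reliability in practice is exactly the open question flagged above.
import hashlib

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slow"]

def green_list(prev_token: str) -> set[str]:
    """Deterministically pick half the vocabulary as 'green' based on the previous token."""
    digest = hashlib.sha256(prev_token.encode()).digest()
    ranked = sorted(VOCAB, key=lambda w: hashlib.sha256(digest + w.encode()).digest())
    return set(ranked[: len(VOCAB) // 2])

def green_fraction(tokens: list[str]) -> float:
    """Detection statistic: fraction of tokens that fall in the green list of their predecessor."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

# Watermarked text would be generated by nudging the model towards green tokens,
# so its green fraction sits well above the ~0.5 expected by chance.
print(green_fraction("the cat sat on a mat".split()))
```

Someone who can follow a sketch like this can say something meaningful about what watermarking does and doesn't guarantee, without ever having tried to watermark anything themselves.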