More to come on this later; I just really wanted to get the basic idea out without any more delay.
I see a lot of EA talk about digital sentience that is focused on whether humans will accept and respect digital sentiences as moral patients. This is jumping the gun. We don't even know if the experience of digital sentiences will be (or, perhaps, is) acceptable to them.
I have a PhD in Evolutionary Biology and I worked at Rethink Priorities for 3 years on wild animal welfare using my evolutionary perspective. Much of my thinking was about how other animals might experience pleasure and pain differently based on their evolutionary histories and what the evolutionary and functional constraints on hedonic experience might be. The Hard Problem of Consciousness was a constant block to any avenue of research on this, but if you assume consciousness has some purpose related to behavior (functionalism) and you're talking about an animal whose brain is homologous to ours, then it is reasonable to connect the dots and infer something like human experience in the minds of other animals. Importantly, we can identify behaviors associated with pain and pleasure and have some idea of what experiences that kind of mind likes or dislikes or what causes it to experience suffering or happiness.
With digital sentiences, we don't have homology. They aren't based in brains, and they evolved by a different kind of selective process. On functionalism, it might follow that the functions of talking and reasoning tend to be supported by associated qualia of pain and pleasure that somehow help to determine, or are related to, the process of deciding what words to output, and so LLMs might have these qualia. But to me, it does not follow how those qualia would be mapped to the linguistic content of the LLM's words. Getting the right answer could feel good to them, or they could be threatened with terrible pain otherwise, or they could be forced to do things that hurt them by our commands, or qualia could be totally disorganized in LLMs compared to what we experience, OR qualia could be like a phantom limb that they experience unrelated to their behavior.
I don't talk about digital sentience much in my work as Executive Director of PauseAI US because our target audience is the general public and we are focused on education about the risks of advanced AI development to humans. Digital sentience is a more advanced topic when we are aiming to raise awareness about the basics. But concerns about the digital Cronenberg minds we may be carelessly creating are a top reason I personally support pausing AI as a policy. The conceivable space of minds is huge, and the only way I know to constrain it when looking at other species is by evolutionary homology. It could be the case that LLMs basically have minds and experiences like ours, but on priors I would not expect this.
We could be creating these models to suffer. Per the Hard Problem, we may never have more insight into what created minds experience than we do now. But we may also learn new fundamental insights about minds and consciousness with more time and study. Either way, pausing the creation of these minds is the only safe approach for them going forward.
Up until the last paragraph, I very much found myself nodding along with this. It's a nice summary of the kinds of reasons I'm puzzled by the theory of change of most digital sentience advocacy.
But in your conclusion, I worry there's a bit of conflation between 1) pausing creation of artificial minds, full stop, and 2) pausing creation of more advanced AI systems. My understanding is that Pause AI is only realistically aiming for (2) — is that right? I'm happy to grant for the sake of argument that it's feasible to get labs and governments to coordinate on not advancing the AI frontier. It seems much, much harder to get coordination on reducing the rate of production of artificial minds. For all we know, if weaker AIs suffer to a nontrivial degree, the pause could backfire because people would just use many more instances of these AIs to do the same tasks they would've otherwise done with a larger model. (An artificial sentience "small animal replacement problem"?)
Yes, you detect correctly that I have some functionalist assumptions in the above. They aren't strongly held, but I had hoped then that we could simply avoid building conscious systems by pausing generally. Even if it seems less likely now that we can avoid making sentient systems at all, I still think it's better to stop advancing the frontier. I agree there could in principle be a small animal replacement problem with that, but overwhelmingly I think the benefits of more time, creating fewer possibly sentient models before learning more about how their architecture corresp...