This is a crosspost for The Case for Insect Consciousness by Bob Fischer, which was originally published on Asterisk in January 2025.
[Subtitle.] The evidence that insects feel pain is mounting, however we approach the issue.
For years, I was on the fence about the possibility of insects feeling pain — sometimes, I defended the hypothesis;[1] more often, I argued against it.[2]
Then, in 2021, I started working on the puzzle of how to compare pain intensity across species. If a human and a pig are suffering as much as each one can, are they suffering the same amount? Or is the human’s pain worse? When my colleagues and I looked at several species, investigating both the probability of pain and its relative intensity,[3] we found something unexpected: on both scores, insects aren’t that different from many other animals.
Around the same time, I started working with an entomologist with a background in neuroscience. She helped me appreciate the weaknesses of the arguments against insect pain. (For instance, people make a big deal of stories about praying mantises mating while being eaten; they ignore how often male mantises fight fiercely to avoid being devoured.) The more I studied the science of sentience, the less confident I became about any theory that would let us rule insect sentience out.
I’m a philosopher, and philosophers pride themselves on following arguments wherever they lead. But we all have our limits, and I worry, quite sincerely, that I’ve been too willing to give insects the benefit of the doubt. I’ve been troubled by what we do to farmed animals for my entire adult life, whereas it’s hard to feel much for flies. Still, I find the argument for insect pain persuasive enough to devote a lot of my time to insect welfare research. In brief, the apparent evidence for the capacity of insects to feel pain is uncomfortably strong.[4] We could dismiss it if we had a consensus-commanding theory of sentience that explained why the apparent evidence is irrelevant.
In terms of feedback/reaction: I work on AI alignment, game theory, and cooperative AI, so Moloch is basically my key concern. From that position, I highly approve of the talk overall, and of nearly all of its particular points --- except for one, about which I felt a bit so-so: the part about what company leaders can do to help the situation.
The key bit is 9:58-10:09 ("We need leaders who are willing to flip the Moloch's playbook. ..."), but I think this part then changes how people interpret 10:59-10:11 ("Perhaps companies can start competing over who ..."). I don't mean to say that I strongly disagree here --- rather, I mean that this part seems objectively speculative, in contrast with everything else in the talk (which seemed super solid).
More specifically, the talk's formulation suggested to me that the key thing is whether the leaders would be willing to not play the Moloch game. In contrast, it seems quite possible that this by itself wouldn't help at all, for example because they would just get fired if they tried. My personal guess is that "the key thing" is the affordance the leaders have for not playing the Moloch game --- the costs they incur for opting out. Or perhaps the combination of this and the willingness to not play the Moloch game. And this is also how I would frame the 10:59-10:11 part --- that we should try to make it such that the companies *can* compete on those other things that turn this into a race to the top. (As opposed to "the companies *should* compete on those other things".)
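To illustrate the distinction I have in mind, here's a toy payoff sketch (the scenarios and numbers are entirely made up by me, not taken from the talk): if unilaterally holding back is very costly for a leader, their willingness doesn't change the equilibrium; if the cost of holding back drops --- say, because companies can win customers, talent, or regulatory goodwill by being the restrained one --- then the same willingness suddenly matters.

```python
# Toy symmetric 2x2 game: each lab picks "race" or "hold back".
# payoffs[my_action][their_action] = my payoff (numbers are invented).

def best_response(payoffs, their_action):
    """Action maximizing my payoff, given the other lab's action."""
    return max(payoffs, key=lambda a: payoffs[a][their_action])

def symmetric_equilibria(payoffs):
    """Actions a such that (a, a) is a pure-strategy equilibrium."""
    return [a for a in payoffs if best_response(payoffs, a) == a]

# Scenario A: unilaterally holding back is punished hard (the lab falls
# behind, the leader gets replaced), so "race" is the only stable outcome
# regardless of anyone's stated intentions.
willing_but_costly = {
    "race":      {"race": 1, "hold back": 3},
    "hold back": {"race": -2, "hold back": 2},
}

# Scenario B: the cost of holding back is reduced (restraint is rewarded
# by customers, talent, or regulators), and the equilibrium flips.
restraint_rewarded = {
    "race":      {"race": 1, "hold back": 0},
    "hold back": {"race": 2, "hold back": 3},
}

print(symmetric_equilibria(willing_but_costly))   # ['race']
print(symmetric_equilibria(restraint_rewarded))   # ['hold back']
```

Obviously a cartoon, but it's the cleanest way I know to state the point: the lever is the payoff structure the leaders face, not (only) their intentions.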