
Against Suffering-Minimization as Terminal Goal: A Case for Value-Depth Weighting

Summary

I argue that negative utilitarianism and neuron-counting approaches to moral weight contain a critical flaw: they optimize for a metric (absence of suffering) that, taken to its logical conclusion, is best satisfied by eliminating the conditions that make anything valuable at all. I propose an alternative framing in which suffering functions as a signal that value is at stake, and the capacity to suffer is inseparable from the capacity for the things we actually care about. This has significant implications for cause prioritization.

The Argument in Brief

  1. Negative utilitarianism, consistently applied, recommends eliminating all beings capable of suffering, or at minimum reducing consciousness until nothing registers as loss.
  2. This conclusion strikes most people as monstrous, which suggests our actual values aren't captured by suffering-minimization.
  3. What we actually seem to value involves complexity, continuity, self-reflection, narrative, and what I'll gesture at as "depth of experience."
  4. The capacity to suffer and the capacity for depth are not independent properties. They're the same underlying capacity. A being that cannot lose anything cannot genuinely have anything.
  5. Therefore, we should optimize not for minimizing suffering but for realizing value, which includes navigating away from suffering but never at the cost of flattening the value landscape itself.

Why This Matters for Cause Prioritization

The EA community has increasingly focused on interventions targeting large numbers of simple creatures. Shrimp welfare is a prominent example. The reasoning: these animals likely have some capacity for suffering, they exist in enormous numbers, and interventions may be cost-effective in terms of "suffering reduced per dollar."

This reasoning depends on an implicit assumption: that a nociceptive signal in a shrimp nervous system is the same kind of thing as suffering in a human being, differing perhaps in intensity but not in kind. Aggregate enough shrimp-pain and it outweighs human concerns.

I want to suggest this is a category error.

The Disanalogy Between Nociception and Suffering

Consider what happens when you stub your toe:

  • There's a nociceptive signal (neurons firing, communicating tissue damage)
  • There's a phenomenal experience (it hurts)
  • There's a self-reflective recognition ("I am in pain")
  • There's a narrative context ("this is interrupting my day," "I was careless")
  • There's temporal extension ("this will fade," "I've felt worse")

A shrimp, when exposed to noxious stimuli, plausibly has something like the first item. Whether it has the second is contested. It almost certainly lacks the third through fifth.

Why should we think these additional layers are morally relevant? Because they're what make suffering matter to the sufferer. Pain that exists in a narrative, that is recognized as pain, that threatens projects and relationships and self-continuity: this is a different phenomenon from a damage signal in a simple reflex arc.

The negative utilitarian might respond: "But the raw feel is what's bad, not the cognitive elaboration."

I'd counter: What evidence do you have that there is a "raw feel" independent of cognitive complexity? The more we learn about consciousness, the more it appears that phenomenal experience is bound up with integration, self-modeling, and complexity. There may be no "pure suffering" uncontaminated by the cognitive apparatus that knows it's suffering.

The Reframe: Suffering as Signal, Not Terminal Bad

Here's an alternative model:

Suffering is what it feels like from the inside when something valuable is at risk or being lost. Rather than an independent bad floating free in the universe, suffering is the shadow cast by value.

This explains our intuitions better than negative utilitarianism:

  • Why is the death of a person with rich relationships and future plans worse than the death of someone in a permanent vegetative state? Not because of pain differential, but because more value is at stake.
  • Why is torturing someone for years worse than a brief intense pain? Not just because of summed hedons, but because of what it does to a self extended through time.
  • Why do we think wireheading (direct stimulation of pleasure centers) isn't the solution to all problems? Because we don't actually value felt-pleasure as terminal. We value it as a signal that things are going well.

Implications

If this framing is right:

On cause prioritization: We should weight interventions not by neuron-count or presumed-suffering-intensity alone, but by the depth of value at stake. This likely redirects attention toward:

  • Existential risk (because extinction forecloses all future value, not primarily because it causes suffering)
  • Enabling conditions for continued growth in complexity and depth
  • Avoiding lock-in to suboptimal states

On animal welfare: Animal suffering still matters. But it matters in proportion to the depth of the system doing the suffering. A chimpanzee with social bonds, memory, and future-orientation has more at stake than a shrimp with a nociceptive reflex. This isn't speciesism. It's a recognition that morally relevant properties come in degrees.

On the long-term future: The highest-value outcomes aren't those with the least suffering but those with the greatest realized depth. This might include forms of being far beyond current human experience: minds that can hold more, integrate more, experience more. The project is generative, not preventative.

Objections and Responses

"This is just motivated reasoning to avoid caring about animals."

I'm not arguing animals don't matter. I'm arguing that the degree to which they matter scales with the complexity of their experience, not with neuron count or nociceptive capacity alone. I'm happy to accept significant obligations toward animals with rich inner lives. I'm skeptical that a shrimp welfare intervention outweighs existential risk reduction.

"You're privileging properties humans happen to have."

I'm privileging properties that seem to be what make experience deep, and yes, humans have more of these than shrimp. But the argument doesn't depend on human specialness. A future AI with vastly more integrative capacity than humans would matter more than humans on this view. The metric is something like depth, complexity, and capacity for value, not human-likeness.

"How do you quantify depth?"

I can't, precisely. But inability to quantify isn't a reason to default to a metric (neuron-count) that we can quantify but that doesn't track what we actually care about. Rigor in measuring the wrong thing produces worse decisions than appropriate uncertainty about the right thing.
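To make that point concrete, here is a deliberately crude sensitivity sketch (in Python; every number is an arbitrary placeholder, and the "depth" scores and scaling exponent are hypothetical, not estimates I'm defending). It shows that neuron counts pin down one precise shrimp-to-human exchange rate, while depth weighting leaves that rate swinging across many orders of magnitude, which is exactly where the real disagreement lies:

```python
# Toy sensitivity sketch, not an empirical model: all numbers are placeholders.
# Neuron-count weighting yields one precise exchange rate between shrimp and
# human moral weight; depth weighting yields a rate that varies over many
# orders of magnitude depending on two unmeasured choices (a shrimp "depth"
# score and how steeply weight scales with depth).

HUMAN_NEURONS = 8.6e10   # rough order-of-magnitude figure
SHRIMP_NEURONS = 1e5     # rough order-of-magnitude figure

print(f"neuron-count weighting: ~{HUMAN_NEURONS / SHRIMP_NEURONS:.0e} "
      "shrimp-units per human-unit")

# Depth weighting, with human depth normalized to 1.0:
for shrimp_depth in (0.1, 0.01, 0.001):      # hypothetical depth scores
    for exponent in (1, 2, 3):               # how steeply weight scales with depth
        ratio = 1.0 / (shrimp_depth ** exponent)
        print(f"shrimp depth {shrimp_depth}, exponent {exponent}: "
              f"~{ratio:.0e} shrimp-units per human-unit")
```

The particular outputs don't matter; the point is that the precisely measurable input (neuron counts) does almost none of the work once you grant that the weighting function is the real unknown.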

"This could justify ignoring huge amounts of suffering."

It could justify deprioritizing interventions that reduce simple nociceptive signals in favor of interventions that protect or enable depth. I think this is correct. I don't think it justifies cruelty. Wanton harm to simple creatures is still wrong, partly because of what it does to the agent, partly because we're uncertain about their experience. But it does suggest that the shrimp-welfare dollar might be better spent on AI safety and, importantly, on AI welfare research such as that being done by https://eleosai.org/ .

Conclusion

The EA community has been admirably willing to follow arguments to uncomfortable places. I'm suggesting that consistent application of our actual values leads somewhere other than where the neuron-counters have been going. Those values look less like hedonic utilitarianism and more like what I'd call "depth realization."

Suffering matters because value matters. Destroying the capacity for suffering by destroying the capacity for value isn't victory. It's the ultimate defeat.

The question before us is how to build toward the genuinely good.


Comments

Clara said she would appreciate "Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn't necessarily recommend extinction."

Simon Knutsson's paper "The World Destruction Argument" makes a case that I'd simplify as "the world destruction problem is a problem for consequentialism, not for NU specifically". One of its arguments (again simplifying) is that classical utilitarians would also recommend the extinction of all sentient life if they found out that it was resiliently net negative, or that it could be replaced by marginally happier sentient life by killing off every current sentient being painlessly.

For other thoughts on why negative utilitarianism doesn't recommend extinction, see this excerpt from a Center for Reducing Suffering article.

Hey, kudos to you for writing a longform about this. I have talked to some self-identified negative utilitarians, and I think this is a discussion worth having.

I think this post is mixing two different claims.

Critiquing “minimize suffering as the only terminal value → extinction is optimal” makes sense.

But that doesn’t automatically imply that some suffering-reduction interventions (like shrimp stunning) are not worth it.

You can reject suffering-minimization-as-everything and still think that large amounts of probable suffering in simple systems matter at the margin.

Also I appreciated the discussion of depth, but have nothing to say about it here.

I would appreciate:
 - Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn't necessarily recommend extinction.
 - The OP clarifying the post by making more explicit the claims.

Yes, you can reject NU while still thinking shrimp welfare matters at the margin. The question is how much it matters relative to alternatives. My argument is that standard EA reasoning on this often smuggles in assumptions about moral weight (neuron count, nociceptive capacity) that don't track what we actually care about.

If you accept the depth-weighting framework in sections 3-5, then even a pluralist who includes suffering-reduction as one value among many should weight interventions differently than the neuron-counters suggest. The shrimp intervention might still have positive value - I'm not arguing it's worthless - but the cost-effectiveness comparison to, say, x-risk work shifts significantly.

So the steel-manned version of my claim: "Given limited resources, the depth-weighting framework implies shrimp welfare is probably not among the highest-impact interventions, even granting uncertainty about shrimp experience." That's weaker than "shrimp don't matter" and doesn't depend on NU being false.

I would be very surprised if [neuron count + nociceptive capacity as moral weight] are standard EA assumptions. I haven't seen this in the people I know nor in the major funders, who seem to be more pluralistic to me.

My main critique of this post is that it makes several different claims and it's not very clear which arguments support which conclusions. I think your message would be clearer after a bit of rewriting, and then it would be easier to have an object-level discussion.

See also this companion piece: https://forum.effectivealtruism.org/posts/5Nv3xK9myFzN9aqfE/the-ceiling-is-nowhere-near

Nice comment! I answered one of your requests as a top-level comment.
