
Naively, to someone with a negative utilitarian perspective, saving lives is a net harm, because those individuals will have some suffering in the remainder of their lives. However, the death of children might cause more psychological pain for others than if they survived to old age. Has anyone looked into how such a "grief differential" compares to the typical amount of suffering in a human life? 

I ask as an increasingly committed negative utilitarian starting to take seriously the idea that maybe I should stop doing things that save kids' lives. 

2 Answers

Answering this or similar questions will be challenging for any worldview that takes into account second-order and long-run consequences of actions, not just negative utilitarianism.

Saving a child has many such effects that will be very difficult to account for: not just effects on loved ones but also effects on the ecosystem, climate change, demand for meat, the economy more generally, etc. So assessing the grief experienced by loved ones is probably only a small piece of the answer to your overall question. At the same time, it might be particularly salient or important because the bond is personal and irreplaceable. If this life is not saved, we can do little to offset that harm.

For what it’s worth, a negative utilitarian theory might also include the frustration of preferences in the evaluation of an action. To the extent that the child wants to continue living, this would provide reasons to save them, even by negative utilitarian lights. Whether this is a decisive reason is another matter, of course.

If you do find negative utilitarianism or other suffering-focused views compelling, I think it makes more sense to ask: according to this view, what is the very best thing I could be doing with my time and money? Most people who have asked this question have come up with interventions that seem much more impactful than saving lives directly -- regardless of whether the latter would overall be a good thing. Here is one person's attempt to answer this very difficult question: https://reducing-suffering.org/

How I think of the impact of saving a life (by donating to the likes of AMF):

  • a life is saved, and the grief caused by that death is averted
  • the person whose life is saved lives the rest of their life
  • total fertility rates fall because of lower child mortality
  • in terms of the total number of lives lived, the saving-lives effect and the fertility-reduction effect probably roughly cancel each other out in places where current fertility is high (source: David Roodman on the GiveWell blog)

So saving the life helps us, one life at a time, to transition to a world where people have fewer children and are able to invest more in each of them (and averts plenty of bereavement grief along the way).
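To make the cancellation claim concrete, here is a minimal back-of-the-envelope sketch (not from the original answer). It assumes a single hypothetical "offset ratio" -- the reduction in expected future births per life saved -- and the function name and numbers are illustrative placeholders, not figures from Roodman's analysis.

```python
def net_lives_added(lives_saved: float, offset_ratio: float) -> float:
    """Net change in the total number of lives lived.

    offset_ratio = 1.0 means each life saved is fully offset by reduced
    fertility (roughly the high-fertility-setting case described above);
    offset_ratio = 0.0 means no offsetting fertility response.
    """
    births_averted = lives_saved * offset_ratio
    return lives_saved - births_averted


if __name__ == "__main__":
    # Illustrative offset ratios only; the actual ratio varies by country.
    for ratio in (1.0, 0.5, 0.0):
        print(f"offset {ratio:.1f}: net lives added per life saved = "
              f"{net_lives_added(1, ratio):.1f}")
```

Under a 1:1 offset the net change in lives lived is roughly zero, so the dominant effects are the grief averted and the shift toward smaller families; where the offset is weaker (as the commenter below notes for places like Chad or Niger), saving a life also adds to the total number of lives lived.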

I am glad you are seriously considering the implications of your philosophical beliefs -- this is laudable. I very much hope you don't conclude it's bad to save children's lives.

Thanks, Sanjay! David Roodman's findings had trickled through to me with a distortion, and it's very good to have that corrected. Saving lives somewhere like Chad or Niger (where apparently the offset is significantly less than 1:1) doesn't come into the career decision I'm making right now, so it looks like I'm safe. 

Though I think I'll want to make sure to do more reading on this before I donate to the GiveWell Maximum Impact Fund again.  Unless they've made it a policy not to support life-saving work in places where the fertility-mortality offset is weaker? 

Sanjay
I don't think they do. I seem to remember this topic being debated some time back, and GiveWell clarified that they don't see it this way; rather, they consider the immediate impact of saving a life an intrinsic good. (Although I would be more confident that this is a fair representation of GiveWell's views if I could find the place where they said this; I can't remember where it is, so apologies if I'm misremembering.)