
What a way to go


How badly would it suck to die because a person who could have saved your life (along with the lives of four others tied to the train tracks beside you) preferred “allowing” to “doing”? A second would-be rescuer was just about to save you when they realized that the side track—where just one person awaited as collateral damage—later loops back, turning the purportedly-collateral damage into an instrumental killing. (“Lemme get this straight,” the escapee scratches his head. “You were OK with killing me when it seemed pointless, but now that my death would actually save the five, you’ve suddenly had second thoughts!? Huh. I’m happy to escape this one, but for future reference: if you ever again plan to kill me, at least try to make it so my death serves a valuable purpose, OK?”)

A third agent planned to redirect a bomb onto the (one-person) trolley, until they decided that their action was more accurately described as introducing a new threat into the situation rather than just deflecting an existing threat. After that, your chances ran out and the trolley flattened you. But the trolley was just the proximate cause of your death. The deeper, more annoying reason you died was that (i) those who could have saved you followed deontological guidance, and (ii) that guidance turned on prioritizing abstract metaphysical distinctions over real human lives and well-being.

At Heaven’s pearly gates, you ask St. Peter to send you back to haunt your turncoat would-be rescuers for a few days. You have a simple question for each of them: Why should life or death decisions—and your death, in particular—turn on a question so empty and trivial as mere metaphysical taxonomy? “I can’t believe this!” you harangue the frightened souls. “It’s not like the one you prioritized over all the rest of us was your child or anything. You even wanted to save us at first! But then you changed your mind, and left most of us to die, because of… what, exactly? Words? The precise causal relation between your oh-so-holy agency and the rest of us? Even though your metaphysical update made no difference whatsoever to anything that those of us with lives on the line had any reason to care about? (We care about whether we live or die, not whether we do so as a result of a doing or an allowing, let alone anything yet more abstruse.)[1] What is wrong with you!?”

I think this is an important question. People object to utilitarianism that it doesn’t match well with intuitions about how to use moral language. But such superficial objections are easily dealt with via moves like deontic fictionalism or “two level” distinctions between theoretical criteria and practical decision procedures. The objection to deontology is far deeper: it decides life or death moral questions by reference to clearly irrelevant metaphysical properties that it makes no sense to care about. We can fudge deontic verdicts. But there’s no easy fix for lacking a comprehensible rationale for your moral verdicts. “I just feel compelled to bring about worse outcomes for no reason” is a special kind of crazy. (It sure would be annoying to die because of it.)

A frictionless slope

Here’s a thought experiment that (another professor tells me) convinces many undergrads that the utilitarian verdict in Transplant is more defensible than they initially realized:

Begin by considering two alternative possible worlds. In the first world, five hospital patients die for lack of vital organs, and a passerby goes on to live a happy life. In the second world, the passerby’s head falls off (by brute natural chance) just as he’s walking past the doctor’s surgery. The doctor then uses the man’s organs to save the five patients, who each go on to live happy lives. Further suppose that all else is equal—there are no other relevant differences between the two possible worlds. Which world should you prefer to see realized? Presumably the second.

Now suppose that God lets you choose which of the two worlds to actualize. (After making your decision, the divine encounter will be wiped from your memory.) You get two buttons. If you press the first button, world #1 is realized, and the passerby will live. If you press the second button, world #2 is realized, and the five patients will live instead. Which button should you press? Again, surely the second. All else equal, we should choose to make the world a better place rather than a worse one. (You’re not killing anyone: just realizing a world in which, among other things, a fortuitous freak accident will occur.)

Let’s elaborate on how it is that the passerby’s head happens to fall off (in the second world). It turns out an invisibly thin razor-sharp wire was blown into place by a freak wind which fixed its position at neck height where the man was walking past. (No-one else was hurt and the wire soon untangled itself and blew away harmlessly into the nearest dumpster.) This presumably will not alter the moral status of any of our above judgments.

Now suppose that, instead of two buttons, God gives you a length of razor-sharp wire with which you can make your decision. By putting it straight in the dumpster, you will realize world #1. By fixing it in the appropriate place, you will realize world #2. Again, your memories are subsequently wiped. What should you do? The situation seems morally equivalent to the previous one. There don’t seem any relevant grounds for changing your choice.[2] Thus the right thing to do, in this bizarrely contrived scenario, is to kill the passerby to save five.

Far from being any kind of “bullet” to bite, when the Transplant case is suitably elucidated, the life-saving verdict is arguably quite plain to common sense.

(Why, then, shouldn’t doctors go around killing people? Presumably because it wouldn’t have good expected consequences in real life! There are good utilitarian reasons to set up laws, norms, and institutions that prevent people from engaging in naive instrumentalist reasoning. Why anyone believes this to constitute an objection to utilitarian theory is one of the great mysteries of contemporary sociology of philosophy.)

  1. ^

    See also Avram Hiller on the patient-centered perspective in moral theorizing.

  2. ^

    Someone could brutely insist that the fact that you’re now killing the victim makes all the difference. But this detail of implementation seems too far removed from all that substantively matters, as was already the case in the previous scenario. Why should an agential fixing of the wire make such a difference compared to an agent’s choosing to realize the world in which a freak wind so affixes the wire? Whatever moral concern you have for the six people in the situation whose lives are on the line should be just the same across both scenarios. Everything is exactly the same as far as all six potential victims are concerned.


