No-one can reasonably want others to successfully follow it
My paper ‘Preference and Prevention: A New Paradox of Deontology’ has just been published in the inaugural issue of the open access journal Free & Equal.[1] As is often the case with ambitious papers, finding a good home took several years and tens of thousands of words of revisions and responses to referees, but I’m very happy with how it turned out in the end![2] I’m especially delighted that it’s open access—and I hope my paper helps contribute to a good start for Free & Equal.[3]
Overview
The paper undertakes three main tasks.
First, it introduces and analyses the distinction between “quiet” and “robust” deontology as rival answers to the strikingly neglected question, How should we feel about optimific rights violations? Robust deontology answers: in general,[4] we should all oppose rights-violating actions. For any given choice-point we consider, we should prefer that the agent at that choice-point chooses a permissible alternative rather than acting seriously wrongly. Quiet deontologists, by contrast, join utilitarians in hoping that the agent maximizes value, no matter what deontic constraints might say. (The constraints are “quiet” in that they speak exclusively to the agent; others have no reason to care about them.)
Second, it argues that there are strong reasons for deontologists to prefer the robust view. (I’ll highlight just a couple of these, below.) You could, though, disagree on this point without undermining the rest of the paper.
Third, it presents the “new paradox” that I take to refute the robust view.
The surprising upshot:
Either deontic normativity is “quiet”, or deontology is false. Preferring that others respect constraints is no longer on the table.
The Costs of Quiet Deontology
Section II.B details four reasons to prefer the robust view over quiet deontology. Here are two:
First, the robust view preempts any possible charges of egoism, self-indulgence, or “clean hands” fetishism, fitting well with an attractively “patient-centered” conception of moral concern. If, as many deontologists claim, the inviolable nature of human dignity calls for respect rather than promotion, for example, it would seem rather unprincipled to suddenly deny that this extends to third-party attitudes. (Why shouldn’t bystanders also respect one’s inviolable dignity, by opposing one’s being treated as a mere means to the greater good? Side-constraints should constrain attitudes as well as actions.) Consider how absurd it would seem for a committed deontological bystander to mentally cheer on constraint-violating acts of utilitarian sacrifice for the greater good. Such attitudes seem unfitting by the lights of deontological principles.
…
Fourth, only the robust view respects the datum that moral perfection is not lamentable: fully-informed agents are not morally required to do things that an ideal spectator (or God) would prefer that they not do. Consider: what could be the point of such a lamentable morality as quiet deontology posits? We’d be better off casting it into the flames and speaking no more of the accursed thing. At least, we should want all others to do so (and they should want the same of us): quiet deontology is deeply self-effacing in this way.
I don’t think many (any?) agent-relative deontologists have sufficiently grappled with this implication of their view. There is something deeply bizarre about their anti-consequentialist advocacy: why are you deliberately trying to make the world a worse place? (There’s no deontic constraint forcing you to proselytize about deontology, after all.) Just as an egoist does not advance his own interests by encouraging others to be more selfish, you do not advance your agent-relative goals by encouraging others to focus on theirs. It seems utterly irrational.[5] Whether others have agent-relative reasons or not, we all have most reason to prefer that they successfully act on their agent-neutral ones: the reasons that they share with the rest of us.[6]
As the paper elaborates:
The shift to a quiet view of constraints renders deontology surprisingly self-effacing. And this isn’t just the superficial point (sometimes erroneously presented as an “objection” to consequentialism) that some people may better achieve moral goals by believing and aiming at something other than the moral truth. Quiet deontology is lamentable in the deeper sense that we shouldn’t even want others to successfully follow it. We all have decisive moral reason to prefer that others instead comply with a moral code that is better supported by agent-neutral reasons.
Given how publicly hostile to consequentialism many deontologists are, this result is big news that should change their attitudes and behavior. Even if they are personally constrained against lying for the greater good, they should at least be happy to see sincere consequentialists winning out in the marketplace of ideas. Depending on the details of their view, it may even be wrong for them to interfere by discouraging consequentialist thought (and action) in others. “Government house” utilitarianism was criticized for wanting only an elite few to know the truth. Quiet deontologists shouldn’t even want elites to hear it, no matter how competent they may be. Robust deontology avoids this cost, and allows us to always hope that agents do the objectively right thing. Quiet deontology (unlike both consequentialism and robust deontology) bizarrely makes rightness itself lamentable.
“Ok,” you say. “Quiet deontology is clearly a non-starter. Simple fix: adopt the robust view!” There’s only one problem…
The New Paradox
Suppose five would-be murder victims could be rescued, but only by an agent wrongly killing another innocent person. As robust deontologists, we don’t want them to do it. OK so far. But here’s the problem: Suppose the agent goes ahead with the wrongful killing anyway. (Damn shame.) You didn’t want the one to be killed in this way. Still, they have been. Sunk cost. Now, how much do you care about whether the rescue attempt—actually saving the other five—is successful or not? A lot, right!?
If the one is going to stay dead either way (and trust me, they’re not getting back up), the two possible futures we’re now comparing differ only in terms of whether five additional, entirely gratuitous murders happen or are prevented. That’s all. There is no respect whatsoever in which the “failed prevention” completion of this scenario is in any way morally preferable to the “successful prevention” version. It’s simply five more murders vs zero more, from this point. So we should all very strongly prefer successful over failed prevention. To be more precise: I claim that our preference for successful over failed prevention should be even stronger than our generic preference against a single gratuitous murder (since failed prevention involves five such murders rather than one). Remarkably, robust deontology is incompatible with this verdict.
Here’s the proof—using ‘≻’ to indicate preferability to an ideal observer, prefaced by ‘◊’ to indicate a permissible (rather than required) preference,[7] and ‘≻≻’ to indicate vast preferability, strictly stronger than the preferability of avoiding one generic murder:
(1) Protagonist acts wrongly in One Killing to Prevent Five, due to violating an important deontic constraint, and ought instead to bring about the world of Five Killings. (For reductio)
(2) If an agent can bring about just W1 or W2, and it would be wrong for them to bring about W1 (but not W2) due to violating an important deontic constraint, then W2 ◊≻ W1. (Weak robust constraints)
(3) Five Killings ◊≻ One Killing to Prevent Five. (From 1, 2)
(4) One Killing to Prevent Five ≻≻ Failed Prevention. (Premise)
(5) Failed Prevention ≽ Six Killings. (Premise)
(6) Five Killings ◊≻≻ Six Killings. (3–5, transitivity)
(7) It is not the case that Five Killings ◊≻≻ Six Killings. (Definition of ‘≻≻’)
# Contradiction (6, 7).
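For readers who like to double-check derivations mechanically, here is a toy numeric sketch of the transitivity step. It is entirely my own illustration: the function name, value grid, and “murder unit” scale are assumptions, not the paper’s formalism. Each world is assigned a numeric ideal-observer value, measured so that one unit is the strength of a generic preference against a single gratuitous murder, and a brute-force search confirms that steps (3)–(5) can never all hold alongside the definitional bound from (7):

```python
import itertools

# One "unit" = the strength of a generic preference against one gratuitous
# murder. All numbers here are illustrative assumptions, not from the paper.
MURDER_UNIT = 1.0

def premises_jointly_satisfiable(v_five, v_one_kill, v_failed, v_six):
    """Check whether steps (3)-(5) and the definitional bound from (7)
    can all hold for these four candidate ideal-observer values."""
    robust = v_five >= v_one_kill               # (3) Five Killings over One Killing to Prevent Five
    vast = v_one_kill - v_failed > MURDER_UNIT  # (4) One Killing vastly over Failed Prevention
    weak = v_failed >= v_six                    # (5) Failed Prevention at least as good as Six Killings
    bound = v_five - v_six <= MURDER_UNIT       # (7) Five Killings NOT vastly over Six Killings
    return robust and vast and weak and bound

# Brute-force search over a grid of candidate values: no assignment
# satisfies all four conditions, mirroring the contradiction in (6)-(7).
grid = [x * 0.5 for x in range(-10, 11)]
assert not any(premises_jointly_satisfiable(a, b, c, d)
               for a, b, c, d in itertools.product(grid, repeat=4))
```

The search fails for exactly the reason the proof gives: (3) and (5) together force the Five-vs-Six gap to be at least as large as the One-Killing-vs-Failed-Prevention gap, which (4) makes exceed one unit, contradicting (7).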
Read the full paper for further explanation, anticipated objections,[8] etc. I conclude:
The upshot of this paper is a deep paradox for ethical theory, as the following four features turn out to be mutually inconsistent:
1. Deontic constraints,
2. Robust normative authority,
3. Normative guidance, and
4. Adequate respect and concern for those who can be rescued at no further cost.
This is an extremely surprising result. Setiya, for example, presents an attractive-looking view with the first three features, unaware that this commits him to violating the fourth. Even deontologists who are less drawn to an agent-neutral conception of constraints may be surprised to learn that (at the cost of permitting moral disrespect) they cannot even permit bystanders to prefer that Protagonist rightly refrain from committing a violation-minimizing violation. The robust authority of constraints is thus lost. Whether we end up endorsing consequentialism or quiet deontology for ourselves, we must all prefer that others consign deontology to the flames.
Which of the four features are you most inclined to give up, and how might you try to cushion the blow?[9]
- ^
From the editorial board that resigned from Wiley’s highly regarded commercial journal, Philosophy & Public Affairs.
- ^
My thanks to all the referees whose constructive feedback helped to improve the paper. (For anyone who read an earlier version of the paper: you may wish to check out the final version, as it has improved a lot!)
- ^
I can report that the journal was fantastic to work with, offering detailed, helpful comments, and proving much faster to go from ‘acceptance’ to ‘publication’—mere weeks!—than any other journal I can recall. Highly recommended, especially for any junior philosophers out there.
- ^
There may be exceptions if you have strong interests at stake, e.g. if your child was one of the five who would be saved by the agent wrongly killing an innocent person.
- ^
Of course, if deontologists ultimately endorse utilitarian goals but just think these goals may be better promoted in practice by encouraging belief in deontology, that would be more understandable. But I don’t think this strategic attitude explains the kind of anti-utilitarian hostility expressed by many academics. Some really hate the view, in a way which seems incompatible with secretly hoping that others successfully do as it recommends.
- ^
Robust deontology avoids this problem by positing that deontological reasons are also agent-neutral: even bystanders should want the agent to do the right thing, rather than merely “maximize value”, because value is not all that we morally ought to care about—either in our own actions or those of others. Much more principled!
- ^
You can read a simplified version (without the permissive weakenings) in this old blog post. But it’s especially striking that the argument still goes through if you so much as permit any neutral party to prefer the “deontological” outcome.
- ^
My favorite objection is in footnote 39 on p.190, courtesy of Eden Lin. (I was extra-delighted when I figured out how to address it!)
- ^
Regular readers will be unsurprised that I recommend abandoning deontic constraints (at the level of fundamental theory), while continuing to endorse associated rights and norms on purely instrumental grounds. As the paper’s second footnote flags: “Two-level consequentialists… may endorse deontic constraints as part of a value-promoting decision procedure for fallible agents, without thinking that these constraints really yield decisive objective normative reasons at a more fundamental level. For such consequentialists, constraints may constitute part of instrumental rationality (for non-ideal agents): part of how we can best hope to achieve goals that themselves make no essential reference to such constraints. For deontologists, by contrast, the moral significance of constraints is non-instrumental and essential to ethics, such that they would guide infallible angels as well as fallible humans.”
In other words: on utilitarian grounds, we should generally want people to be disposed to respect rights (and not easily override this disposition since their naive “calculations” are unreliable). But since this reason is merely instrumental, we should of course prefer the better outcome in any situation where it is stipulated that overriding this disposition would actually turn out for the best. (This theoretical verdict is compatible with not trusting people in real life to judge for themselves whether this condition is actually met!)
Thanks for sharing! This seems like a really interesting and strong argument, and I think this perspective on deontology has been under-appreciated.
But I think maybe you push the practical implications further than the arguments justify. For example, you say:
>Given how publicly hostile to consequentialism many deontologists are, this result is big news that should change their attitudes and behavior. Even if they are personally constrained against lying for the greater good, they should at least be happy to see sincere consequentialists winning out in the marketplace of ideas. Depending on the details of their view, it may even be wrong for them to interfere by discouraging consequentialist thought (and action) in others.
But I don't think this implication really follows from your argument (as I understand it), because your argument depends on heavily stylized examples where all the crucial factors are stipulated.
As you say in a footnote:
>on utilitarian grounds, we should generally want people to be disposed to respect rights (and not easily override this disposition since their naive “calculations” are unreliable). But since this reason is merely instrumental, we should of course prefer the better outcome in any situation where it is stipulated that overriding this disposition would actually turn out for the best.
But the quiet deontologist could believe:
1. They have strong reasons to avoid committing rights violations, while finding it preferable that others violate rights when that would lead to better outcomes overall.
2. It makes sense to publicly advocate against consequentialism, because the cases in which violating rights actually leads to better outcomes overall are quite rare and unlikely to be decision-relevant — something the utilitarians often admit!
So this helps make the quiet deontologist's public advocacy for their view more explicable and sensible. You might think this then puts the quiet deontologist in a bizarre position where the reason they advocate for a view and the reason they hold it sharply diverge. But I think they'd reply that advocating for the true view of what individual reasons each person has will actually make things go better overall, and that this is both consistent and a sufficient justification for advocating deontology.
Though it's possible I'm missing something here — curious what you think!
Yeah, I think that's broadly right. Most ethical theorists are engaged in "ideal theory", so that's the frame I'm working within here. And I find it notable that many deontologists seem to find utilitarianism repugnant, which doesn't seem warranted if you (should) actually want people to successfully perform the actions it identifies as "right".
But it's certainly true that quiet deontologists could—like "government house" consequentialists—predict that, due to widespread agential incompetence, their desired (consequentialist) goals would be better achieved by most people believing deontology instead. They could then coherently advocate their deontology in certain contexts, on "non-ideal theory" grounds.
Care would need to be taken to determine in which contexts one's goals are better achieved by urging people to aim at something completely different. It seems pretty unlikely to extend to public policy, for example, especially as regards the high-stakes issues discussed in the follow-up post. Insofar as most real-life deontologists don't seem especially careful about any of this, I think it's still true that my theoretical arguments should prompt them to rethink their moral advocacy. In particular, they should probably end up much happier with "two-level consequentialism" (the branch of consequentialism that really takes seriously human incompetence and related "non-ideal theory" considerations) than is typical for deontologists.

[Updated to fix the reference to the post discussing "high stakes" policy issues.]
Yeah that all seems plausible to me! I think your argument here should successfully deflate a lot of the motivation deontologists should have in advocating against consequentialism, especially if they concede (which many seem to) that consequentialists don't tend to act like naive consequentialists.
Of course, some philosophers may just like talking about which theory they think is true, regardless of whether their theory would imply that they should do that. :)