
Consequentialism asserts that whether an action is morally right or wrong is determined by its outcomes. The common view is that other moral theories, such as deontology, virtue ethics, and contractualism, are distinct from consequentialism. However, a deeper analysis of the underlying motivations and principles of these theories suggests a different picture. If we trace the logical progression of any moral statement within any moral theory back to its origins, the pursuit of improving outcomes inevitably emerges. After all, what other purpose would a Kantian imperative, an Aristotelian virtue, or a Rawlsian contract serve, if not to ultimately improve the world in some way? Detaching a moral theory from its outcomes would render it arbitrary and devoid of purpose. Put another way, a "consequentialist moral theory" is a tautology.

The real question, then, is which strategy produces the best outcomes, higher-order consequences included. Should we evaluate each action individually or consistently apply certain heuristics? More precisely, to what extent should we delegate ethical autonomy to individual consciousness moments in order to optimise outcomes? Different moral theories give different answers: depending on the theory one subscribes to, optimal outcomes result from following certain rules (deontology), cultivating certain character traits in people (virtue ethics), adhering to social contracts (contractualism), maximising utility (utilitarianism), and so on. In light of these considerations, is every moral theory inherently consequentialist?

You might say that as a psychological reality, a moral theory is unlikely to be successful unless people believe its adoption tends to promote good consequences.

But there is nothing logical that requires a moral theory to ultimately dissolve into actions that promote good consequences. Kant's categorical imperative famously forbids lying, and forbids sparing murderers from capital punishment, regardless of whether the whole world burns as a result.

While Kant's ethics doesn't logically reduce to consequentialism, the categorical imperative seems to rest on assumptions about long-term outcomes. Kant's insistence on a universal prohibition against lying appears grounded in the belief that a strict norm of truth-telling creates a more stable and morally reliable society, even if it leads to worse outcomes in rare cases. So while consequences aren't the explicit justification, they seem to determine the principles we find reasonable and can will to become a universal law.

tobycrisford 🔸
If the claim is that every moral theory is equivalent to 'rule consequentialism', maybe you have more of a case. But 'act consequentialism' is very distinct, I think.

You might find pages 7-8 of this pdf helpful.

I really like the way Derek Parfit distinguishes between consequentialist and non-consequentialist theories in 'Reasons and Persons'.

All moral theories give people aims. A consequentialist theory gives everyone the same aims (e.g. maximize total happiness). A non-consequentialist theory gives different people different aims (e.g. look after your own family).

There is a real, important difference there. Not all moral theories are consequentialist.

Parfit's distinction between agent-neutral aims (e.g. maximise happiness) and agent-relative aims (e.g. care for your family) strikes me as more semantic than substantive. All moral reasoning depends on the agent's situation, and identity (like being a parent) can be viewed as part of that situation. Take Peter Singer's drowning child for example: the moral responsibility to act arises because you are standing there and you can save the child. That situational fact is decisive, much like being a parent is. In this sense, even utilitarianism relies on agent-specific facts, making it functionally agent-relative. I'm not sure any moral theory is truly agent-neutral in practice.

I disagree; I think the difference is substantive.

A utilitarian form of consequentialism might tell Alice to save the drowning child in front of her while it tells Bob to donate to the AMF, but despite acting differently, both Alice and Bob are pursuing the same ultimate agent-neutral aim: to maximize welfare. The agent-relative 'aims' of saving the child or making the donation are merely instrumental; they exist only as a means to an end, the end being the fundamental agent-neutral aim that Alice and Bob have in common.

This might sound like...
