Antti's Posts

Philosophy student @ University of Turku, Finland
1 karma · Joined · Pursuing an undergraduate degree

Comments (5)

I don't really understand what you mean by protection, and there seems to be something fundamentally wrong in the way you both equate it with and distinguish it from consequences. You claim that consequentialism tracks this "protection," but it is clearly something different: if it weren't, your framework would lead to the same conclusions as consequentialism on all three of these problems, since you give protection the highest priority among your proposed principles.

Resolution ethics doesn't really have any more explanatory power than consequentialism for our moral intuitions in the case of moral luck. Any sensible version of consequentialism already recognizes that actual results matter more than merely potential ones, while also recognizing that the act itself is equally right or wrong in both cases. Moreover, these violations of the principle of protection still seem to depend entirely on luck, regardless of the terminology.

What bothers me the most is that you seem to completely dismiss the possibility that the so-called "repugnant conclusion" is actually acceptable. The same applies to the case of the drowning child. I don't find it self-evident that the conclusions you are so determined to resist are actually bad.

Hi! I find your idea interesting, but I have one concern: how would you prevent people from abusing this system? For example, people who are already vegan could subscribe just to get free food. You might think that ethical vegans would not commit this kind of misuse, but not all vegans are vegan for ethical reasons; some are vegan purely for health reasons. I can also see anti-vegan people subscribing just to make the food go to waste.

Thanks for posting! Your axiom feels very familiar to me, but I am not sure I have seen anyone seriously present it as the only required moral axiom. It would be surprising, though, if no one had done so before. In any case, a few problems immediately catch my attention.

Firstly, you don't seem to give any argument for why we should accept this core principle. I take it that you base it on intuition, which is something I dislike, but that is beside the point. After all, I don't think the consequences of such a worldview are very intuitive either. For example, parenting seems impossible if it is never permissible to restrict someone's will.

Another issue is that your theory doesn't seem to distinguish good will from bad. If all will is to be protected equally, then trying to stop someone from murdering another person is morally wrong. Also, the idea that reducing harm can break ties when comparing unavoidable tragedies implies that harm reduction matters in its own right, which goes directly against your claim of having only one moral principle.

Lastly, I find it really problematic that you treat a model like ChatGPT as a reliable source for analyzing moral frameworks. The question of the foundations of morality is one of the deepest and most important questions there is, and I refuse to believe a language model could settle questions of this nature.

I completely agree! The fact that people don't understand this is probably one of the main reasons so many reject utilitarianism as too demanding. They don't get that maximizing utility is an ideal, not a minimum requirement for being a good person. I have usually referred to ideals simply as directions, but I really like how you compare them to the North Star. The idea is the same, but your wording is a bit more poetic.

It seems to me that this distinction between ethical theory and its practical applications doesn't apply neatly to rule utilitarianism. That is because it already has practical considerations woven into it, since the idea is to act in a way that maximizes utility as a (practical) rule. Do you agree?