TLDR: I think morality is subjective. My ideal society would maximize total happiness while minimizing happiness inequality for as many beings as possible. My morals could change, and I don’t always do what I feel is moral.
I don’t think there is an objective morality.
I can’t prove that slavery is wrong. I can’t prove child porn is wrong. I can’t prove anything is morally right or wrong.
I’m not 100% certain what the correct morality for me is either. At times, I struggle to determine what I believe.
But, overall, I’ve formed many opinions. Some are more strongly held than others.
And I encourage others to agree with my beliefs. Generally, the more values people share with me, the more inclined we’ll be to work together. We can help each other make the world better, as we see it.
If morality is subjective, why do I form moral opinions and try to act on them? I think I do that for the same reason I think I do anything else. To be happy.
My Moral Code
I think everyone matters equally. As much as I love myself, I can’t bring myself to believe I deserve more happiness than others.
I didn’t control my genes. I could’ve had a mental or physical disability. I could’ve inherited genes that made me more likely to have the “dark triad” traits of narcissism, Machiavellianism, and psychopathy. There may be genes that lead to pedophilia too.
I didn’t control the environment I was born into. I could’ve been born into slavery. I could’ve been born as an animal on a factory farm. I could’ve been born into a dystopian future.
I could’ve been anyone. I’m fortunate.
To me, the ideal society would maximize total happiness while minimizing happiness inequality for as many beings as possible.[1]
Morality Isn’t That Simple For Me
While everyone matters equally to me, some people make more of an impact than others. Imagine a hypothetical scenario where you go back in time to 1920 and have to kill either Mahatma Gandhi or five random people. My gut instinct is to save Gandhi. I’d bet he’s done more to maximize and equalize happiness than the average five people.[2]
I’d feel more confident about my answer if I could know what would’ve happened if Gandhi died in 1920. If other leaders would’ve stepped up to make the same impact as Gandhi, I’d probably choose to kill Gandhi.[3]
Equality
Equality (of happiness) matters to me. I’m not sure how much. I couldn’t tell you if I’d prefer all beings to have 1% more total happiness if that increased happiness inequality by 5%.
My uncertainty about how much to value equality rarely makes my decisions harder. So I'm not planning to pin down exactly how much equality matters to me anytime soon.[4]
Uncertainty
My values have changed many times in the past. They’ll probably change again.[5] If I had been born as a Christian white man in 1600s Europe, I probably would’ve been racist, sexist, and intolerant of other religions.[6] I opposed gay marriage until 2004.
If I could live a few hundred more years, I’d bet my beliefs change significantly. So I won’t advocate for anything that leads to significant value lock-in.
I don’t think that future people’s morals are necessarily better. As I said, I don’t think morality is objective. My point is that I’ve been happy with how moral views have evolved. I’m cautiously optimistic that won’t change.[7]
Why I Don’t Always Follow My Morality
I can’t scientifically explain my behavior.[8] I often feel like there are different parts of me fighting each other.[9] Sometimes I feel like a “moral part” of me loses control to another part of me. For example, a fearful part of me could push me to try to please someone. Other times, I look back and feel like one part of me has deluded my “moral part.” That’s how I’ve convinced myself it was productive to play One Night Ultimate Werewolf to help me develop my idea for a reality show.[10] I don’t think that’ll help anymore.[11]
I suspect the “part of me” that always wins out is the one that brings me the most immediate happiness.
The strongest part of me right now is writing this post. I don’t know if that’s a moral part of me, a part of me that wants to fulfill my potential as a writer, or a part of me that wants people to like me. It’s probably some combination of all of them and more.
But the strongest moral part of me right now reminds me that I didn’t have to be me. I could’ve been anyone. It hopes I remember that more.
(cross-posted from my blog: https://utilitymonster.substack.com/p/my-morality)
- ^
I count clones (and copies of sentient AIs) as beings. Humans are 99.9% the same anyway. I’d feel bad for someone who wasn’t valued because they have the same genes as someone else born earlier.
And Joscha Bach theorizes that beings could merge together in the future. (Search the word substrate to see when he alludes to merging.) In a vacuum, if the merging beings are an equal part of the merged being, and the merged being is as happy as the average of the beings combining, I’d support this.
- ^
I used Gandhi in this example because I thought he represented an uncontroversial “good” figure. Since publishing this post, I’ve learned that he isn’t as well regarded as I’d thought.
- ^
I’d decide based on the amount of happiness Gandhi and the average person in 1920 have.
- ^
To determine how much equality matters to me, I’d pretend I could quantify happiness. Next, I’d ask myself hypotheticals such as "Would I rather give 1 happiness point to person A who has -1 million happiness points or give 1 million happiness points to person B, who has 1 million happiness points?" I’d use these answers to help me determine a mathematical formula that expresses the tradeoffs I’d make in any situation.
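As a purely illustrative sketch of where that exercise might land (the functional form and the symbol λ are assumptions I’m making up here for illustration, not something I’ve committed to), one candidate formula trades average happiness against a penalty for how spread out happiness is:

\[
W \;=\; \bar{h} \;-\; \lambda \cdot \frac{1}{n}\sum_{i=1}^{n} \lvert h_i - \bar{h} \rvert,
\qquad \bar{h} \;=\; \frac{1}{n}\sum_{i=1}^{n} h_i
\]

Here h_i is the happiness of being i and λ ≥ 0 encodes how much equality matters: λ = 0 collapses to maximizing average happiness alone, while a large enough λ prefers giving the 1 point to person A over the 1 million points to person B. Each hypothetical I answer would rule in or out a range of λ.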
- ^
My morals have already changed since I published this post. Originally, I’d said I wanted to maximize total utility while minimizing utility inequality for as many beings as possible. I’ve now replaced the term utility (i.e., what anyone fundamentally wants) with happiness (i.e., positive emotional states, good feelings). At the time, I said I used utility instead of happiness because people have told me their desires don’t reduce to happiness. And if anyone ultimately wanted other things or feelings besides happiness, I wanted them to have that.
I no longer feel that way. If someone fundamentally wants freedom, justice, dignity, or whatever they claim to value, and none of those things make them happy, I don’t care if they get them.
- ^
This article’s similar claim inspired this thought. It seems reasonable. But I can’t find any surveys on racism, sexism, or religious intolerance from 1600s Europe.
- ^
Over the long term, that is. At some point, I'd bet I'll think my values align more with people in 2022 than with people in 2023.
- ^
I think there’s ultimately a scientific way to explain my behavior. But I don’t know enough science to do that. So instead I use mumbo jumbo.
- ^
The yearning octopus from this article describes these feelings well.
- ^
If someone who shares enough of my values wants to produce a reality show, I’d be excited to explain my idea to them. I think it has some promise, but it’s complicated and unpolished.
- ^
If I use the original cards.
I appreciate your philosophy being written in a manner that does not require decoding.
"I don’t think there is an objective morality. "
- If a person, such as myself, believes that the value we give to the pursuit of happiness and avoidance of pain is arbitrary (in the sense that we appear to be programmed to give worth to these emotionally attractive ideas for evolutionary survival purposes), then a foundation for objective morality is lost and any selfish or selfless behaviour is ultimately performed to indulge our comfortable delusions.
"I can’t scientifically explain my behavior.[5] I often feel like there are different parts of me fighting each other.[6] Sometimes I feel like a “moral part” of me loses control to another part of me. For example, a fearful part of me could push me to try to please someone."
- I think we're ultimately controlled by our emotions. While beliefs do alter emotions, other factors may overpower them. For this reason, I suppose our behaviour can only, at best, roughly approximate our belief about what our behaviour ought to be (utilitarian or otherwise).