I finally finished my series of 5 essays on my life philosophy (Valuism)! In a nutshell, Valuism suggests that you work to figure out what you intrinsically value, and then try to use effective methods to create more of what you intrinsically value.
While Valuism seems simple and straightforward at first glance, few people approach life this way. It also has a number of surprising implications and, I think, provides a perspective that can shed light on many different domains.
Interestingly, Effective Altruism is implied by Valuism plus a specific set of strong intrinsic values that most Effective Altruists have (reducing suffering + truth).
Here is the sequence of essays, if you feel like checking them out:
Part 1: Doing what you value as a life philosophy – an introduction to Valuism
Part 2: What to do when your values conflict?
Part 3: Should Effective Altruists be Valuists instead of utilitarians?
Part 4: What would a robot value? An analogy for human values
Part 5: Valuism and X: how Valuism sheds light on other domains
A big shoutout goes to Amber Dawn Ace, who wrote these essays with me.
I feel like 'valuism' is redefining utilitarianism, and the contrasts you draw with utilitarianism don't seem very convincing. For instance, you define valuism as noticing what you intrinsically value and trying to take effective action to increase it. That seems identical to a utilitarian whose utility function is composed of what they intrinsically value.
I think you might be defining utilitarianism such that utilitarians are only allowed to care about one thing? That's sort of true, in that utilitarianism generally advocates converting everything into a common scale, but that common scale can measure multiple things. My utility function includes happiness, suffering, beauty, and curiosity as terms. This is totally fine, and a normal part of utilitarian discourse. Most utilitarians I've talked to are total preference utilitarians; I've never met a pure hedonistic utilitarian.
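To make the "multiple terms, one common scale" point concrete, here is a minimal sketch of such a multi-term utility function; the particular terms and weights are just illustrative, not anything from the original posts:

U = w_1 \cdot \mathrm{happiness} - w_2 \cdot \mathrm{suffering} + w_3 \cdot \mathrm{beauty} + w_4 \cdot \mathrm{curiosity}, \quad w_i > 0

The weights w_i just encode how the different terms trade off against one another on the common scale; nothing forces the function down to a single term.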
Likewise, I'm allowed to maintain my happiness and mental health as an instrumental goal for maximizing utility. This doesn't mean that utilitarianism is wrong; it just means we can't pretend to be utility-maximizing, soulless robots. I feel like there is a post about folks realizing this at least every few months, which makes sense! It's an important realization!
Also, utilitarianism doesn't need objective morality any more than any other moral philosophy does, so I didn't understand your objection there.
Preference utilitarianism and valuism don't have much in common.
Preference utilitarianism: maximize the interests/preferences of all beings impartially.
First, preferences and intrinsic values are not the same thing. For instance, you may prefer eating Cheetos to eating nachos, but that doesn't mean you intrinsically value eating Cheetos, or that eating Cheetos necessarily gets you more of what you intrinsically value than eating nachos would. Human choice is driven by many factors other than intrinsic values (though intrinsic values play a role).
Second, preference utilitarianism is not about your own preferences; it's about the preferences of all beings, considered impartially.