I finally finished my series of 5 essays on my life philosophy (Valuism)! In a nutshell, Valuism suggests that you work to figure out what you intrinsically value, and then use effective methods to create more of what you intrinsically value.
While Valuism seems simple and straightforward at first glance, few people appear to approach life this way. It ends up having a number of surprising implications and, I think, provides a perspective that can help shed light on many different domains.
Interestingly, Effective Altruism is implied by Valuism plus a specific set of strong intrinsic values that most Effective Altruists have (reducing suffering + truth).
Here is the sequence of essays, if you feel like checking them out:
Part 1: Doing what you value as a life philosophy – an introduction to Valuism
Part 2: What to do when your values conflict?
Part 3: Should Effective Altruists be Valuists instead of utilitarians?
Part 4: What would a robot value? An analogy for human values
Part 5: Valuism and X: how Valuism sheds light on other domains
A big shoutout goes to Amber Dawn Ace, who wrote these essays with me.
The way you define values in your comment:
"From the AI "engineering" perspective, values/valued states are "rewards" that the agent adds themselves in order to train (in RL style) their reasoning/planning network (i.e., generative model) to produce behaviours that are adaptive but also that they like and find interesting (aesthetics). This RL-style training happens during conscious reflection."
is just something different from what I'm talking about in my post when I use the phrase "intrinsic values."
From what I can tell, you seem to be arguing:
[paraphrasing] "In this one line of work, we define values this way", and then jumping from there to "therefore, you are misunderstanding values," when actually I think you're just using the phrase to mean something different than I'm using it to mean.