I used to think that things have independent positive value and that this value would aggregate and intercompare over time and space.
In other words, I used to believe in some sort of Cosmic Scoreboard where I could weigh, for example, someone’s lifetime happiness against their, or someone else’s, moment of suffering.
I now think that this clinging to things as independently valuable and aggregable contributes to theoretical problems like utility monsters, wireheading, the repugnant conclusion (and other intuitively grotesque outweighing scenarios), infinitarian paralysis, and disagreements in cause prioritization.
I now feel that my previous beliefs in independent positive value, intercomparable aggregates of experiences, and the Cosmic Scoreboard more generally were convenient fictions that helped me avoid ‘EA guilt’ over not preventing the suffering I could have—by believing that there could be more important things, or that the suffering could be outweighed instead of prevented.
I’d now say that no kind of outweighing helps the suffering, because the suffering is separate in spacetime; outweighing is a tool of thought we use to prioritize our decisions so as not to regret them later, not a physical process like mixing red and green liquids to see which color wins. We have limited attention, and each moment of suffering is worth preventing for its own sake.
We don’t minimize statistics of aggregate suffering on the Cosmic Scoreboard except as a tool of thought, while in actuality we arrange ourselves and the world so as to prevent as many moments of suffering as we can. Suffering is not a property of lives, populations, or worlds, but of phenomenally bound moments, and those moments are what (I think) we ultimately care about, are moved by, and want to long-term equalize and minimize, from behind the Veil of ignorance.
For more on the internal process that is leading me to let go of independent positive values, replacing them with their interdependent value (in terms of their causal relationships to preventing suffering), here is my comment on the recent post, You have more than one goal, and that's fine:
I don’t see a way to ultimately resolve conflicts between an (infinite) optimizing (i.e., maximizing or minimizing) goal and other goals if they’re conceptualized as independent of the optimizing goal. Even if we consider the independent goals as something to only suffice (i.e., take care of “well enough”) rather than optimize as much as possible, our optimizing goal, by its infinite nature, will still want to negotiate as many resources for itself as possible, and its reasons for earning its living within me are independently convincing (that’s why it’s an infinite goal of mine in the first place).
So an infinite goal of preventing suffering wants to understand why my conflicting other goals require a certain amount of resources (time, attention, energy, money) for them to be sufficed, and in practice this feels to me like an irreconcilable conflict unless they can negotiate by speaking a common language, i.e., one which the infinite goal can understand.
In the case of my other goals wanting resources from an {infinite, universal, all-encompassing, impartial, uncompromising} compassion, my so-called other goals start to be conceptualized through the language of self-compassion, which the larger, universal compassion understands as a practical limitation worth spending resources on – not for the other goals’ independent sake, but because they play a necessary and valuable role in the context of self-compassion aligned with omnicompassion. In practice, it also feels most sustainable and long-term wise to usually if not always err on the side of self-compassion, and to only gradually attempt moving resources from self-compassionate sub-goals and mini-games towards the infinite goal. Eventually, omnicompassion may expect less and less attachment to the other goals as independent values, acknowledging only their interdependent value in terms of serving the infinite goal, but it is patient and understands human limitations and growing pains and the counterproductive nature of pushing its infinite agenda too much too quickly.
If others have found ways to reconcile infinite optimizing goals with sufficing goals without a common language to mediate negotiations between them, I’d be very interested in hearing about them, although this approach already works for me. I’m working on being able to write more about it, because it has felt like an all-around unified “operating system” that replaces utilitarianism. :)
As a parallel comment, here is more (from a previous discussion) of why I am gravitating towards suffering as the only independent (dis)value and everything else as interdependently valuable in terms of preventing suffering:
––––
I experience all of the things quoted in Complexity of value,
but I don’t know how to ultimately prioritize between them unless they are commensurable. I make them commensurable by weighing their interdependent value in terms of the one thing we all(?) agree is an independent motivation: preventable suffering. (If preventable suffering is not worth preventing for its own sake, what is it worth preventing for, and is this other thing agreeable to someone undergoing the suffering as the reason for its motivating power?) This does not mean that I constantly think of them in these terms (that would be counterproductive), but in conflict resolution I do not assign them independent positive numerical values, which pluralism would imply one way or another.
Any pluralist theory invites the question of whether suffering can be outweighed by enough of any independently positive value. If you think about it for five minutes, aggregate happiness (or any other experience) does not exist. If our first priority is to prevent preventable suffering, that alone is an infinite game; it does not help to make a detour to boost/copy positive states unless doing so is causally connected to preventing suffering. (Aggregates of suffering do not exist either, but each moment of suffering is terminally worth preventing, and we have limited attention, so aggregates and chain-reactions of suffering are useful tools of thought for preventing as many moments as we can. So are many other things, without our needing to attach independent positive value to them—or else we would be tiling Mars with them whenever that outweighed helping suffering on Earth according to some formula.)
My experience so far with this kind of unification is that it avoids many (or even all) of the theoretical problems that are still considered canonical challenges for pluralist utilitarianisms that assign both independent negative value to suffering and independent positive value to other things. I do not claim that this would be simple or intuitive – that would be analogous to reading about some Buddhist system, realizing its theoretical unity, and teleporting past its lifelong experiential integration – but I do claim that a unified theory with grounding in a universally accepted terminal value might be worth exploring further, because we cannot presuppose that any kind of CEV would be intuitive or easy to align oneself with.
[...]
People also differ in their background assumptions about whether AGI makes the universally life-preventing button a relevant question, because for many, the idea of an AGI represents an omnipotent optimizer that will decide everything about the future. If so, we want to be careful about assigning independent positive value to all the things, because each one of those invites this AGI to consider {outweighing suffering} with {producing those things}, since pluralist theories do not require a causal connection between the things being weighed.