As per title. I often talk to people who hold views that, I think, should straightforwardly imply a larger focus on s-risk than they actually give it. In particular, people often seem to endorse something like a rough symmetry between the goodness of good stuff and the badness of bad stuff, sometimes referring to this short post that offers some arguments in that direction. I'm confused by this and wanted to quickly jot down my thoughts. I won't try to make them rigorous, and I make various guesses about the additional assumptions people usually hold; I might be wrong about those.
Views that IMO imply putting more weight on s-risk reduction:
- Complexity of values: Some people think that the most valuable things possible are probably fairly complex (e.g. a mix of meaning, friendship, happiness, love, child-rearing, beauty, etc.) rather than really simple (e.g. rats on heroin, which is what people usually imagine when hearing "hedonic shockwave"). People also often have different views on what's good. I think people who believe in the complexity of values often nonetheless think suffering is fairly simple: extreme pain, for example, seems simple and also just extremely bad. (Some people think that the worst suffering is also complex; they are excluded from this argument.) On a first pass, it seems very plausible that complex values are much less energy-efficient than suffering. (In fact, people commonly define complexity by computational complexity, which translates directly into energy-efficiency.) To the extent that this is true, it should increase our concern about the worst futures relative to the best futures, because the worst futures could be much worse than the best futures are good.
(The same point is made in more detail here.)
- Moral uncertainty: I think it's fairly rare for people to think the best happiness is much better than the worst suffering is bad. People often have a mode at "they are the same in magnitude" and then uncertainty skewed towards "the worst suffering is worse". If so, you should be marginally more worried about the worst futures relative to the best futures. The case for this is more robust if you incorporate other people's views into your uncertainty: I think it's extremely rare for anyone's distribution to be skewed towards thinking the best happiness is better in expectation.[1]
(Weakly related point here.)
- Caring about preference satisfaction: I feel much less strongly about this one because thinking about the preferences of future people is strange and confusing. However, if you care strongly about preferences, I think a reasonable starting point is anti-frustrationism, i.e. counting the unsatisfied preferences of future people as bad while not counting their satisfied preferences as good. Otherwise you might end up thinking, for example, that it's ideal to create lots of people who crave green cubes and give them lots of green cubes. I at least find that outcome a bit bizarre. The intuition also seems asymmetric: creating people who crave green cubes and not giving them green cubes does seem bad. Again, if this is so, you should marginally weigh futures with lots of dissatisfied people more heavily than futures with lots of satisfied people.
To be clear, there are many alternative views, possible ways around this, etc. Taking the preferences of non-existent people into account is extremely confusing! But I think this might be an underappreciated problem that people who mostly care about preferences need to find some way around if they don't want to weigh futures with dissatisfied people more heavily.
I think point 1 is the most important, because many people have intuitions around the complexity of value. None of these points implies that you should focus on s-risk, but they are arguments for weighing s-risk more heavily. I wanted to put them out there because people often bring up "symmetry of value and disvalue" as a reason not to focus on s-risk.
- ^
There's also moral uncertainty 2.0: people tend to disagree more about what's most valuable than about what's bad. For example, some people think only happiness matters, while others think justice, diversity, etc. also matter; but roughly everybody thinks suffering is bad. You might then think a reasonable way to aggregate is to focus more on reducing suffering, which everyone agrees on, at least whenever the most efficient way of increasing happiness trades off against justice or diversity.
Related to your point 1:
I think one concrete complexity-increasing ingredient that many (but not all) people would want in a utopia is for their interactions with other minds to be authentic – that is, they want the right kind of "contact with reality."
So, something that would already seem significantly suboptimal (to some people at least) is a future of lots of private experience machines: everyone lives a varied and happy life, but everyone's life follows pretty much the same template, and the other characters in one's simulation aren't genuine, in the sense that they don't exist independently of one's interactions with them. (That is, the simulation is solipsistic: other characters may be computed to be the most exciting response to you, and their memories of "off-screen time" are fake.) While this scenario would already be a step up from "rats on heroin" or "brains in a vat with their pleasure hotspots wire-headed," it's still probably not the type of utopia many of us would find ideal. Instead, as social creatures who value meaning, we'd want worlds (whether simulated/virtual or not doesn't seem to matter) where the interactions we have with other minds are genuine: these other minds wouldn't just be characters programmed to react to us, but real minds with real memories and "real" (as far as this is a coherent concept) choices. Utopian world setups that allow for this sort of "contact with reality" presumably cannot be packed too tightly with sentient minds.
By contrast, things seem different for dystopias, which can be packed tightly. For dystopias, it matters less whether they are repetitive, whether they're lacking in options/freedom, or whether they have solipsistic aspects to them. (If anything, those features can make a particular dystopia more horrifying.)
To summarize, here's an excerpt from my post on alignment researchers arguably having a comparative advantage in reducing s-risks: