As per title. I often talk to people who hold views that, I think, straightforwardly imply a larger focus on s-risk (risks of astronomical suffering) than they actually place on it. In particular, people often seem to endorse something like a rough symmetry between the goodness of good stuff and the badness of bad stuff, sometimes referring to this short post that offers some arguments in that direction. I'm confused by this and wanted to quickly jot down my thoughts. I won't try to make them rigorous, and I make various guesses about the additional assumptions people usually hold - I might be wrong about those.
Views that IMO imply putting more weight on s-risk reduction:
- Complexity of values: Some people think that the most valuable things possible are probably fairly complex (e.g. a mix of meaning, friendship, happiness, love, child-rearing, beauty, etc.) rather than really simple (e.g. rats on heroin, which is what people usually imagine when they hear "hedonic shockwave"). People also often disagree about what's good. I think people who believe in complexity of value often nonetheless think suffering is fairly simple: extreme pain seems simple and also just extremely bad. (Some people think that the worst suffering is also complex; they are excluded from this argument.) On first pass, it seems very plausible that complex value is much less energy-efficient than suffering. (In fact, people commonly define complexity by computational complexity, which translates fairly directly into energy requirements.) To the extent that this is true, it should increase our concern about the worst futures relative to the best futures, because the worst futures could be much worse than the best futures are good - see the toy model below.
(The same point is made in more detail here.)
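A toy version of this argument, where the symbols and the inequality are my own illustrative assumptions rather than anything from the post: let $E$ be a future civilization's energy budget, $v$ the value per unit of energy of the best (complex) stuff, and $s$ the disvalue per unit of energy of the worst (simple) suffering. If suffering is computationally simpler, then plausibly $s \gg v$, and

$$\text{best future} \approx vE, \qquad \text{worst future} \approx -sE, \qquad \frac{|{-sE}|}{|vE|} = \frac{s}{v} \gg 1,$$

i.e. on these assumptions the worst futures are worse than the best futures are good, by a factor of $s/v$.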
- Moral uncertainty: I think it's fairly rare for people to believe the best happiness is much better than the worst suffering is bad. People often place the mode of their credence at "they are the same in magnitude", with the remaining uncertainty leaning towards "the worst suffering is worse". If that is so, you should be marginally more worried about the worst futures than about the best futures (toy numbers below). The case for this is more robust if you incorporate other people's views into your uncertainty: I think it's extremely rare for someone's distribution to skew towards the best happiness being better in expectation.[1]
(Weakly related point here.)
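To see why such a distribution skews the expectation, here are some made-up numbers (mine, purely illustrative): suppose you put credence $0.5$ on the two being symmetric and $0.5$ on the worst suffering being $10\times$ as bad as the best happiness is good. Then, treating the expected ratio as the decision-relevant quantity,

$$\mathbb{E}\!\left[\frac{|\text{worst suffering}|}{|\text{best happiness}|}\right] = 0.5 \cdot 1 + 0.5 \cdot 10 = 5.5 > 1.$$

Any credence distribution with its mode at $1$ and a tail only on the "suffering is worse" side has an expectation above $1$.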
- Caring about preference satisfaction: I feel much less strongly about this one because thinking about the preferences of future people is strange and confusing. However, if you care strongly about preferences, a reasonable starting point is anti-frustrationism, i.e. caring about the unsatisfied preferences of future people but not about their satisfied preferences. That's because otherwise you might conclude, for example, that it's ideal to create lots of people who crave green cubes and give them lots of green cubes. I at least find that outcome a bit bizarre. The intuition also seems asymmetric: creating people who crave green cubes and not giving them green cubes does seem bad. Again, if this is so, you should marginally weigh futures with lots of dissatisfied people more than futures with lots of satisfied people (a minimal formalization follows this point).
To be clear, there are many alternative views, possible ways around this, etc. Taking into account the preferences of non-existent people is extremely confusing! But I think this might be an underappreciated problem that people who mostly care about preferences need to find some way around if they don't want to weigh futures with dissatisfied people more highly.
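A minimal formalization of the asymmetry (notation mine, not from the post): write $s_i$ and $f_i$ for the numbers of satisfied and frustrated preferences of person $i$. Then

$$W_{\text{symmetric}} = \sum_i (s_i - f_i), \qquad W_{\text{anti-frustrationist}} = -\sum_i f_i.$$

Creating a green-cube craver and handing them a cube scores $+1$ on the symmetric view but $0$ on anti-frustrationism; creating them and withholding the cube scores $-1$ on both. Only the symmetric view positively recommends creating the cravers.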
I think point 1 is the most important, because many people share intuitions around complexity of value. None of these points imply that you should focus on s-risk, but they are arguments for weighing s-risk more highly. I wanted to put them out there because people often bring up "symmetry of value and disvalue" as a reason not to focus on s-risk.
[1] There's also moral uncertainty 2.0: people tend to disagree more about what's most valuable than about what's bad. For example, some people think only happiness matters, while others think justice, diversity, etc. also matter - but roughly everybody thinks suffering is bad. You might conclude that a reasonable way to aggregate is to focus more on reducing suffering, which everyone agrees on, at least whenever the most efficient way to increase happiness trades off against justice or diversity.
Another argument for asymmetric preference views (including anti-frustrationism) and preference-affecting views over total symmetric preference views is that the total symmetric views are actually pretty intrapersonally alienating or illiberal - in principle, and possibly in practice in a future with more advanced tech, or once we can reprogram artificially conscious beings.
Do you care a lot about your family or other goals? Nope! I can make you care way more about having a green cube, with your new life centered on green cubes, abandoning your family and goals. You'll be way better off. Even if you disprefer the prospect now, I'll make sure you're way more grateful afterwards, with your new preferences. The gain will outweigh the loss.
Basically, if you can manipulate someone's mind to have additional preferences that you ensure are satisfied, then, as long as the extra satisfaction exceeds the frustration from involuntarily manipulating them, the total symmetric view says this is better for them than leaving them alone.
Asymmetric and preference-affecting views seem much less vulnerable to this, as long as we count as bad the frustration involved in manipulating or eliminating preferences, including preferences against certain kinds of manipulation and elimination. For example, killing someone in their sleep, and thereby eliminating all their preferences, should still typically count as bad for someone who would disprefer it, even if they never find out. The killing frustrates and eliminates their preferences essentially simultaneously, but we assume the frustration still counts as bad. And on these views, new satisfied preferences wouldn't make up for that frustration.
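In toy notation (mine, not from the original text): let $f_{\text{manip}}$ be the frustration caused by involuntarily rewriting someone's preferences and $s_{\text{new}}$ the satisfaction of the newly installed ones. Then

$$\Delta W_{\text{symmetric}} = s_{\text{new}} - f_{\text{manip}}, \qquad \Delta W_{\text{asymmetric}} = -f_{\text{manip}} \le 0.$$

The symmetric view endorses the manipulation whenever $s_{\text{new}} > f_{\text{manip}}$; the asymmetric view never does, because the new satisfactions don't count positively.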
This is the problem of replacement/replaceability, applied intrapersonally to preferences and desires.