I think any functionalist definition of the intensity of either would have to be asymmetric, at least insofar as intense pleasures (e.g. drug highs or the euphoria associated with temporal lobe epilepsy) are associated with extreme contentedness rather than desperation for them to continue. Similarly intense pains, on the other hand, do create a strong urgency for them to stop. This particular asymmetry seems present in the definitions you linked, so I'm a little sceptical of the claim that "super-pleasure" would necessitate an urgency for it to continue.
I'm not sure whether these kinds of functional asymmetries give much evidence one way or the other - they seem like they could skew positive just as easily as negative. I agree that our understanding might very well be human-relative; the cognitive disruptiveness of pain could be explained by its wider activation of networks across the brain compared to pleasure, for instance. I think a pleasure that activated a similar breadth of networks would feel qualitatively different, and that experiencing such a pleasure might change our views here.
Are any of these arguments against symmetry fleshed out anywhere? I'd be interested to read anything that goes into them in more detail.
Excruciating pain makes us desperate to end it. The urgency seems inherent to its intensity, and its subjective urgency lifts to its moral urgency and importance when we weight individuals' subjective wellbeing.
I'm not sure I buy that the urgency of extreme pain is a necessary component of its intensity. It makes more sense to me that the intensity drives the urgency rather than the other way around, though I'm not certain. You could probably define the intensity of pain by the strength of one's preference to stop it, but that strikes me as a very good proxy rather than the thing itself.
Suffering is also cognitively disruptive in a way pleasure seems not to be. And pain seems to be more tied to motivation than pleasure is.
I suspect these are due to implementation details of the brain that aren't guaranteed to hold for the minds that matter on longtermist timescales (if we leave open the possibility of advanced neurotechnology).
Cheers for the response; I'm still a bit puzzled as to how this reasoning would lead to the ratio being as extreme as 1:a million/bajillion/quadrillion, which he mentions as something he puts non-negligible credence on. That confuses me, since even a small probability of the ratio being that extreme would surely dominate the expected value and make the future net-negative.
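To make that worry concrete (a rough expected-value sketch with purely illustrative numbers, not figures from the podcast): if the best possible experience has value B, a symmetric view puts the worst possible experience at -B, and the extreme view puts it at -10^15 B, then a credence of just q = 0.001 in the extreme view gives an expected disvalue for the worst experience of

\[
(1 - q)\,B + q \cdot 10^{15} B \;\approx\; 10^{12}\,B,
\]

i.e. about a trillion times the value of the best experience - which is the sense in which even a small probability seems to dominate, at least if moral uncertainty is handled by taking expectations over views.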
(X-posting from LW open thread)
I'm not sure if this is the right place to ask this, but does anyone know what point Paul's trying to make in the following part of this podcast? (Relevant section starts around 1:44:00)
Suppose you have a P probability of the best thing you can do and a one-minus-P probability of the worst thing you can do, what does P have to be so it's the difference between that and the barren universe. I think most of my probability is distributed between you would need somewhere between 50% and 99% chance of good things and then put some probability or some credence on views where that number is a quadrillion times larger or something in which case it’s definitely going to dominate. A quadrillion is probably too big a number, but very big numbers. Numbers easily large enough to swamp the actual probabilities involved
[ . . . ]
I think that those arguments are a little bit complicated, how do you get at these? I think to clarify the basic position, the reason that you end up concluding it’s worse is just like consult your intuition about how bad the worst thing that can happen to a person is vs the best thing or damn, the worst thing seems pretty bad and then the like first-pass responses, sort of have this debunking understanding, or we understand causally how it is that we ended up with this kind of preference with respect to really bad stuff versus really good stuff.
If you look at what happens over evolutionary history. What is the range of things that can happen to an organism and how should an organism be trading off like best possible versus worst possible outcomes. Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased versus to what extent is this then fundamentally reflected in our preferences about good and bad things. I think it’s just a really hard set of questions. I could easily imagine maybe shifting on them with much more deliberation.
It seems like an important topic, but I'm a bit confused by what he's saying here. Is the perspective he's discussing (and puts non-negligible probability on) one on which the worst possible suffering is a bajillion times worse than the best possible pleasure is good, and wouldn't that suggest every human's life is net-negative in expectation (even if your credence in this being the case is ~0.1%)? Or is this just discussing the energy-efficiency of 'hedonium' and 'dolorium', which could potentially be dealt with by some sort of limitation on compute?
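If I'm parsing the quoted passage correctly, the arithmetic behind "what does P have to be" seems to be something like the following (my reconstruction, not Paul's exact framing): with the best outcome worth B and the worst worth -W, the gamble beats the barren universe when

\[
P\,B - (1 - P)\,W \;\ge\; 0 \quad\Longleftrightarrow\quad P \;\ge\; \frac{W}{B + W},
\]

so W = B gives P ≥ 50%, W = 99B gives P ≥ 99%, and W = 10^15 B gives P ≥ 1 - 10^-15 - which would line up with the "somewhere between 50% and 99%" range plus some credence on views where the required number is vastly larger.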
Also, I'm not really sure whether this set of views is more the normative claim "a broken bone/waterboarding is a million times as morally pressing as making a happy person", or something along the more empirical lines of "most suffering (e.g. waterboarding) is extremely mild; humans can experience far far far far far^99 times worse, and pleasure doesn't scale to the same degree". Even a tiny chance of the second one being true is awful to contemplate.
Specifically:
Then you end up into well, to what extent is that a debunking explanation that explains why humans in terms of their capacity to experience joy and suffering are unbiased but the reality is still biased
I'm not really sure what's meant by "the reality" here, nor what's meant by "biased". Is the assertion that humans' intuitive preferences are driven by the range of things that could happen in the ancestral environment, and that this isn't likely to match the maximum possible pleasure-vs.-suffering ratio in the future? If so, how does this lead one to end up concluding it's worse (rather than better)? I'm not really sure how these arguments connect in a way that could lead one to conclude that the worst possible suffering is a quadrillion times as bad as the best bliss is good.
I'm interested in arguments surrounding the energy-efficiency (and maximum intensity, if that's not the same thing) of pain and pleasure. I'm looking for any considerations or links regarding:
(1) the suitability of "H=D" (equal efficiency, and possibly equal intensity) as a prior;
(2) whether, given this prior, we have good a posteriori reasons to expect a skew in either the positive or negative direction; and
(3) the conceivability of modifying human minds to experience "super-bliss" commensurate with the badness of the worst possible outcome, such that the possible intensities of human experience hinge on these considerations.
Picturing extreme torture - or even reading accounts of much less extreme suffering - pushes me towards suffering-focused ethics. But I don't hold a particularly strong normative intuition here, and I feel that the pull stems primarily from the differences in perceived intensities, which of course I have to be careful with.
Stuff I've read so far:
Are pain and pleasure equally energy-efficient?
Simon Knutsson's reply
Hedonic Asymmetries