Negative utilitarianism has some traction among effective altruists. This is, in my view, a shame, given that the view is false. I shall spell out why I hold this view.
The most basic version of negative utilitarianism, which says that only the avoidance of pain is morally relevant, is trivially false: preventing a pinprick is less valuable than bringing about a googolplex of utils, yet this version of the view must rank the pinprick first. However, this view is not widely believed and thus not particularly worth discussing.
A more popular form of negative utilitarianism takes the form of Lexical Threshold views, according to which certain forms of suffering are so terrible that they cannot be outweighed by any amount of happiness. This view is defended by people like Simon Knutsson, Brian Tomasik, and others. My main objection to this view is that it falls prey to the sequencing objection. Suppose we believe that the badness of a horrific torture cannot be outweighed by any amount of happiness. Presumably we also believe that the badness of a mild headache can be outweighed by some amount of happiness. It follows that the badness of horrific torture can't be outweighed by any number of headaches (or similar harms; headaches are just the example I picked), for otherwise an amount of happiness sufficient to outweigh those headaches would also suffice to outweigh the torture.
This view runs into a problem. There are, at least in theory, some extreme headaches whose badness is as great as that of brutal torture. Suppose that these horrific headaches contain 100,000 units of pain and that benign headaches contain 100 units of pain. Presumably 5 headaches with 99,999 units of pain would be in total worse than 1 headache with 100,000 units of pain. Likewise, 25 headaches with 99,998 units of pain would presumably be worse than 5 headaches with 99,999 units of pain. We can keep decreasing the amount of pain while multiplying the number of people affected, until 1 headache with 100,000 units of pain is found to be less bad than some vast number of headaches with 100 units of pain. To block this, the Lexical Threshold Negative Utilitarian would have to say that there's some threshold of pain below which no amount of pain experienced can outweigh any amount of pain above the threshold, regardless of how many people experience the pain. This is deeply implausible. If the threshold is set at 10,000 units of pain, then 10^100^100 people experiencing 9,999 units of pain would be preferable to one person experiencing 10,001 units of pain.
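To make the structure of the chain explicit, here is a minimal sketch in Python. All the numbers are illustrative assumptions (the 5x multiplier per step mirrors the 5 and 25 above); the argument only needs that some multiplier makes each step a change for the worse.

```python
import math

# Toy model of the sequencing chain. The numbers are illustrative
# assumptions, not claims about actual pain magnitudes: each step
# lowers intensity by 1 unit and multiplies the sufferers by 5.
START_INTENSITY = 100_000  # the torture-level headache
END_INTENSITY = 100        # the benign headache
MULTIPLIER = 5             # people multiplied at each step

steps = START_INTENSITY - END_INTENSITY  # 99,900 worsening steps

# By hypothesis each step is a change for the worse, so by transitivity
# the final population of mild headaches is worse than the single
# torture-level headache.
digits = int(steps * math.log10(MULTIPLIER)) + 1
print(f"steps in the chain: {steps:,}")
print(f"people with {END_INTENSITY}-unit headaches at the end: "
      f"{MULTIPLIER}**{steps:,} (a number with about {digits:,} digits)")
```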
The negative utilitarian might object that there is no neat cutoff. However, this misunderstands the argument. If there is no neat cutoff point, then every step of the gradual decrease in pain, despite being applied to an increasing number of people, remains worse than the previous point, at which far fewer people experience marginally more pain, and the chain runs all the way from torture down to mild headaches.
The negative utilitarian might say that pain can't be neatly delineated into precise units. However, the precise units are only used to represent pain. It's very intuitive that very bad pain can be made gradually less bad until it's only a little bit bad: being scalded in boiling water can be made gradually less unpleasant by lowering the temperature of the water until it's reduced to merely a slight inconvenience. This process requires the negative utilitarian to declare that at some point along the continuum a threshold has been crossed, such that no amount of what lies below the threshold can ever outweigh what lies above it.
Simon Knutsson responds to this basic objection saying "Third, perhaps Ord overlooks versions of Lexical Threshold NU, according to which the value of happiness grows less and less as the amount of happiness increases. For example, the value of happiness could have a ceiling, say 1 million value “units,” such that there is some suffering that the happiness could never counterbalance, e.g., when the disvalue of the suffering is 2 million disvalue units." However, the way I've laid out the argument shows that even the most extreme forms of torture are only as bad as large numbers of headaches. If this is the case, then it seems strange and ad hoc to say that no amount of happiness above 1 million units can outweigh the badness of a headache. Additionally, a parallel argument can be run on the positive end. Surely a googol units of happiness for one person plus 999,999 units for another is better than 1,000,000 units for each of two people.
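To see concretely why the ceiling proposal clashes with that last intuition, here is a minimal sketch. The cap at 1,000,000 units follows Knutsson's example figure, but the hard min() cap itself is my assumption, since he doesn't specify an exact function.

```python
# Crude model of the value-ceiling proposal: happiness above the
# ceiling contributes no further value. The hard min() cap is an
# illustrative assumption standing in for Knutsson's unspecified
# diminishing-value function.
def value(happiness_units, ceiling=1_000_000):
    return min(happiness_units, ceiling)

googol = 10 ** 100
world_a = value(googol) + value(999_999)       # 1,999,999
world_b = value(1_000_000) + value(1_000_000)  # 2,000,000

# The ceiling view ranks world B above world A, against the intuition
# that a googol units for one person makes world A better.
print(world_a < world_b)  # True
```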
The main argument given for negative utilitarianism is the intuition that extreme suffering is very bad. When one considers what it's like to starve to death, it's hard to imagine how any amount of happiness could outweigh it. However, we shouldn't place much stock in this argument, for three reasons.
First, it's perfectly compatible with positive utilitarianism (positive only in the sense of being non-negative, not in the sense of saying that only happiness matters) to say that suffering is in general far more extreme than happiness. Given the way the world works right now, there is no way to experience as much happiness as one experiences suffering under horrific torture. However, this does not imply that extreme suffering can never be counterbalanced, merely that it's very difficult to counterbalance. Nothing other than light travels at the speed of light, but that does not mean light speed is lexically separate from slower speeds, such that no number of slower speeds could ever sum to more than light speed. Additionally, transhumanism opens the possibility of extreme amounts of happiness, as great as the suffering from brutal torture.
Second, it's very hard to have an intuitive grasp of very big things. The human brain can't multiply very well. Thus, when one has an experience of immense misery, one might conclude that its badness can't be counterbalanced by anything, when in reality one is just perceiving that it's very bad. Much as people confuse the astronomically improbable with the impossible, people may have inaccurate mental maps and perceive extremely bad things as bad in ways that can't be counterbalanced.
Third, it would be very surprising a priori for suffering to be categorically more relevant than well-being. One can paint a picture of enjoyable experiences being good and unenjoyable experiences being bad. It's hard to imagine why unenjoyable experiences would have a privileged status, being unweighable against positive experiences.
I'd be interested in hearing replies from negative utilitarians to these objections.
I'll first respond to the first article you linked. The problem I see with this solution is that it violates some combination of completeness and transitivity. Vinding says that, for a list of objects of increasing badness (e′-objects, e-objects, 2e-objects, 3e-objects), we can say that a 3e-object is categorically worse than any number of e′-objects, while some number of e′-objects can be worse than one e-object, some number of e-objects worse than one 2e-object, and so on. This runs into an issue.
If we say that 1,000 e′-objects are worse than one e-object, 1,000 e-objects are worse than one 2e-object, and 1,000 2e-objects are worse than one 3e-object, then, scaling each inequality and assuming badness adds across separate objects, we get the following chain (where ">" means "worse than"):
1 trillion e′-objects > 1 billion e-objects > 1 million 2e-objects > 1,000 3e-objects.
By transitivity, 1 trillion e′-objects are worse than 1,000 3e-objects, contradicting the claim that no number of e′-objects can be worse than even a single 3e-object.
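The same chain can be made explicit in a short sketch, using the 1,000-to-1 tradeoffs assumed above; the scaling step assumes that badness adds across separate objects:

```python
# Each level's assumed tradeoff: 1,000 objects at one level are worse
# than 1 object at the next level up. Scaling each inequality and
# chaining by transitivity:
RATIO = 1000

# 1,000 e'  > 1 e     -> x 10**9: 10**12 e'   > 10**9 e
# 1,000 e   > 1 (2e)  -> x 10**6: 10**9  e    > 10**6 (2e)
# 1,000 2e  > 1 (3e)  -> x 10**3: 10**6 (2e)  > 10**3 (3e)

e_prime_needed = RATIO ** 4  # 10**12 e'-objects
three_e_beaten = RATIO       # 1,000 3e-objects

print(f"{e_prime_needed:,} e'-objects are worse than "
      f"{three_e_beaten:,} 3e-objects")
# ...which contradicts the claim that no number of e'-objects can
# ever be worse than a single 3e-object.
```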
The fifth example runs into a similar problem to the one addressed in this post. We can just apply the calculation at the level of populations. Surely inflicting 10,000 units of pain on one person is less bad than inflicting 9,999 units of pain on 10^100^100 people.
The second article that you linked runs into a similar problem. It says that what matters is the experience rather than the temperature; thus, it claims that steadily lowering the temperature and asking the negative utilitarian at what point they'd pinpoint a firm threshold is misleading. However, we can run the argument in units of pain rather than temperature. While it's hard to precisely quantify units of pain, we have an intuitive grasp of very bad pain, and we can similarly grasp that pain being lessened slightly.
Next, Vinding argues consent views avoid this problem. Consent views run into several issues.
1. Contrary to Vinding's suggestion, there is in fact a firm point at which people no longer consent. For any person, if offered a googolplex utils per second of torture, there is a firm point at which they would stop consenting. The consent view would then have to say that misery slightly above this threshold categorically outweighs misery slightly below it.
2. Consent views seem to tie the badness of pain to weakness of will (or, more specifically, to people's willingness to endure pain, independent of how bad the pain is). For example, suppose we have a strict negative utilitarian who holds that no amount of pain is worth any amount of pleasure. This person would never consent. However, it seems wrong to say that a pinprick for this person is considerably worse than a pinprick for someone else who experiences the same amount of pain.
3. It seems we can at least imagine a type of pleasure whose cessation one would not consent to. A person experiencing unfathomable joy might be willing to endure future torture for one more moment of bliss. Thus, this view seems to imply caring about pleasure as well. Love songs often express the singer's willingness to endure anything for another moment with the object of the song, yet it seems strange to say that love is lexically better than all other goods.
Next, the repugnant conclusions of positive utilitarianism are presented, including creating hell to please the blissful. This is a bullet I'm very happy to bite. Much as it would be reasonable for a person to endure temporary misery for greater joy, at the population level it would be reasonable to inflict misery for greater joy. I perceive ethics as being about what an egoist would do if they experienced the sum total of human experience; from this perspective the conclusion does not seem particularly counterintuitive. Additionally, as I argued in the article, we suck at multiplying: hypotheticals involving vast numbers melt our intuitions.
Additionally, as I argue here, many of our intuitions against positive utilitarianism crumble upon reflection. I'll try to answer the creating-hell objection directly. Presumably a pinprick inflicted to please the blissful would be permissible, given enough blissful people. As the argument I've presented shows, enough pinpricks are together as bad as hell. Thus, enough pleasure for the blissful is worth hell.
I agree that we should be careful to consider the full implications of "all else equal"; however, I don't think that refutes any part of the argument I've presented. When people experience joy, even when they're not suffering at all, they regard more joy as desirable.
You argue for axiological monism; that seems fully consistent with utilitarianism. Much as there are positive and negative numbers, there are positive and negative experiences. It seems that we regard the positive experiences as good, just as we regard the negative experiences as bad.
It seems very strange for well-being to be morally neutral but suffering to be morally bad. If one were imagining a world before having had any experiences, it seems clear one would expect the enjoyable experiences to be good and the unenjoyable experiences to be bad. Evolutionarily, the reason we suffer is to deter actions, while the reason we feel pleasure is to encourage actions. Whatever mechanism makes suffering bad, a similar mechanism would seem to make well-being good.
Thanks for the comment, you've given me some interesting things to consider!