Summary: The alleged inevitable convergence between efficiency and methods that involve less suffering is one of the main arguments I’ve heard for assuming the expected value of the future of humanity is positive, and I think it is invalid. While increased efficiency has luckily converged with less biological suffering so far, this seems to be due to the physical limitations of humans and other animals rather than to their suffering per se. And while past and present suffering beings all have severe physical limitations that make them “inefficient”, future forms of sentience will likely make this past trend completely irrelevant. Future forms of suffering might even be instrumentally very useful and therefore “efficient”, such that we could make the reverse argument. Note that the goal of this post is not to argue that technological progress is bad, but simply to call out one specific claim that, despite its popularity, is (I think) just wrong.
The original argument
[W]e should expect there to only be suffering in the future if that suffering enables people to be lazier [(i.e., if it is instrumentally “efficient”)]. The most efficient solutions to problems don’t seem like they involve suffering. [...] Therefore, as technology progresses, we will move more towards solutions which don’t involve suffering[.]
Like most people I’ve heard use this argument, Ben West illustrates his point with the following two examples:
- Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we get more scientifically advanced though, it will presumably become even more efficient to produce foods without any conscious experience at all by the animals (i.e. clean meat); at that point, the lazy solution is the more ethical one.
- (This arguably is what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase “beat like a rented mule” seem appalling.)
- Slavery exists because there is currently no way to get labor from people without them having conscious experience. Again though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.
- (This arguably is what happened with human slavery in the US: industrialization meant that slavery wasn’t required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)
Why this argument is invalid
While I tentatively think the claim that “the most efficient solutions to problems don’t seem like they involve suffering” is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument fall apart.
Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples of enslaved humans and exploited animals, suffering itself is not the limiting factor. Rather, it is the physical limitations of those biological beings, relative to machines that could do a better job at their tasks.
I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.
The fact that suffering has been correlated with inefficiency so far seems to be a lucky coincidence that allowed for the end of some forms of slavery/exploitation of biological sentient beings.
Potential future forms of suffering (e.g., digital suffering) do not seem to correlate with inefficiency in the same way, so there seems to be no reason to assume future methods will engender less suffering by default.
In fact, there are reasons to assume the exact opposite, unfortunately. We may expect digital sentience/suffering to be instrumentally useful for a wide range of activities and purposes (see Baumann 2022a; Baumann 2022b).
Ben West himself acknowledges the following in a comment under his post:
[T]he more things consciousness (and particularly suffering) are useful for, the less reasonable [my “the most efficient solutions to problems don’t seem like they involve suffering” point] is.
For the record, he even wrote the following in a comment under another post six years later:
The thing I have most changed my mind about since writing the post of mine [...] is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips.
While I find his particular example not very convincing (compared to examples in Baumann 2022a or other introductions to s-risks), he seems to agree that we might expect suffering to be somewhat “efficient” in the future.
I should also mention that in the comments under his 2017 post, a few people have made a case somewhat similar to the one I make in the present post (see Wei Dai’s comment in particular).
The point I make here is therefore nothing very original, but I thought it deserved its own post, especially given that people didn’t stop making strong claims based on this flawed argument after those comments were written in 2017. (Not that I expect this post to make the whole EA community realize the argument is invalid such that I’ll never hear of it again, but it seems worth throwing this out there.)
I also do not want readers to perceive this piece as a mere critique of West’s post but as a
- “debunking” of an argument longtermists make quite often, despite its apparent invalidity (assuming I didn’t miss any crucial consideration; please tell me if you think I did!), and/or as a
- justification for the claim made in the title of the present post, or potentially for an even stronger one, like “Future technological progress negatively correlates with methods that involve less suffering”.
Again, the point of this post is not to argue that the value of the future of humanity is negative because of this, but simply that we need other arguments if we want to argue for the opposite. This one doesn’t pan out.
In fact, West makes two distinct arguments: (A) we’ll move towards technological solutions that involve less suffering because the most efficient methods involve less suffering, and (B) we’ll move towards such solutions because technology lowers the amount of effort required to avoid suffering. In this post, I only argue that (A) is invalid. As for (B), I tentatively think it holds up (although it is pretty weak on its own), for what it’s worth.
One could also imagine biological forms of suffering in beings that have been optimized to be more efficient, such that they’d be much more useful than the enslaved/exploited sentient beings we’ve known so far.