

(This post also has a Russian version, translated from the present original by K. Kirdan.)

Summary: The alleged inevitable convergence between efficiency and methods that involve less suffering is one of the main arguments I’ve heard in favor of assuming the expected value of the future of humanity is positive, and I think it is invalid. While increased efficiency has luckily converged with less biological suffering so far, this seems to be due to the physical limitations of humans and other animals rather than to their suffering per se. And while past and present suffering beings all have severe physical limitations making them “inefficient”, future forms of sentience will likely make this past trend completely irrelevant. Future forms of suffering might even be instrumentally very useful and therefore “efficient”, such that we could make the reverse argument. Note that the goal of this post is not to argue that technological progress is bad, but simply to call out one specific claim that, despite its popularity, is – I think – just wrong.

The original argument

While I’ve mostly encountered this argument in informal conversation, it has been (I think pretty well) fleshed out by Ben West (2017)[1] (emphasis is mine):

[W]e should expect there to only be suffering in the future if that suffering enables people to be lazier [i.e., if it is instrumentally “efficient”]. The most efficient solutions to problems don’t seem like they involve suffering. [...] Therefore, as technology progresses, we will move more towards solutions which don’t involve suffering[.]

Like most people I’ve heard use this argument, he illustrates his point with the following two examples: 

  1. Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we get more scientifically advanced though, it will presumably become even more efficient to produce foods without any conscious experience at all by the animals (i.e. clean meat); at that point, the lazy solution is the more ethical one.
    1. (This arguably is what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase “beat like a rented mule” seem appalling.)
  2. Slavery exists because there is currently no way to get labor from people without them having conscious experience. Again though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.
    1. (This arguably is what happened with human slavery in the US: industrialization meant that slavery wasn’t required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)

Why this argument is invalid

While I tentatively think the “the most efficient solutions to problems don’t seem like they involve suffering” claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.

Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples of enslaved humans and exploited animals, suffering itself is not a limiting factor. The limiting factor is rather the physical limitations of these biological beings relative to machines that could do a better job at their tasks.

I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.

The fact that suffering has been correlated with inefficiency so far seems to be a lucky coincidence that allowed for the end of some forms of slavery/exploitation of biological sentient beings.

Potential future forms of suffering (e.g., digital suffering)[2] do not seem to similarly correlate with inefficiency, such that there seems to be absolutely no reason to assume future methods will engender less suffering by default.

In fact, there are reasons to assume the exact opposite, unfortunately. We may expect digital sentience/suffering to be instrumentally useful for a wide range of activities and purposes (see Baumann 2022a, Baumann 2022b).

Ben West himself acknowledges the following in a comment under his post:

[T]he more things consciousness (and particularly suffering) are useful for, the less reasonable [my “the most efficient solutions to problems don’t seem like they involve suffering” point] is.

For the record, he even wrote the following in a comment under another post six years later: 

The thing I have most changed my mind about since writing the [2017] post of mine [...] is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips.

While I find his particular example not very convincing (compared to examples in Baumann 2022a or other introductions to s-risks), he seems to agree that we might expect suffering to be somewhat “efficient” in the future.

I should also mention that in the comments under his 2017 post, a few people have made a case somewhat similar to the one I make in the present post (see Wei Dai’s comment in particular). 

The point I make here is therefore nothing very original, but I thought it deserved its own post, especially since people have kept making strong claims based on this flawed argument after those comments were written in 2017. (Not that I expect my post to make the whole EA community realize this argument is invalid and that I'll never hear of it again, but it seems worth throwing this out there.)

I also do not want readers to perceive this piece as a mere critique of West’s post but as a

  • “debunking” of an argument longtermists make quite often, despite its apparent invalidity (assuming I didn’t miss any crucial consideration; please tell me if you think I did!), and/or as a
  • justification for the claim made in the title of the present post, or potentially for an even stronger one, like “Future technological progress negatively correlates with methods that involve less suffering”.

Again, the point of this post is not to argue that the value of the future of humanity is negative because of this, but simply that we need other arguments if we want to argue for the opposite. This one doesn’t pan out.

  1. ^

    In fact, West makes two distinct arguments: (A) We’ll move towards technological solutions that involve less suffering thanks to the most efficient methods involving less suffering, and (B) We’ll move towards technological solutions that involve less suffering thanks to technology lowering the amount of effort required to avoid suffering. In this post, I only argue that (A) is invalid. As for (B), I tentatively think it checks out (although it is pretty weak on its own), for what it’s worth.

  2. ^

    One could also imagine biological forms of suffering in beings that have been optimized to be more efficient, such that they’d be much more useful than enslaved/exploited sentient beings we’ve known so far.

Comments



Thanks Jim! I think this points in a useful direction, but I'm not sure I would describe this argument as "debunked". Instead, I think I would say that the following claim from you is the crux:

Potential future forms of suffering (e.g., digital suffering)[2] do not seem to similarly correlate with inefficiency

As an example of why this claim is not obviously true: Quicksort is provably the most efficient way to sort a list, and I'm fairly confident it doesn't involve suffering. If you told me that you had an algorithm which suffered while sorting a list, I would feel fairly confident that this algorithm would be less efficient than quicksort (i.e. suffering is anti-correlated with efficiency).

Will this anti-correlation generalize to more complex algorithms? I don't really know. But I would be surprised if you were >90% confident that it would not.

Interesting, thanks Ben! I definitely agree that this is the crux. 

I'm sympathetic to the claim that "this algorithm would be less efficient than quicksort" and that this claim is generalizable.[1] However, if true, I think it only implies that suffering is, by default, inefficient as a motivation for an algorithm.

Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering is not due to it being a strong motivator for an efficient algorithm, but for other reasons). Interestingly, his "incidental suffering" examples are more similar to the factory farming and human slavery examples than to the Quicksort example.

  1. ^

    To be fair, it's been a while since I've read about stuff like suffering subroutines (see, e.g., Tomasik 2019) and its plausibility, and people might have raised considerations going against that claim.

Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering is not due to it being a strong motivator for an efficient algorithm, but for other reasons).

I think it would be helpful if you provided some of those examples in the post.

Yeah, I find some of Baumann's examples plausible, but in order for the future to be net negative we don't just need some examples, we need the majority of computation to be suffering.[1]

I don't think Baumann is trying to argue for that in the linked pieces (or if they are, I don't find it terribly compelling); I would be interested in more research looking into this.

  1. ^

    Or maybe the vast majority to be suffering. See e.g. this comment from Paul Christiano about how altruists may have outsized impact in the future.

I do not mean to argue that the future will be net negative. (I even make this disclaimer twice in the post, aha.) :)

I simply argue that the “convergence between efficiency and methods that involve less suffering” argument in favor of assuming it'll be positive is unsupported.

There are many other arguments/considerations to take into account to assess the sign of the future.

Ah yeah sorry, what I said wasn't precise; I mean that it is not enough to show that there exists one instance of suffering being instrumentally useful; you have to show that this is true in general.

(Unless I misunderstood your post?)

 If I want to prove that technological progress generally correlates with methods that involve more suffering, yes! Agreed.

But while the post suggests that this is a possibility, its main point is that suffering itself is not inefficient, such that there is no reason to expect progress and methods that involve less suffering to correlate by default (much weaker claim).

This makes me realize that the crux is perhaps the part quoted below, more than the claim we discuss above.



While I tentatively think the “the most efficient solutions to problems don’t seem like they involve suffering” claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.

Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples of enslaved humans and exploited animals, suffering itself is not a limiting factor. The limiting factor is rather the physical limitations of these biological beings relative to machines that could do a better job at their tasks.

I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.

Sorry for the confusion and thanks for pushing back! Helps me clarify what the claims made in this post imply and don't imply. :)

Interesting post, Jim!

In the relevant examples of enslaved humans and exploited animals, suffering itself is not a limiting factor.

I think suffering may actually be a limiting factor. There is a point beyond which worsening the conditions in factory farms would not increase productivity, because the increase in mortality and disability (and therefore suffering) would not be compensated by the decrease in costs. In general, if pain is sufficiently severe, animals will be physically injured, which limits how useful they will be.

Thanks Vasco! Perhaps a nitpick, but suffering still doesn't seem to be the limiting factor per se here. If farmed animals were philosophical zombies (i.e., were not sentient but still had the exact same needs), that wouldn't change the fact that one would need to keep them in conditions that are ok enough to be able to make a profit out of them. The limiting factor is their physical needs, not their suffering itself. Do you agree?

I think the distinction is important because it suggests that suffering itself appears as a limiting factor only insofar as it is strong evidence of physical needs that are not met. And while both strongly correlate in the present, I argue that we should expect this to change.

Thanks for clarifying!

The limiting factor is their physical needs, not their suffering itself. Do you agree?

Yes, I agree.

Nice post - I think I agree that Ben's argument isn't particularly sound. 

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation? If not, I imagine you could look at this with a different lens and consider one historical perspective which says something like "One large driver of humanity's moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves". This seems fairly plausible to me, and would suggest that you might expect technological progress to correlate with methods involving less suffering.

I wonder if this theory might highlight points of resource contention where one might expect there to be less concern for digital suffering. Examples of this off the top of my head seem like AI arms races, early stage space colonisation, and perhaps some form of partial civilisation collapse. 
 

Thanks!

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation?

Hum... not sure. I feel like my claims are very weak and true even in future worlds without autonomous advanced AIs.


"One large driver of humanity's moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves".

Agreed, but this is more similar to argument (B) fleshed out in this footnote, which is not the one I'm assailing in this post.
