(This post also has a Russian version, translated from the present original by K. Kirdan.)

Summary: The alleged inevitable convergence between efficiency and methods that involve less suffering is one of the main arguments I’ve heard in favor of assuming the expected value of the future of humanity is positive, and I think it is invalid. While increased efficiency has luckily converged with less biological suffering so far, this seems to be due to the physical limitations of humans and other animals rather than to their suffering per se. And while past and present suffering beings all have severe physical limitations making them “inefficient”, future forms of sentience will likely make this past trend completely irrelevant. Future forms of suffering might even be instrumentally very useful and therefore “efficient”, such that we could make the reverse argument. Note that the goal of this post is not to argue that technological progress is bad, but simply to call out one specific claim that, despite its popularity, is – I think – just wrong.

The original argument

While I’ve mostly encountered this argument in informal conversation, it has been fleshed out (I think pretty well) by Ben West (2017)[1] (emphasis mine):

[W]e should expect there to only be suffering in the future if that suffering enables people to be lazier [(i.e., if it is instrumentally “efficient”)]. The most efficient solutions to problems don’t seem like they involve suffering. [...] Therefore, as technology progresses, we will move more towards solutions which don’t involve suffering[.]

Like most people I’ve heard use this argument, he illustrates his point with the following two examples: 

  1. Factory farming exists because the easiest way to get food which tastes good and meets various social goals people have causes cruelty. Once we get more scientifically advanced though, it will presumably become even more efficient to produce foods without any conscious experience at all by the animals (i.e. clean meat); at that point, the lazy solution is the more ethical one.
    1. (This arguably is what happened with domestic work animals on farms: we now have cars and trucks which replaced horses and mules, making even the phrase “beat like a rented mule” seem appalling.)
  2. Slavery exists because there is currently no way to get labor from people without them having conscious experience. Again though, this is due to a lack of scientific knowledge: there is no obvious reason why conscious experience is required for plowing a field or harvesting cocoa, and therefore the more efficient solution is to simply have nonconscious robots do these tasks.
    1. (This arguably is what happened with human slavery in the US: industrialization meant that slavery wasn’t required to create wealth in a large chunk of the US, and therefore slavery was outlawed.)

Why this argument is invalid

While I tentatively think the “the most efficient solutions to problems don’t seem like they involve suffering” claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.

Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples of enslaved humans and exploited animals, suffering itself is not the limiting factor. The limiting factor is rather the physical limitations of those biological beings, relative to machines that could do a better job at their tasks.

I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.

The fact that suffering has been correlated with inefficiency so far seems to be a lucky coincidence that allowed for the end of some forms of slavery/exploitation of biological sentient beings.

Potential future forms of suffering (e.g., digital suffering)[2] do not seem to similarly correlate with inefficiency, such that there seems to be absolutely no reason to assume future methods will engender less suffering by default.

In fact, there are reasons to assume the exact opposite, unfortunately. We may expect digital sentience/suffering to be instrumentally useful for a wide range of activities and purposes (see Baumann 2022a, Baumann 2022b).

Ben West himself acknowledges the following in a comment under his post:

[T]he more things consciousness (and particularly suffering) are useful for, the less reasonable [my “the most efficient solutions to problems don’t seem like they involve suffering” point] is.

For the record, he even wrote the following in a comment under another post six years later: 

The thing I have most changed my mind about since writing the [2017] post of mine [...] is adjacent to the "disvalue through evolution" category: basically, I've become more worried that disvalue is instrumentally useful. E.g. maybe the most efficient paperclip maximizer is one that's really sad about the lack of paperclips.

While I find his particular example not very convincing (compared to examples in Baumann 2022a or other introductions to s-risks), he seems to agree that we might expect suffering to be somewhat “efficient” in the future.

I should also mention that in the comments under his 2017 post, a few people have made a case somewhat similar to the one I make in the present post (see Wei Dai’s comment in particular). 

The point I make here is therefore nothing very original, but I thought it deserved its own post, especially given that people didn’t stop making strong claims based on this flawed argument after 2017 when those comments were written. (Not that I expect my post to make the whole EA community realize this argument is invalid and that I'll never hear of it again, but it seems worth throwing this out there.) 

I also do not want readers to perceive this piece as a mere critique of West’s post, but rather as a

  • “debunking” of an argument longtermists make quite often, despite its apparent invalidity (assuming I didn’t miss any crucial consideration; please tell me if you think I did!), and/or as a
  • justification for the claim made in the title of the present post, or potentially for an even stronger one, like “Future technological progress negatively correlates with methods that involve less suffering”.

Again, the point of this post is not to argue that the value of the future of humanity is negative because of this, but simply that we need other arguments if we want to argue for the opposite. This one doesn’t pan out.

  1. ^

    In fact, West makes two distinct arguments: (A) We’ll move towards technological solutions that involve less suffering thanks to the most efficient methods involving less suffering, and (B) We’ll move towards technological solutions that involve less suffering thanks to technology lowering the amount of effort required to avoid suffering. In this post, I only argue that (A) is invalid. As for (B), I tentatively think it checks out (although it is pretty weak on its own), for what it’s worth.

  2. ^

    One could also imagine biological forms of suffering in beings that have been optimized to be more efficient, such that they’d be much more useful than the enslaved/exploited sentient beings we’ve known so far.

Comments

Thanks Jim! I think this points in a useful direction, but I'm not sure I would describe this argument as "debunked". Instead, I think I would say that the following claim from you is the crux:

Potential future forms of suffering (e.g., digital suffering)[2] do not seem to similarly correlate with inefficiency

As an example of why this claim is not obviously true: Quicksort is provably the most efficient way to sort a list, and I'm fairly confident it doesn't involve suffering. If you told me that you had an algorithm which suffered while sorting a list, I would feel fairly confident that this algorithm would be less efficient than quicksort (i.e., suffering is anti-correlated with efficiency).

Will this anti-correlation generalize to more complex algorithms? I don't really know. But I would be surprised if you were >90% confident that it would not.

Interesting, thanks Ben! I definitely agree that this is the crux. 

I'm sympathetic to the claim that "this algorithm would be less efficient than quicksort" and that this claim is generalizable.[1] However, if true, I think it only implies that suffering is -- by default -- inefficient as a motivation for an algorithm.

Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work, which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering is instrumentally useful not as a strong motivator for an efficient algorithm, but for other reasons). Interestingly, his "incidental suffering" examples are more similar to the factory farming and human slavery examples than to the Quicksort example.

  1. ^

    To be fair, it's been a while since I've read about stuff like suffering subroutines (see, e.g., Tomasik 2019) and their plausibility, and people might have raised considerations going against that claim.

Right after making my crux claim, I reference some of Tobias Baumann's (2022a, 2022b) work, which gives some examples of how significant amounts of suffering may be instrumentally useful/required in cases such as scientific experiments where sentience plays a key role (where the suffering is instrumentally useful not as a strong motivator for an efficient algorithm, but for other reasons).

I think it would be helpful if you provided some of those examples in the post.

Yeah, I find some of Baumann's examples plausible, but in order for the future to be net negative we don't just need some examples; we need the majority of computation to be suffering.[1]

I don't think Baumann is trying to argue for that in the linked pieces (or if they are, I don't find it terribly compelling); I would be interested in more research looking into this.

  1. ^

    Or maybe we need the vast majority to be suffering. See e.g. this comment from Paul Christiano about how altruists may have outsized impact in the future.

I do not mean to argue that the future will be net negative. (I even make this disclaimer twice in the post, aha.) :)

I simply argue that the “convergence between efficiency and methods that involve less suffering” argument in favor of assuming it'll be positive is unsupported.

There are many other arguments/considerations to take into account to assess the sign of the future.

Ah yeah sorry, what I said wasn't precise; I mean that it is not enough to show that there exists one instance of suffering being instrumentally useful; you have to show that this is true in general.

(Unless I misunderstood your post?)

 If I want to prove that technological progress generally correlates with methods that involve more suffering, yes! Agreed.

But while the post suggests that this is a possibility, its main point is that suffering itself is not inefficient, such that there is no reason to expect progress and methods that involve less suffering to correlate by default (a much weaker claim).

This makes me realize that the crux is perhaps this part below, more than the claim we discuss above.

While I tentatively think the “the most efficient solutions to problems don’t seem like they involve suffering” claim is true if we limit ourselves to the present and the past, I think it is false once we consider the long-term future, which makes the argument break apart.

Future solutions are more efficient insofar as they overcome past limitations. In the relevant examples of enslaved humans and exploited animals, suffering itself is not the limiting factor. The limiting factor is rather the physical limitations of those biological beings, relative to machines that could do a better job at their tasks.

I don't see any inevitable dependence between their suffering and these physical limitations. If human slaves and exploited animals were not sentient, this wouldn't change the fact that machines would do a better job.

Sorry for the confusion and thanks for pushing back! Helps me clarify what the claims made in this post imply and don't imply. :)

Interesting post, Jim!

In the relevant examples of enslaved humans and exploited animals, suffering itself is not the limiting factor.

I think suffering may actually be a limiting factor. There is a point beyond which worsening the conditions in factory farms would not increase productivity, because the increase in mortality and disability (and therefore suffering) would not be compensated by the decrease in costs. In general, if pain is sufficiently severe, animals will be physically injured, which limits how useful they will be.

Thanks Vasco! Perhaps a nitpick but suffering still doesn't seem to be the limiting factor per se, here. If farmed animals were philosophical zombies (i.e., were not sentient but still had the exact same needs), that wouldn't change the fact that one needs to keep them in conditions that are ok enough to be able to make a profit out of them. The limiting factor is their physical needs, not their suffering itself. Do you agree?

I think the distinction is important because it suggests that suffering itself appears as a limiting factor only insofar as it is strong evidence of physical needs that are not met. And while both strongly correlate in the present, I argue that we should expect this to change.

Thanks for clarifying!

The limiting factor is their physical needs, not their suffering itself. Do you agree?

Yes, I agree.

Nice post - I think I agree that Ben's argument isn't particularly sound. 

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation? If not, I imagine you could look at this through a different lens and consider one historical perspective which says something like "One large driver of humanity's moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves". This seems fairly plausible to me, and would suggest that you might expect technological progress to correlate with methods involving less suffering.

I wonder if this theory might highlight points of resource contention where one might expect there to be less concern for digital suffering. Examples of this off the top of my head seem like AI arms races, early stage space colonisation, and perhaps some form of partial civilisation collapse. 
 

Thanks!

Are you thinking about this primarily in terms of actions that autonomous advanced AI systems will take for the sake of optimisation?

Hum... not sure. I feel like my claims are very weak and true even in future worlds without autonomous advanced AIs.


"One large driver of humanity's moral circle expansion/moral improvement has been technological progress which has reduced resource competition and allowed groups to expand concern for others' suffering without undermining themselves".

Agreed, but this is more similar to argument (B) fleshed out in this footnote, which is not the one I'm assailing in this post.
