Cross-posted at zachgroff.com

In 1995, Yew-Kwang Ng wrote a groundbreaking paper, "Towards welfare biology: Evolutionary economics of animal consciousness and suffering," that explored the novel question of the wellbeing of wild animals as distinct from the conservation of species. As perceptive as it was innovative, the paper proposed a number of axioms about evolution and consciousness in order to study which animals are sentient, what their experiences are like, and what might be done about their suffering.

Among the many results in the paper was the Buddhist Premise, which stated that under reasonable conditions, suffering should exceed enjoyment for the average wild animal. The finding matches the intuitions of many people who have thought about the issue and concluded that nature is "red in tooth and claw," in Alfred, Lord Tennyson's phrase. As it turns out, though, this "evolutionary economics" argument is wrong. This week, Ng and I published a new paper showing that the original "Buddhist Premise" does not hold: under the original paper's own model, the balance of suffering and enjoyment can go either way.

The mistake in the original paper may appear technical, but it is suggestive of an aspect of wild animal suffering that prevailing intuitions in the space may miss. Our paper essentially points out a math mistake in the proof that total suffering exceeds total enjoyment in nature, a proof built on a set of assumptions about the evolutionary benefits of consciousness and affective states. Ng's original paper offered an intuitive argument in addition to the mathematical one, though. Most wild animals have far more offspring than can survive to maturity, so the experience of an average animal is to be born and then almost immediately suffer a horrible death. On this basis, Ng conjectured that the Buddhist Premise should hold before offering a proof of it from the axioms.

But the intuitive argument misses a potential evolutionary pressure the math picks up. Because the costs (e.g. resource usage) of producing suffering scale with the probability of experiencing suffering, when the probability of suffering increases, the optimal severity of suffering should decrease. In other words, if the probability of being born and then immediately dying is sufficiently high, then increasing the intensity of suffering is less advantageous for genetic reproduction.
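To see the mechanism more formally, here is a minimal sketch using the constrained-optimization setup discussed in the comments below. The notation is mine; like the original model, it assumes the hedonic functions are increasing and concave in their costs.

```latex
% Evolution chooses the costs C_E, C_S of producing enjoyment E and suffering S,
% with n failures per success and a resource budget M:
\[
  \max_{C_E,\,C_S}\; E(C_E) + S(C_S)
  \quad\text{subject to}\quad C_E + n\,C_S = M .
\]
% First-order conditions, assuming E and S are increasing and strictly concave:
\[
  E'(C_E^{*}) = \lambda, \qquad S'(C_S^{*}) = n\,\lambda
  \;\;\Longrightarrow\;\; S'(C_S^{*}) = n\,E'(C_E^{*}).
\]
% As n (failures per success) rises, the optimum demands a higher marginal
% return from each unit spent on suffering; with S concave, that means a
% smaller C_S^{*} and a lower severity S(C_S^{*}) per instance of suffering --
% the pressure the intuitive argument misses.
```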

Note well: suffering may very well dominate enjoyment in nature. We cannot arrive at a conclusion on that. Our point is that it does not necessarily dominate.
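As a purely illustrative check on that last claim (this is not from the paper), here is a small Python sketch of the same maximization with power-law hedonic functions E(c) = S(c) = c**p; the exponent p, the budget M = 10, and the failure ratio n = 5 are arbitrary assumptions. Depending on the curvature p, the total suffering of the n failures can either exceed or fall short of the success's enjoyment:

```python
import numpy as np

def optimum(p, n, M, grid_size=200_000):
    """Maximize E(C_E) + S(C_S) = C_E**p + C_S**p subject to C_E + n*C_S = M
    by brute-force grid search over C_S (illustrative functional forms only)."""
    c_s = np.linspace(1e-9, M / n - 1e-9, grid_size)   # feasible suffering costs
    c_e = M - n * c_s                                   # budget pins down C_E
    objective = c_e**p + c_s**p                         # fitness proxy E + S
    i = objective.argmax()
    enjoyment = c_e[i]**p             # enjoyment of the one "success"
    total_suffering = n * c_s[i]**p   # suffering summed over the n "failures"
    return enjoyment, total_suffering

for p in (0.3, 0.7):                  # two arbitrary curvatures
    E, nS = optimum(p=p, n=5, M=10)
    balance = "suffering exceeds enjoyment" if nS > E else "enjoyment exceeds suffering"
    print(f"p={p}: enjoyment={E:.2f}, total suffering={nS:.2f} -> {balance}")

# Approximate output:
#   p=0.3: enjoyment=1.77, total suffering=4.43 -> suffering exceeds enjoyment
#   p=0.7: enjoyment=4.93, total suffering=0.58 -> enjoyment exceeds suffering
```

With these particular functional forms the balance tips exactly at p = 1/2, but nothing in the general model pins that value down; that is the sense in which the result can go either way.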

For me, the upshot is a suspicion that the view that suffering predominates in nature may be anchored on an incorrect result. Few people explicitly give the technical argument from the 1995 paper in conversations about wild-animal wellbeing, so it might seem not to be that influential. If you look at writings on wild-animal wellbeing, though, you find that many academic and lay researchers cite Ng (1995), and often cite multiple sources that all cite Ng (1995), for the claim that suffering should dominate enjoyment in nature. Many more people may have been influenced by this result than realize it. Our new paper does not show that enjoyment predominates, but it does give reason to pause and reflect on work to date.

Comments

Congrats on fixing the error!

When I first discussed Ng (1995)'s mathematical proof with some friends in 2006, they said they didn't find it very convincing because it's too speculative and not very biologically realistic. Other people since then have said the same, and I agree. I've cited it on occasion, but I've never considered the mathematical result of that particular model to be more than an extremely weak argument for the predominance of suffering.

I think the intuition underlying the argument -- that most offspring die not long after birth -- is one of the reasons many people believe wild-animal suffering predominates. It certainly might be the case that this intuition is misguided, for instance for the reason you give: "when the probability of suffering increases, the severity of suffering should decrease." I have an article that also discusses theoretical reasons why shorter-lived animals and animals who are less likely to ever reproduce may not feel as much pain or fear as we would from the same kinds of injuries.

While I think these kinds of arguments are interesting, I give them relatively low epistemic weight because they're so theoretical. I think the best way to assess the net hedonic balance of wild animals is to watch videos and read about their lives, seeing what kinds of emotions they display, and then come up with our own subjective opinions about how much pain and pleasure they feel. This method is biased by anthropomorphism, but it's at least somewhat more anchored to reality than simple theoretical models. We could try to combat anthropomorphism a bit by learning more about how other animals make tradeoffs between combinations of bad and good things, and so on.

For me, it will always remain obvious that suffering dominates in nature because I believe extreme, unbearable suffering can't be outweighed by other organism-moments experiencing pleasure. In general, I think most of the disagreement about nature's net hedonic balance comes down to differences in moral values rather than disagreements about facts. But yes, it remains useful to improve our frameworks for thinking about this topic, as you're helping to do. :)

Thanks Brian. I agree that this sort of argument deserves relatively low epistemic weight and that the argument is very speculative, as I tried to emphasize in the paper, though I worry that not everybody picked up on that. I'm definitely more uncertain than you on the topic, perhaps because of different views on suffering. Thanks for the comment.

I'm thankful for this discussion. Previously, I was under the impression that most people who looked deeply into WAS (wild-animal suffering) concluded that there was definitely net suffering. However, now it's clear to me this isn't the case.

Brian - I'm wondering if you've explained elsewhere exactly what you mean by "extreme, unbearable suffering can't be outweighed by other organism-moments experiencing pleasure." Is this an expression of negative utilitarianism, or just the empirical claim that current organisms have greater suffering capacity than pleasure capacity?

I am a total hedonic utilitarian, and not negative leaning at all, so I'm wondering what conclusion this philosophical position would lead to, given all the empirical considerations.

You're right that communication on this topic hasn't always been the most clear. :)

This section of my reply to Michael Plant helps explain my view on those questions. I think assessments of the intensities of pain and pleasure necessarily involve significant normative judgment calls, unless you define pain and pleasure in a sufficiently concrete way that it becomes a factual matter. (But that begs the question of what concrete definition is the right one to choose.)

I guess most people who aim to quantify pleasure and pain don't choose numbers such that unbearable suffering outweighs any amount of pleasure, so the statement you quoted could be said to be mainly about my negative-utilitarian values (though I would say that a view that pleasure can outweigh unbearable suffering is ultimately a statement about someone's non-negative-utilitarian values).

I think this is a good place to start, although not written by Brian:

There’s ongoing sickening cruelty: violent child pornography, chickens are boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.” (See this essay for a longer case for suffering-focused ethics.)

Looking back at this now, I don't really get the original setup, either:

Let the amount of enjoyment E and suffering S both be functions of the associated costs C_E and C_S. If for each success, we have n failures, the maximization of E(C_E) + S(C_S) subject to C_E + nC_S = M gives the following first-order condition (...)

C_E + nC_S reflects the costs associated with enjoyment and suffering for the average individual (after multiplying by 1/(1+n)), where C_E and C_S are the costs of the individual instances. But why are we maximizing E(C_E) + S(C_S)? The average individual faces n instances of S for each instance of E, so should we maximize E(C_E) + nS(C_S) instead and then look at the resulting optimum? Or something else?

I'm also not convinced that the hedonic functions of costs should be concave; my intuition is that concavity doesn't hold at the extremes of suffering, which take up all of an individual's attention and priority.

It's also assumed that animals who die without having offspring have net negative lives, while those who do have offspring have net positive lives. Neither seems obvious to me: you can imagine animals who die without having offspring but nevertheless have long and decent (though plausibly worse) lives, e.g. individuals lower in the status hierarchy of their group.

I don't have a strong view on the original setup, but I can clarify what the argument is. On the first point, why we maximize E + S: the idea is that we want to maximize the likelihood that the organism chooses the action that leads to enjoyment (the one being selected for). That probability is a function of how much better it is to choose that action than the alternative. So if you get E from choosing that action and lose S from choosing the alternative, the benefit from choosing that action is E - (-S) = E + S. However, you only pay to produce the experience of the action you actually take. This last reason is why the costs are weighted by probability, while the benefits, which are only about the anticipation of the experience you would get conditional on your action, are not.

It occurs to me that a fuller model might endogenize n, i.e. be something like max P(E(C_E) + S(C_S)) s.t. P(.) C_E +  (1 - P(.)) C_S = M. (Replacing n with 1 - P here so it's a rate, not a level. Also, perhaps this reduces to the same thing based on the envelope theorem.)

And on the last point: that consideration is relevant for interpreting the model (e.g. for choosing the value of n), but it is not an assumption of the model.

Hi Zach. Great work! I have one clarificatory question. It is unclear to me whether you use your model to yield predictions about net average welfare or about net total welfare. Sometimes you seem to speak about one and sometimes about the other ("on average", "on the whole", etc.). Thanks!

Thanks for this comment. I think the model equally yields predictions on both. In no way does the model give any sense of scale or units. The only thing it's useful for at this stage is saying whether suffering exceeds enjoyment or vice versa, and that should be true on average if and only if it's true on the whole, unless I'm missing something.

Thanks for the reply! I think you are not missing anything: if there's total net positive/negative value then, necessarily, there's average net positive/negative value, and vice versa. But the average net negative value can be lower than the total net negative value due to some individuals having much better lives than others. Some axiologies care about the average and some about the total, so it would be interesting to have separate measurements of each.

I've had a few thoughts about this recently (well, I've had a post drafted about point 3 for a while, but I don't think there's enough in it to warrant a whole post).

1. I think r/K-selection can tell us a lot, and more so the closer a species is to our own. For example, I don't think most rodent brains and development are so different from our own that we should expect them to process pain much less intensely than us, including at corresponding developmental milestones. This is a bad sign for rodent welfare.

2. However, invertebrates are very different from us, so we should be careful extending these arguments to them.

3. I think these results only really apply at equilibrium. Populations are not in general at equilibrium, and my prior is that a change in conditions is in expectation bad for individual welfare, since genes are optimized for a given set of environments (related to the observation that there seem to be more ways for things to go wrong than to go right). So this is a reason to expect average welfare to be lower than you'd expect at equilibrium: if your prior was 0 at equilibrium, it should be negative outside equilibrium.

4. However, I also suspect there's an argument going in the opposite direction (is it the same as the original one in the OP?): animals act to avoid suffering and seek pleasure, and the results might better be thought of as applying to behaviours in response to pleasure and suffering as signals, rather than directly to those signals, because evolution is optimizing for behaviour and optimizing for pleasure and suffering only as signals for behaviour. If we thought a negative event and a positive event were equally intense, probable and reinforcing *before* they happened, the positive event would be more likely to continue or recur afterwards than the negative one, because the animal seeks the positive and avoids the negative. This would push the average welfare up. I'm pretty uncertain about this argument, though.

To illustrate a bit further, suppose individuals from a species are equally likely to encounter a given harm A or a given reward B for the first time, and these are equally intense for the animals, and the animals spend as many resources to avoid A as they do to seek B. After the first encounter with each of A and B, the reward B becomes more likely (immediately by reflex or in the future due to learning). Would this imply the species is not at equilibrium and evolutionary pressures should force them to spend relatively more resources on avoiding A than on seeking B? If not, this is a good sign for average welfare.

I've elaborated on point 3 here.
