
Under-Attributing Moral Patienthood

Under-attribution of moral patienthood means mistakenly seeing or treating a being as an object, a moral patient as a non-moral patient. We tend to under-attribute moral patienthood to beings when we benefit from them, exploit them, or perceive them as fundamentally different from ourselves. Colonizers, for instance, saw enslaved people as biological machines rather than as humans deserving moral consideration. Until the 1980s, surgical operations on babies were often performed without anesthesia because it was believed that babies did not feel pain; their reactions were dismissed as mere mechanistic reflexes.

We do not need to look to the past to find under-attribution of moral patienthood. Every day, billions of animals are used by humans, and humanity as a whole still fails to grant them moral patienthood and moral consideration. The cost of under-attributing moral patienthood has been unimaginably large: throughout history, enslaved people, animals, women, and babies, billions of beings in total, have suffered mistreatment as a result of this moral failure.

Under-attribution also has structural disadvantages in the case of animals. When our food systems depend on the exploitation of beings that deserve moral consideration, self-serving incentives make that exploitation harder to acknowledge. And once such a system is established, it is difficult to change, since those who benefit from it do not want to lose their advantages.

Despite all of this, our moral circle is expanding. Compared to the previous century, we have made immense progress in attributing moral patienthood to beings who deserve it. I am especially proud of the EA community’s open-mindedness in expanding this circle even further to invertebrates and even digital beings. We can work to understand whether these beings deserve moral patienthood and, if we find that we have been under-attributing, act according to their best interests as welfare subjects. However, as we move toward beings that are increasingly different from us, under-attribution will not be the only mistake we risk making.

Over-Attributing Moral Patienthood and Moral Circle Calibration

Over-attribution of moral patienthood means mistakenly seeing or treating an object as a being, or a non-moral patient as a moral patient. We might over-attribute moral patienthood for various reasons, sometimes simply because we like something or perceive it as important. For example, some people treat trees as moral patients and believe they should have rights protecting their intrinsic welfare. We are also more likely to attribute moral patienthood to entities that have eyes, that move in apparently self-directed ways, or that can use language. This helps explain why 18% of the current U.S. population believes that present-day AI systems are sentient.[1] This figure may seem surprising, but considering how well large language models can hold conversations and even act as romantic companions when designed to do so, it becomes easier to understand.

Since we can also over-attribute moral patienthood, perhaps instead of merely focusing on moral circle expansion, we should aim for moral circle calibration. This means calibrating our moral circle to include only beings who genuinely deserve moral patienthood. We need to account for both under-attribution and over-attribution and adjust our moral concern accordingly.

Over-attribution also carries costs. Attributing moral patienthood to beings means we have certain duties toward them, such as ensuring we do not unnecessarily harm them; it may go further, compelling us to grant them rights. That becomes problematic because our resources are limited. The more resources we allocate to entities that do not actually deserve moral patienthood, the fewer remain for the eight billion humans and the quadrillions of animals who are still unable to live peaceful lives.

Imagine if the limited funds available for human and animal welfare were redirected toward improving the well-being of current AI systems. The cost of being mistaken is extremely high: with the same resources we could save real lives, yet we might instead waste them on entities that do not deserve moral patienthood. Moreover, as a community, we have limited “weirdness points.” Whenever we advocate for unusual ideas in public or in the media, we spend some of these points. Being perceived as weird is acceptable if it is justified and leads to impact, but losing legitimacy and seriousness for no good reason would be unfortunate.

As we approach the outer edge of our moral circle, including all beings who truly deserve moral patienthood and excluding those who do not, avoiding over-attribution becomes increasingly important. Once we have already included those who deserve moral consideration, the cost of mistakenly expanding further becomes greater.

Erring on the Side of Expansion

In the tension between under-attribution and over-attribution, there is a common rule of thumb: “err on the side of expansion.” The reasoning behind this principle is that the cost of under-attribution has historically been immense. It allowed “business as usual” to continue and resulted in enormous suffering in the past, and could do so again in the future. I am sympathetic to this view because, in many cases, losing some resources would be a small price to pay if it meant preventing vast amounts of suffering for countless beings.

However, I am also wary of this approach. While the intention behind erring on the side of expansion is understandable, it risks oversimplifying the complex science of consciousness. It may lead us to act as though the question of moral patienthood has already been settled when it is not. I fear that this mindset could diminish the urgency of improving our understanding of these issues.

Moreover, erring on one side only works if the risks of over- and under-attribution are asymmetric, meaning one is significantly more harmful than the other. But in the case of digital sentience, this might not be true. If we err on the side of expansion and grant rights, including political and representational rights, to digital beings, we could also face severe consequences. AI systems might act against our best interests, potentially disempowering or even harming humanity.

If we prematurely accept the moral status of digital beings, we will also need to share our already limited resources with them. Imagine if digital beings turned out not to be sentient, yet significant funding were redirected from animal welfare and AI safety toward their supposed well-being because we were erring on the side of expansion. This would be a serious loss for both causes. These considerations suggest that over- and under-attribution both carry serious risks, which makes the “err on one side” approach untenable.

The most urgent step for us now is to deepen our understanding of the beings who might deserve inclusion in our moral circle and to develop empirical frameworks for assessing their sentience. 

The EA community has already made remarkable progress in expanding our moral circle to include many animals and even some invertebrates. However, as we encounter beings that are very different from us, it becomes increasingly difficult to determine whether they are sentient. We are no longer just at risk of under-attributing moral patienthood; we are also at greater risk of over-attributing it. We have entered a period in which moral circle calibration, rather than mere expansion, should be our focus.

  1. ^ See: Pauketat, Janet; Ladak, Ali; Anthis, Jacy (2023). “Artificial intelligence, morality, and sentience (AIMS) survey”, Mendeley Data, V2, doi:10.17632/x5689yhv2n.2


Comments

Thanks for writing this! It seems all the more important to get this right given the trend that the beings on the edge of our moral circle tend to be the most numerous, meaning that if we take the possibility of their sentience seriously, we may spend quite large amounts of resources trying to help them. Seems worth trying to figure out whether beings on the edge of our moral circles are basically all that matters or don't matter at all!
