
TLDR: When discussing existential risks, the most common focus is on humanity's extinction. In this post, I use arguments based on sentientism and moral uncertainty to demonstrate that this sole focus on humans is unwarranted; we should consider all sentient life when talking about existential risks.

0 Preface

This essay is an entry to question 2 of the Open Philanthropy AI Worldviews Contest:

Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?

Here, I do not directly answer the question, but I address the meaning of the term 'existential catastrophe'. The question points readers to an explanation of three different types of existential catastrophes: human extinction, unrecoverable civilizational collapse, and unrecoverable dystopia. The first two types are completely focused on humans, and the third type implies a focus on humans. At the very least, none of the three types mentions sentient beings more broadly, which I will argue is problematic.

1 Introduction

To mitigate the existential risk from AI, it is likely useful to have a clear, robust, and inclusive moral framework to assess what is important. Better recognition of what is important can afford increased protection and flourishing of morally valuable beings. More concretely, it might improve collaboration between people due to a clearer shared goal, and it might make it easier to find, notice, and seize risk-reducing opportunities.
When people discuss existential risks, they commonly define them as risks that threaten the loss of humanity’s long-term potential (CSER, 2023; Ord, 2020). In this paper, I argue for extending the definition of existential risk to incorporate all sentient life.

I focus on existential risk from AI, but arguments similar to the ones I raise here can be applied to other existential risk areas as well. Furthermore, throughout this paper, I roughly equate sentience to the capacity to suffer, and I assume that all that is sentient is alive, but not all that is alive is necessarily sentient. 

The structure of the essay is as follows. To argue that all sentient life should be morally considered in the context of existential risk, one must first understand the current definitions of existential risk, which is the topic of Section 2. In Section 3, I argue that all sentient life deserves moral consideration from a hedonistic utilitarian perspective. I will not substantially defend this hedonistic utilitarian perspective; I build on the book The Point of View of the Universe by de Lazari-Radek and Singer (2014). Even after presenting these sentientist arguments, it remains possible that sentient life should not be included in the moral framework of AI; some uncertainty remains about the correctness of the arguments. Therefore, I discuss moral uncertainty in Section 4. Finally, I present the conclusion in Section 5.

2 An overview of existential risk definitions 

Before I dive into the differences between existential risk definitions, I address some points of broad consensus in the research field. To my knowledge, people agree that existential risk concerns the probability of an existential catastrophe occurring; I use the term in the same way. This paper focuses on what constitutes an existential catastrophe, not on the probability of an existential catastrophe occurring.

Some authors on existential risk do not exclusively focus on humans. Bostrom (2013) states that “An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development”. Additionally, Cotton-Barratt and Ord (2015) define an existential catastrophe as “an event which causes the loss of a large fraction of expected value”. The crucial part of these two definitions lies in the terms ‘desirable future development’ and ‘value’, respectively. These terms can stand for whatever is thought to be valuable, as Cotton-Barratt and Ord (2015) themselves remark.
Nonetheless, I argue that both of these definitions are unsatisfactory. 
First, in Bostrom’s definition, intelligent life is not a clearly defined subset of possible beings. In his subsequent description of existential risk, Bostrom mentions only humans as a form of 'intelligent life'. Regardless of what intelligent life exactly means, I will argue in Section 3 that this focus on both intelligent and Earth-originating life is not inclusive enough.
Second, Cotton-Barratt and Ord (2015) more closely match my view with their definition, as the meaning of the word 'value' is very broad, and few perspectives can conflict with such a broad statement. However, in their paper they fail to specify what they mean by ‘value’, which clouds the moral framework.

3 The moral value of all sentient life 

Why would non-human life matter morally? I argue that the capacity to suffer is both necessary and sufficient for an entity to be worthy of moral consideration. Moral consideration is tightly related to moral patienthood, which entails that one matters morally in and of oneself (Müller, 2020). This philosophical perspective is called ‘sentientism’; see Jamieson (2008) for further reference.
To argue why sentience is a necessary condition for moral patienthood, Singer (1996) explains in an intuitive sense that it is nonsensical to talk about the interests of a stone being kicked; the stone does not feel anything (in most philosophical views). When a human is kicked, it matters only because humans are sentient. If a human had no subjective experience at all, it would not feel pain from the kick, it would not feel shame at being kicked, it would not feel anything. Of course, kicking a hypothetical non-sentient human has many effects: a toughening culture, friends and family being shocked, psychological damage to the kicker themself, and so on. However, none of these reasons relate directly to the non-sentient human; all of them relate to the suffering of some other sentient being.
Sentience is also a sufficient condition for moral patienthood. Singer (1996) argues that all sentient humans are moral patients. To determine which other beings deserve moral patienthood, one must find the condition(s) that qualify sentient humans for moral consideration. Singer argues that there is no such condition that differentiates all humans from all other animals; any reasonable distinction one can draw between them produces overlap. For example, one could argue that intelligence is what separates humans from other animals. However, due to brain damage, some humans cannot talk, recognize people, and so on; in some relevant sense, their intelligence is comparable to that of other animals. Does these humans’ reduced intelligence imply that they do not deserve moral patienthood? Singer does not think so, and neither do I. The only clear distinction one could draw is which species a being belongs to, but this position is untenable. Horta (2010) calls the unjustified focus on humans as opposed to other sentient life 'speciesism'. Merely belonging to a group is an invalid reason for a difference in moral consideration, just as with other types of discrimination such as sexism and racism. Importantly, if sentience makes humans moral patients, then this property must also extend to other sentient beings.

My argument would lack relevance if there were no sentient beings besides humans, but humans do not seem to be the only sentient beings in existence (see e.g. Broom, 2014; Safina, 2015). Of course, one can arguably never be certain that any entity other than oneself has subjective experience, not even other humans. These are reasonable concerns, but discussing their validity is beyond the scope of this paper. I merely note that what matters in hedonistic utilitarian decision-making is the probability of something being sentient: although one may be 99 rather than 100 per cent certain that other people are sentient, one still treats them nearly the same, because the difference in probability hardly affects the expected utility. The same reasoning applies to other beings; one might believe with roughly 90 per cent certainty that dolphins are sentient, and therefore they deserve significant moral consideration.
To be clear, it is not strictly relevant to my argument which beings are sentient and which are not. If it turns out that dolphins are not sentient, then they need not be considered morally. On the other hand, if some AI systems or an alien species are sentient, they too are worthy of moral consideration. The intensity of sentience is also relevant for determining expected utility, but it is not central to this paper.


With sentience as a necessary and sufficient characteristic for moral consideration, restricting the definition to life originating from Earth unnecessarily and unfairly excludes sentient alien life. Imagine an advanced alien civilization swiftly and harmlessly replacing all ‘intelligent’ life on and from Earth. The replacing alien civilization is very similar to human civilization: they can love, they can enjoy food and music, and they push the frontier of knowledge with philosophy and science. Additionally, they bring their own plants, animals, fungi, or anything else one could classify as intelligent. They do all these things, and all sentient beings are on average substantially happier than those in our human society. This thought experiment illustrates that the situation is not bad from the point of view of the universe, because sentient life is not extinct but thriving. No existential catastrophe has occurred from a hedonistic utilitarian standpoint, and therefore sentient alien life must be considered in the definition of an existential catastrophe.
Similarly, imagine a non-sentient AI system that controls a future universe in which nothing is sentient. It runs and maintains the data centre with the hardware it runs on, it advances some scientific fields, and it has the potential to become much more intelligent and to create copies of itself. Further assume that the AI system and its successors do not and will not have any positive or negative affect; apart from that, there is nothing in the world. The system is, at least by reasonable standards, intelligent, and it does not reduce the potential for intelligent life in the future. Therefore, no existential catastrophe has happened according to the definition in Bostrom (2013). However, I argue that this does constitute an existential catastrophe, because the necessary condition of sentience is not met. There are no experiences; it is a morally worthless universe containing intelligent rocks.

A counterargument can be made here. One can argue that intelligence does not exist without sentience, for example because subjective experience is a necessary side effect of a body processing information (Robinson, 2019). If this is true, it would indeed completely undermine my argument. Refuting this counterargument is outside the scope of this paper. However, such counterarguments introduce uncertainty into my own position, and thus provide a good reason to examine it from a moral uncertainty perspective.

4 Moral uncertainty

There is a chance that sentientism is incorrect: perhaps the current ideas on sentience are off, or there is a reasoning flaw in the argument that I do not currently see. This incorrectness is not necessarily falsifiable. Moral uncertainty is the study of how to act morally despite this uncertainty (Lockhart, 2000; MacAskill et al., 2020).

Let us further examine the uncertainty in moral theories. Bourget and Chalmers (2021) illustrate this uncertainty with a survey of philosophers, who were asked to give their opinion on philosophical questions, including their stance on normative ethics. Participants were asked to vote for the position(s) they accepted or leaned towards; they could vote for multiple answers, which some did, and the summed number of votes totalled 2050.
The answers to the question “Normative ethics: virtue ethics, deontology, or consequentialism?” were as follows: 27.2% of the votes were for deontology, 26.0% for consequentialism, 31.4% for virtue ethics, and 15.4% for the option ‘other’. Interestingly, the philosophers voting for multiple positions indicate the commensurability of different stances on normative ethics: if one votes for both deontology and consequentialism, there must be some way, if the philosopher is not contradicting themself, to bridge the two perspectives. Nonetheless, as some respondents accept or lean towards positions that other respondents reject, disagreement remains. Therefore, some people are necessarily wrong, and we do not know for sure who they are. Thus, Bourget and Chalmers (2021) indirectly illustrate that my sentientist argument in Section 3 might also be wrong.

The core argument of this section is that even if sentientism and hedonistic utilitarianism turn out to be wrong, existential risk should still concern all sentient life. To aid decision-making under uncertainty, Lockhart (2000) uses the maximize-expected-moral-rightness criterion: one weighs how morally right an action is according to each moral view by one's credence in that view, and then chooses the action with the highest expected moral rightness. The credences are determined by those who make the decision, ideally with the clearest thinking and with perfect information.
For example, we have a moral statement A with two contrasting opinions, namely X and Y, to which I assign credences of 0.9 and 0.1 respectively. See Table 1 for an overview of this example. If we act according to A being true, X holds that the moral rightness is 0.5 while Y assigns a moral rightness of 0.6. In this case, the value of the criterion is (0.9 × 0.5) + (0.1 × 0.6) = 0.51. On the other hand, if we act according to A being false, X holds that the moral rightness is 0.2 while Y assigns a moral rightness of 0.8. In this case, the value of the criterion is (0.9 × 0.2) + (0.1 × 0.8) = 0.26. Finally, the maximize-expected-moral-rightness criterion holds that one should act according to A being true, as we get a higher score for A being true than for A being false. 

Table 1: Example of applying the maximize-expected-moral-rightness criterion
|  | Opinion X (credence of 0.9) | Opinion Y (credence of 0.1) |
| --- | --- | --- |
| Moral statement A | 0.5 | 0.6 |
| Moral statement not A | 0.2 | 0.8 |
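
To make the mechanics of the criterion concrete, here is a minimal computational sketch of the Table 1 example. The function and variable names are my own illustration; the criterion itself does not prescribe any particular implementation.

```python
# Expected moral rightness: weight each opinion's rightness score for an action
# by one's credence in that opinion, then sum.
def expected_moral_rightness(credences, rightness_scores):
    return sum(c * r for c, r in zip(credences, rightness_scores))

credences = [0.9, 0.1]  # credence in opinion X and opinion Y

# Rightness scores each opinion assigns to the two available actions (Table 1)
act_as_if_A_true = [0.5, 0.6]
act_as_if_A_false = [0.2, 0.8]

print(expected_moral_rightness(credences, act_as_if_A_true))   # ≈ 0.51
print(expected_moral_rightness(credences, act_as_if_A_false))  # ≈ 0.26
# The criterion recommends acting as if A is true, since 0.51 > 0.26.
```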

Let us apply the maximize-expected-moral-rightness criterion to the case at hand. I will not calculate the criterion score for every plausible moral opinion; rather, I outline some views that illustrate the general mechanics relevant to our moral statement (see Table 2 for an overview). The moral statement is whether we should include all sentient life in the moral framework of AI. The opinions on this moral statement are provided by theories such as sentientism and contractarianism. Contractarianism holds that moral norms are determined through mutual agreement or a contract (Cudd & Eftekhari, 2021). For clarity, I pick only one other theory, and I pick contractarianism because it clearly opposes sentientism.

Table 2: Applying the simplified maximize-expected-moral-rightness criterion
|  | Sentientism | Contractarianism |
| --- | --- | --- |
| Include all sentient life | Really good | Possibly bad |
| Do not include all sentient life | Really bad | Somewhat good |

According to the sentientist arguments raised in the previous section, it would be terrible not to consider all sentient life morally. Therefore, sentientists assign a high degree of moral rightness to the moral statement being true, and a low degree if it is false. According to contractarianism, it is plausible that at least some non-human sentient beings are not rational agents, and therefore not all sentient life should be given moral patienthood (Rowlands, 1997). Contractarians would give a lower moral rightness score to the moral statement being true, because many additional moral patients might disadvantage the current moral patients’ position: it might be harder to care for many more, reducing the quality of care for the original moral patients. But this judgment would not be as extreme as the sentientists’ judgment of exclusion, because all rational agents would still be considered moral patients.
Clearly, there is an asymmetry of the stakes: the moral issue at hand is much more important for perspectives in line with sentientism than for perspectives in line with contractarianism. In other words, the moral rightness scores push the maximize-expected-moral-rightness criterion towards including all sentient life. What remains is to assess the credences for both perspectives. I assign a substantially higher credence to the sentientist perspective than to the contractarian perspective. However, the strength of my argument is that one could assign equal credences to both perspectives, or even lean towards the contractarian perspective, and the maximize-expected-moral-rightness criterion would still hold that one should include all sentient life in the moral framework of AI.
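
To illustrate the asymmetry numerically, suppose (purely as illustrative numbers of my own, not values derived from either theory) that sentientism assigns a moral rightness of 0.95 to including all sentient life and 0.05 to excluding it, while contractarianism assigns 0.4 and 0.6 respectively. Even with credences leaning towards contractarianism, say 0.3 for sentientism and 0.7 for contractarianism, inclusion scores 0.3 × 0.95 + 0.7 × 0.4 = 0.565, whereas exclusion scores 0.3 × 0.05 + 0.7 × 0.6 = 0.435, so the criterion still recommends including all sentient life.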

There are some flaws in the maximize-expected-moral-rightness methodology, which weaken but do not invalidate my argument. I discuss the three most relevant flaws, and afterwards consider their effect on my argument. These flaws are best described in the book Moral Uncertainty by MacAskill et al. (2020). 
First, there is the issue of interaction effects: different moral issues can affect each other’s credence and moral rightness. An example of an interaction effect is the question of whether any beings other than yourself are sentient, which I raised in the previous section. If no being other than you, a human, is sentient, then the inclusion of all sentient life is irrelevant, because all sentient life has already been included. This view seems unlikely to be true, but it could be, which requires assigning at least some credence to it. Interaction effects like these complicate the moral decision-making process I have illustrated above.
Second, there is the issue of intertheoretic comparisons: different theories may be hard to compare. For example, utilitarianism and deontology seem hard to compare because utilitarians can use the expected value of an action to determine a moral rightness score, while deontology tends to be binary; one might have a rule that one should not lie, but the degree of moral rightness of a particular lie is hard to determine.
Third, there is the problem of infinite regress. The foundational principle of the framework of moral uncertainty is that one cannot be completely sure that one’s preferred moral theory is correct. To account for this uncertainty, one can use moral uncertainty methods like the maximize-expected-moral-rightness criterion. However, this method might be wrong as well; one could have made a reasoning error, as has been common throughout history. So one should account for that uncertainty too, and this uncertainty persists through every layer of fixes. Therefore, one can never be completely sure of the theory or method one uses, leading to infinite regress.

How do these flaws affect my argument? MacAskill et al. (2020) correctly argue that interaction effects complicate arguments like the one I have made. However, I have discussed some of these interaction effects in Section 3, and there do not seem to be interaction effects that flip the outcome; these complications merely decrease the strength of my argument. Similarly, the issue of intertheoretic comparisons complicates calculating the maximize-expected-moral-rightness criterion. Still, the effect on the thesis of this essay seems minimal; the argument from the asymmetry of the stakes still applies, but the degree of asymmetry might be harder to establish. Lastly, infinite regress is challenging, but this problem applies to any theory of moral decision-making (MacAskill et al., 2020). Generally, an unsolved problem in a methodology decreases one’s confidence in arguments resulting from that methodology, and this applies here as well. To sum up, these methodological problems decrease the strength of my argument, but they do not invalidate it.

I finish this section with a recommendation by MacAskill (2022): given moral uncertainty, a generally good approach is to keep options open. We have seen tremendous moral change throughout history, from abolitionism to women’s rights, and therefore we should expect new moral insights in the future. Because of this, the moral framework with which we steer AI should be flexible enough to incorporate future insights. This is also in line with the two definitions in Section 2, where Bostrom (2013) and Cotton-Barratt and Ord (2015) use ‘desirable future development’ and ‘value’ to make their definitions future-proof.

5 Conclusion 

I have argued that all sentient life should be incorporated into the definition of existential risk. 

I started my argument by examining how existential risk is currently defined, observing that existing definitions focus on humans. I then argued that sentience is a necessary and sufficient condition for moral consideration. Naturally, if other beings satisfy the condition of sentience, then they should also be considered morally. I showed that ‘Earth-originating’ and ‘intelligent’ are inadequate conditions for defining who should be considered in an existential catastrophe.

Although this sentientist argument is strong, one must also consider the possibility that it is wrong; after all, many people have thought extensively about these issues and still disagree. The framework of moral uncertainty provides tools for making decisions despite this uncertainty, one of which is the maximize-expected-moral-rightness criterion. Using this criterion, I showed that one should morally consider all sentient life due to the asymmetry of the stakes, even if one leans towards the perspective that not all sentient life deserves moral consideration. There are some methodological issues with the maximize-expected-moral-rightness criterion: interaction effects, intertheoretic comparisons, and infinite regress. I showed that these issues weaken my argument but give no reason to invalidate the use and results of the criterion. Lastly, I noted that we should expect new moral insights in the future, as there have been many in the past; therefore, the moral framework of existential risk should be future-proof.

To conclude, one must include all sentient life to make the moral framework of AI more robust. The robustness of this moral framework could help steer the powerful technology of AI in the right direction, creating a better universe for all.


Acknowledgements 

I want to thank Martijn Klop, Freek van der Weij, Nadja Flechner, and Nathalie Kirch for their feedback throughout various stages of the writing process. 

References 

  • Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4 (1), 15–31. 
  • Bourget, D., & Chalmers, D. J. (2021). Philosophers on philosophy: The 2020 PhilPapers survey. https://philpapers.org/rec/BOUPOP-3
  • Broom, D. M. (2014). Sentience and animal welfare. CABI.
  • Cotton-Barratt, O., & Ord, T. (2015). Existential risk and existential hope: Definitions. Future of Humanity Institute: Technical Report, 1 (2015), 78. 
  • CSER. (2023). Centre for the study of existential risk. https://www.cser.ac.uk/
  • Cudd, A., & Eftekhari, S. (2021). Contractarianism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2021). Metaphysics Research Lab, Stanford University. 
  • de Lazari-Radek, K., & Singer, P. (2014). The point of view of the universe: Sidgwick and contemporary ethics. Oxford University Press. 
  • Horta, O. (2010). What is speciesism? Journal of agricultural and environmental ethics, 23, 243– 266. 
  • Jamieson, D. (2008). Sentientism. In A companion to environmental philosophy (pp. 192–203). John Wiley & Sons.
  • Lockhart, T. (2000). Moral uncertainty and its consequences. Oxford University Press.
  • MacAskill, W. (2022). What we owe the future. Basic Books.
  • MacAskill, W., Bykvist, K., & Ord, T. (2020). Moral uncertainty. Oxford University Press. 
  • Müller, V. C. (2020). Ethics of artificial intelligence and robotics. https://plato.stanford.edu/entries/ethics-ai/ 
  • Ord, T. (2020). The precipice: Existential risk and the future of humanity. Bloomsbury Publishing.
  • Robinson, W. (2019). Epiphenomenalism. https://plato.stanford.edu/entries/epiphenomenalism/
  • Rowlands, M. (1997). Contractarianism and animal rights. Journal of applied philosophy, 14 (3), 235–247. 
  • Safina, C. (2015). Beyond words: What animals think and feel. Macmillan.
  • Singer, P. (1996). Animal liberation. Springer. 
