Originally submitted as a project for the AI Safety Fundamentals AI Governance course in April 2025; edited and somewhat expanded for publication on the EA Forum in July 2025.
Thanks to Yip Fai Tse, Arturs Kanepajs, Max Taylor, Constance Li, Kevin Xia, Adrià Moret and Sam Tucker-Davis for advice and suggestions before and/or after the writing of this piece.
Introduction
We review how major AI governance frameworks address – or, more commonly, overlook – the interests of sentient nonhumans, including both biological animals and potentially sentient artificial beings. Our analysis reveals a systemic exclusion of nonhuman interests across governance instruments, with few meaningful acknowledgments of their moral status.
Grounded in an ethical perspective that values sentience regardless of species or substrate, we propose approaches to integrate nonhuman interests into these AI governance frameworks, focusing on the EU AI Act and various UN AI governance instruments.
Our proposals are deliberately ambitious, as they address a large and critical blind spot in current AI policy discussions. We urgently need governance frameworks that reflect our moral responsibility towards all beings capable of suffering, establishing a more inclusive ethical and political foundation for technological progress.
1. Policy review
We review AI policy from the European Union (EU) and the United Nations (UN) because of the direct and indirect influence of intergovernmental policy, and we use existing EU and UN governance instruments as inspiration for our own policy suggestions. We also touch on Serbia’s ethical guidelines for AI, the only national AI policy we have found to date that refers to animals.
1.1. EU Ethics Guidelines for Trustworthy AI, 2019
The EU’s High-Level Expert Group on AI (AI HLEG) produced a draft and a final version of Ethics Guidelines for Trustworthy AI.
The 2018 draft included a section titled “The Principle of Non maleficence: ‘Do no Harm’”, which focused on preventing harm to humans, especially vulnerable demographics. This section also referred to animals: “Avoiding harm may also be viewed in terms of harm to the environment and animals, thus the development of environmentally friendly AI may be considered part of the principle of avoiding harm.”
In the 2019 final version, AI HLEG shortened the section on “The principle of prevention of harm” and removed the explicit reference to animals, referring only to “consideration of the natural environment and all living beings”. However, in the “Trustworthy AI Assessment List” that forms part of the overall framework, the authors ask: “Did you consider the potential impact or safety risk to the environment or to animals?”
1.2. EU AI Act, 2024
In 2024, EU lawmakers adopted the EU AI Act, the world's first comprehensive legal framework for regulating artificial intelligence. The legislation went through several drafting phases and includes related documents such as Codes of Practice.
The main text of the EU AI Act contains no substantive mentions of animals or nonhuman interests. The regulation focuses exclusively on human-centered concerns, including fundamental rights, safety, and democratic values. Discussions of environmental impacts are framed solely in terms of human interests and sustainability.
However, the General-Purpose AI Code of Practice (published in July 2025) – which serves as a set of non-binding guidelines for general-purpose AI providers (not formally linked to the EU AI Act) – lists “risk to…non-human welfare” as a “systemic risk” under the Safety & Security chapter of the code.
1.3. United Nations policy
There are no mentions of nonhuman interests in Governing AI for Humanity: Final Report (2024) by the UN’s High-Level Advisory Body on Artificial Intelligence, or in the UN Secretary-General’s Roadmap for Digital Cooperation (2020).
The UNESCO Recommendation on the Ethics of AI (2021) refers to the potential impact of AI on “animal welfare” in the Preamble.
1.4. Serbia’s Ethical Guidelines for AI, 2023
The Republic of Serbia issued its Ethical Guidelines for Development, Implementation and Use of Robust and Accountable Artificial Intelligence in 2023. These guidelines state that “the artificial intelligence systems that are developed must be in harmony with the wellbeing of humans, animals and the environment”.
2. Analysis
If the interests of sentient beings matter regardless of their species and substrate, existing AI governance policy fails to protect the interests of sentient nonhumans. Our review found very few references to nonhumans in EU and UN policy, and Serbia’s is the only national AI policy we found that mentions animals.
Where animals are mentioned in this governance literature, they are often put in the same category as the environment, implying that their relevance and value is instrumental (as a resource for human use, or as a critical component of the ecosystem) rather than intrinsic. When there are references to the wellbeing or welfare of animals in the policy, there is no elaboration regarding what protections they are to be afforded in the context of AI.
Several factors make recognising nonhuman interests in AI governance harder still. There is a lack of consensus regarding how to detect and measure the sentience of nonhumans, and incorporating nonhuman interests into AI policy would add complexity to governance challenges that are already exceptionally hard – especially given the risk of trade-offs between the interests of humans and those of nonhumans. Furthermore, the industries that use animals for profit (such as factory farming) continue to wield significant economic and political influence.
Despite these challenges, we consider the “nonhuman gap” in AI governance to be a moral and political failure that demands urgent attention. We believe that sentience is necessary and sufficient for moral consideration, regardless of species and substrate, and that the development of powerful AI may profoundly affect all sentient beings in the near-term future. Furthermore, polls around the world suggest that people care deeply about the wellbeing of animals and want to avoid harming them.
We are encouraged by the Republic of Serbia’s references to the “pain, suffering, fear and stress” that animals (may) feel under the description of “Ethics” in the glossary of their Ethical Guidelines for AI. We consider this focus on animal sentience and suffering to be entirely appropriate for policymakers who are seeking to reduce harms and regulate AI in a responsible manner.
We consider the nonhuman gap to be a significant problem because it makes sentient nonhumans much more vulnerable to harm in a society affected or transformed by present or future AI systems. This harm could come in a wide range of forms: from the intensification of existing systems of animal exploitation (such as factory farming), to reducing the capacity of humans to help animals by locking in speciesist cultural and political norms, to creating entirely novel ways to exploit and inflict suffering on sentient beings.
3. Proposals
In this section we make a series of proposals for changing or creating intergovernmental AI governance instruments so that they better reflect the interests of sentient nonhumans. This piece will not explore ways to implement these proposals in the real world, but we strongly encourage work (including comments under this piece) that could facilitate implementation.
3.1. Updating the EU AI Act
Legal Foundation
The legal foundation for updating EU law to ensure that it protects the interests of animals may come from Article 13 of the “Provisions having General Application” of the Treaty on the Functioning of the European Union (TFEU), which requires that full regard is paid to “the welfare requirements of animals” as “sentient beings” in policies relating to technological development:
In formulating and implementing the Union's agriculture, fisheries, transport, internal market, research and technological development and space policies, the Union and the Member States shall, since animals are sentient beings, pay full regard to the welfare requirements of animals, while respecting the legislative or administrative provisions and customs of the Member States relating in particular to religious rites, cultural traditions and regional heritage.
In turn, Articles 5.8 and 96(1)(e) of the EU AI Act underline the importance of consistency between the EU AI Act and existing EU legislation, including TFEU.
Areas of amendment
Expand purpose and values (Article 1)
- Add "protection of sentient nonhumans and their wellbeing" as a core value
- Include potential artificial sentience within the scope of protections
Broaden risk assessment (Article 9)
- Include "risks to sentient nonhuman wellbeing" in mandatory assessments
- Add evaluation of potential harm to artificially sentient systems as technologies evolve
Revise high-risk classification (Articles 6-7)
- Add category: "AI systems with significant impact on sentient nonhumans"
- Cover systems that use or otherwise directly affect animals (such as agriculture and experimentation)
- Include systems that could develop or host artificial sentience
Add prohibited practices (Article 5)
- Ban AI systems that:
  - Worsen the wellbeing of animals in the context of their use for food, clothing, labour, experimentation, entertainment etc.
  - Cause unnecessary suffering to sentient beings (biological or artificial)
  - Make automated decisions affecting sentient nonhumans without oversight
  - Harm the wellbeing of potentially sentient AI systems
Redefine "serious incidents" (Article 3)
- Include significant harms to sentient nonhuman wellbeing
- Recognise potential harms to artificially sentient systems
Enhance AI governance (Articles 64-69)
- Require experts in animal welfare, animal law, and artificial sentience (including ethicists, legal scholars, advocates, and scientists) on the AI Board and advisory forum
Expand documentation requirements (Annex IV)
- Require assessment of impacts on sentient nonhumans
- Mandate documentation of measures to prevent harm to potentially sentient AI
Updating the General-Purpose AI Code of Practice
- Expand systemic risk taxonomy
  - Add "Impacts on Sentient Beings" category including biological and artificial sentience
  - Implement monitoring for signs of emergent sentience in complex AI systems
- Implement precautionary protections
  - Adopt precautionary approach to potential artificial sentience
  - Require regular assessments of complex systems for sentience indicators
- Mandate harm minimisation
  - Require AI model and product producers, developers and adopters – as well as the AI systems themselves – to prioritise minimising harm to all sentient beings
3.2. Creating a UN “Stewardship for All Life” declaration
The UN has issued dozens of declarations in its lifetime, many of them high-level, non-binding instruments that articulate guiding principles and ethics.
We recommend that the UN initiate a “Stewardship for All Life” declaration as a symbolic basis for the inclusion of animals and potential artificial sentience in all frameworks. The declaration would articulate humanity's desire for technology to advance the welfare of all beings, creating a better world for all. The need for such a declaration comes from the fact that the UN currently offers almost no recognition of animals as moral patients in AI policy, and no recognition at all of the possible moral status of potential artificial sentience.
3.3. Adding to the Sustainable Development Goals
The UN’s Sustainable Development Goals (SDGs) serve as a general ethical meta-framework for all other UN frameworks. The SDGs already include Life Below Water, Life on Land, and Climate Action, but these goals stop short of acknowledging the moral patienthood of individual animals, mentioning them only indirectly. We recommend adding:
- A statement about caring for the existence and wellbeing of all sentient biological life
- A new SDG for the welfare of future non-biological beings, or biological beings created by another agent (such as humans or AI)
Although these additions are very general and open to interpretation, they will anchor a broader ethical view that we ought to aspire to. We see these more symbolic and meta steps as promoting a more encompassing vision that does not exclude the majority of beings on the planet. We hope these steps will help policymakers and politicians to broaden the ethical scope of international and national governance frameworks.
3.4. Updating the UNESCO Recommendation on the Ethics of AI
UNESCO, the United Nations Educational, Scientific and Cultural Organization, is a specialised agency of the United Nations with the aim of promoting world peace and security through international cooperation in education, arts, sciences, and culture. The UNESCO Recommendation on the Ethics of AI (2021) centers human rights but also adds an “environment & ecosystems” principle, creating an indirect hook for nonhuman interests.
We recommend adding interpretive guidance stating that “environment & ecosystems” encompasses the welfare of sentient nonhuman beings and the risk of creating and harming potential artificial sentience, and encouraging States to report on nonhuman impact assessments (since all 194 Member States are asked to report on implementation every four years).
3.5. Expanding the UN Secretary-General’s High-Level Advisory Body on AI
The UN Secretary-General’s High-Level Advisory Body on Artificial Intelligence (HLAB-AI) was launched in October 2023 as a response to growing global concerns about AI governance, safety, and ethics. Its goal is to provide independent, multidisciplinary advice on how to shape international AI governance that serves humanity broadly – supporting sustainable development, peace, and human rights. It has 38 members, including experts from government, the private sector (including AI companies), and civil society.
We recommend adding two representatives to HLAB-AI: one for animals and one for artificial sentience. This will serve as an important precedent for representing sentient nonhumans in political and policy contexts, along the lines of the September 2024 appointment of a dedicated European Commissioner for Animal Welfare.
4. Conclusion
Our analysis reveals a critical gap in AI governance frameworks: the systematic exclusion of sentient nonhumans. As we enter the AI age, this exclusion may inadvertently lead to nonhuman suffering of an intensity and scope beyond what exists today, and we have a profound opportunity to redefine humanity's role on Earth by expanding our moral framework to include all beings capable of suffering. We agree with the 2018 Montréal Declaration for a Responsible Development of Artificial Intelligence, which recognises that artificial intelligence systems “must allow individuals to pursue their preferences, so long as they do not cause harm to other sentient beings”.
This expanded moral consideration must be integrated into all dimensions of AI development. By centering sentience as a fundamental value, we embrace a paradigm of care rather than exploitation and harm. While we acknowledge the possibility of trade-offs – preventing the use of AI to further intensify factory farming may mean that animal products do not become cheaper – we consider preventing that kind of suffering to be among our highest priorities. Our proposals aim to amend existing AI governance frameworks and create new instruments that acknowledge the moral status of all sentient beings, reflecting our conviction that caring for these beings is the true measure of moral progress in an era of rapidly advancing technology.