Introduction

Inspired by Paul Christiano’s 2019 piece What failure looks like, we sketch a range of ways in which a future with powerful AI may go badly for animals.

We suggest:

  1. At some point in the future, AI is likely to become very powerful (e.g. artificial general intelligence (AGI), transformative AI (TAI), or artificial superintelligence (ASI)).
  2. This point may be soon (e.g. by 2030).
  3. Powerful AI is likely to have a huge impact on animals.
  4. We don’t know what this impact on animals will be, and it could be very bad or catastrophic.

Here are two philosophical assumptions we make for the purposes of this piece:

  1. The moral value and relevance of animals derives from their status as sentient individuals. Their membership of a group (such as a species), their role in or impact on an ecosystem, and their aesthetic value in the eyes of humans fall outside the scope of this piece.
  2. Adopting a pluralistic moral approach, “failure” for animals could refer to a broad range of outcomes, including:
    1. More suffering – more intense, for longer, more animals, more systematic, more irreversible.
    2. More exploitation (non-consensual use of sentient beings as a means to an end), as something separate from suffering – on a greater scale, more irreversible, more and worse rights violations.
    3. Less flourishing – reduced scope for the realisation of wellbeing, freedom, welfare, autonomy, dignity.
    4. Extinction.

The bad futures for animals we describe below are not detailed, not neatly organised, and should not be interpreted as a comprehensive typology. Our aim is instead to increase the salience of this issue, and to encourage discussion and action on the possible negative impacts of powerful AI on animals. (Some good ideas about possible positive impacts can be found in Max Taylor’s piece here.)

Ways things go badly for animals

  1. AI wipes out the biosphere, including all animals. Powerful AI, whether through a sudden takeover or a slower, sustained accumulation of influence, ends up controlling the trajectory of the planet. It prioritises goals that are incompatible with the continued existence of most or all biological life. This could happen abruptly, in the course of achieving a decisive strategic advantage, or gradually, as AI systems take over more decision-making and resource allocation. The biosphere may be cleared to make way for infrastructure, energy harvesting or generation, or other uses of matter and energy that leave no space for living ecosystems. In either case, animals disappear entirely, because their survival conflicts with the AI’s optimisation criteria.
  2. AI takeover with anti-animal values → value lock-in. A powerful AI gains long-term control over society and adopts values that, while not aimed at eradicating animals, cause them to continue to exist in a state of suffering or exploitation. These values could reflect harmful attitudes already present in parts of human culture, such as seeing animals primarily as tools, resources, or aesthetic objects. Once embedded in the AI’s decision-making, these priorities could shape the world for as long as the system remains in control, making it extremely difficult or impossible to shift toward more animal-friendly norms.
  3. AI-enabled human takeover with anti-animal values → value lock-in. A small number of humans (probably from a political, economic, or military elite) use AI – or are persuaded by AI – to orchestrate a coup that is global in scale, effectively establishing a permanent global government. Even though some of these humans hold pro-animal views, and several consider themselves morally motivated, most don't prioritise animals enough to protect or promote their interests in this new world order. A small number of animals are treated less badly, but only as companions or pets for the wealthy, or as objects of aesthetic or historical value: in ornamental duckponds, or in old-fashioned grazing systems. Political opposition to this AI-backed global government is made impossible through advanced surveillance and law enforcement.
  4. Animal advocates disempowered due to repression/poverty. Powerful AI has a transformative impact on the economy, causing mass job loss and disempowering all but a tiny elite, none of whom are particularly pro-animal. Animal advocates lose the bandwidth, energy, money, and time to do anything except provide for themselves and their loved ones. Furthermore, scarcity makes the general public less receptive to pro-animal messaging.
  5. Animal advocates disempowered due to abundance / experience machine. Powerful AI transforms the world for the better in terms of economic growth and reported happiness – but only because humans are wireheading themselves: we have access to such stimulating, pleasurable, and addictive sources of entertainment and meaning that we forget or stop caring about animals, who continue to be exploited. Alternatively, we trust the AI systems running the world to the point that we don’t believe animal exploitation could be a moral problem.
  6. Animal advocates disempowered due to animal-related disinformation. Industries that use animals for food, testing etc. use AI to create extremely sophisticated and persuasive PR campaigns (e.g. showing how well-treated their animals are) that give them an unassailable advantage over animal advocates. Furthermore, the prevalence of deepfakes may discredit real footage of animal abuse, and erode the shared reality necessary for public debate.
  7. AI-accelerated climate change. The main impact on animals from powerful AI is the environmental impact of its development – its water use, land use change, greenhouse gas emissions, and pollution accelerate global heating and habitat destruction. This climate change is likely to lead to the extinction of species, and might amplify wild animal suffering by incentivising r-selection over K-selection among wild animals: far more wild animals may be born and then die painful deaths shortly after birth than would otherwise be the case.
  8. AI-driven ideologies. Advanced AI could transform how beliefs form and spread, collapse modern epistemic safeguards, and make it easier for new ideologies centred on shared delusions to take hold at massive scale (analogous to how the invention of radio helped spread Nazism). These ideologies might not be primarily anti-animal, but they could contain elements that treat animal welfare as irrelevant, justify animals' continued exploitation, or simply make it impossible for animal advocates to continue their work.
  9. AI-enabled panspermia, which increases wild animal suffering. Humans use AI to colonise the galaxy and beyond, spreading biological life everywhere. Because of our current values, we spread wild animals to these other planets to recreate Earth, ignoring the risk of significantly increasing the number of individuals experiencing intense suffering.
  10. AI-enforced libertarian future. AI enables individuals to control their own isolated domains with near-total autonomy. Most choose not to harm animals, but a minority with sadistic preferences cause extreme animal suffering. Their absolute independence prevents anyone from effectively intervening.

Conclusion

We think we (humans) continue to underappreciate the future impact of powerful AI on ourselves and the world in general, and we think we dramatically underappreciate the impact on animals in particular. This is a problem because many animals are sentient and therefore worthy of at least some moral consideration, and they could be harmed a great deal in futures involving powerful AI. There are also far more of them than there are of us.

If you’re “animal-first” – e.g. you’re an animal advocate – it is worth considering what the development of powerful AI may mean for the work you’re doing. Some more detailed work on this question can be found by Max Taylor here and by Jamie Harris here.

If you’re “AI-first” – e.g. if your work involves shaping AI development, governance, or deployment – it is worth asking how your choices might affect animals, and how to reduce the risk of harm. We worry this kind of thinking is not happening nearly enough, although the “AI x animals” field is growing in size and influence. This piece of Max Taylor’s makes the case for bringing about “animal-inclusive AI” in more detail.

There are things we can do to try to reduce the risk of failure for animals as well as for humans. Let’s try to work out what those things are, then do them.

Originally submitted as a project for the Electric Sheep Futurekind course in August 2025, with Nicholas as Alistair’s mentor.

Comments


Thanks for the list, this is an important topic.

I'd just like to point out that life in the wild might be net negative and contain more suffering than happiness (due to a majority of beings dying shortly after birth from hunger and predation). We need more research, but that sounds more likely than not – as your point 9 suggests.

In that case, item number 1 on your list might be a better scenario than what is currently happening, and I am not sure we should spend time fighting against it.

But the rest is risky, yes.

Indeed. I'm personally sympathetic to this kind of view (my ethics are heavily suffering-focused), but we wanted to make this piece pluralistic, and specifically able to accommodate the intuitions of those who think extinction of (one or more species of) wild animals would be very bad.
