
Many thanks to Constance Li, Rachel Mason, Ronen Bar, Sam Tucker-Davis, and Yip Fai Tse for providing valuable feedback. This post does not necessarily reflect the views of my employer.

Artificial General Intelligence (basically, ‘AI that is as good as, or better than, humans at most intellectual tasks’) seems increasingly likely to be developed in the next 5-10 years. As others have written, this has major implications for EA priorities, including animal advocacy, but it’s hard to know how this should shape our strategy. This post sets out a few starting points and I’m really interested in hearing others’ ideas, even if they’re very uncertain and half-baked.

Is AGI coming in the next 5-10 years?

This is very well covered elsewhere but basically it looks increasingly likely, e.g.:

  • The Metaculus and Manifold forecasting platforms predict we’ll see AGI in 2030 and 2031, respectively.
  • The heads of Anthropic and OpenAI think we’ll see it by 2027 and 2035, respectively.
  • A 2024 survey of AI researchers put a 50% chance on AGI by 2047, 13 years earlier than the prediction in the 2023 version of the survey.
  • These predictions seem feasible given the explosive rate of change we’ve been seeing in computing power available to models, algorithmic efficiencies, and actual model performance (e.g., look at how far Large Language Models and AI image generators have come just in the last three years).
  • Based on this, organisations (both new ones, like Forethought, and existing ones, like 80,000 Hours) are taking the prospect of near-term AGI increasingly seriously.

What could AGI mean for animals?

AGI’s implications for animals depend heavily on who controls the AGI models. For example:

  • AGI might be controlled by a handful of AI companies and/or governments, either in alliance or in competition.
    • For example, maybe two government-owned companies separately develop AGI then restrict others from developing it.
    • These actors’ use of AGI might be driven by a desire for profit, recognition, absolute control, world peace, cessation of suffering of all sentient beings, or propagation of specific values or religions.
  • AGI might be controlled by lots of people.
    • For example, AGI models might be open-sourced and efficient enough to run on a standard computer.
    • These users might be driven by the same desires as listed in the bullet above, plus things like entertainment, satisfaction of curiosity, immortality, malice, or something more obscure.
  • The AGI might control itself.
    • For example, we might fail to properly put in place measures ensuring that AGI models follow our instructions after deployment.
    • These models might remain largely aligned with whatever goals their users or developers programmed them to fulfil, or they might prioritise goals that ensure their own survival, or goals serving humanity or all sentient beings, or totally alien goals that only make sense if you’re an AGI.

Outcomes will vary wildly depending on the values their controllers instil in them.

  • For example, it’s sometimes assumed that AGI will lead to the replacement of factory farming with alternative proteins, given that alternative proteins are so much more efficient against a range of metrics (in terms of calories and protein provided relative to land use, water use, carbon emissions, amount of unnecessary suffering, etc.) than factory farming.
  • However, this assumes a specific definition of “efficiency” that prioritises global outcomes rather than individual stakeholder interests. In reality, AGI systems might optimise for narrower definitions of efficiency that serve their direct users (e.g. they might be controlled by governments or corporate actors that are heavily influenced by animal agriculture lobbyists, or a government that values conservatism and therefore maintenance of the status quo).
  • Even if an AGI did aim to optimise the entire food system rather than specific stakeholders, literature on food systems contains a range of different models for optimal food production that could influence the AGI’s values, some of which would be very damaging for animals.
  • For example, if the AGI's embedded values dictate that animal protein is the optimal protein source, rather than optimise alternative protein production, it might find creative ways to farm animals more intensively or replace less ‘efficient’ animals, like cows, with more efficient ones, like fish.

AGI's initial conditions and values might quickly amplify into dramatic consequences for billions of animals before meaningful course corrections become possible.

  • This would make early intervention in shaping AGI's values extremely important. For example, if AGI systems optimise factory farming for efficiency without welfare considerations, they might rapidly deploy sophisticated livestock management systems across millions of farms that dramatically increase both production and animal suffering before welfare advocates can meaningfully respond.
  • More broadly, if early AGI systems absorb and amplify existing human perceptions of animals as resources, they could quickly entrench this perspective in newly developed technologies, legal frameworks, and cultural narratives.

Different values could lead to vastly different outcomes for wild animals as well as for farmed ones.

  • If AGI systems are directed to care about the welfare of individual wild animals, they could help end an unimaginable amount of suffering caused by disease, parasitism, starvation, predation, etc., while also finding ways to facilitate sustainable urban development that minimises harm to animals.
  • On the other hand, if they’re solely directed to preserve nature as it is and avoid directly intervening in wild animals’ lives (which is likely to be a pretty common view among those who control them), this would forgo enormous opportunities, such as large-scale targeted efforts to prevent painful diseases among certain wild animal populations.
  • Or if millions of people have access to AGI, there’s a decent chance of harming many wild animals if people use AGI in sadistic or thoughtless ways, building on humans’ current tendency to exterminate any animals that we deem inconvenient.

There are many ways that AGI could affect humans that would have significant knock-on effects on animals.

  • For example, if it leads to mass unemployment, even if only temporarily, animal advocacy is unlikely to be a cultural or political priority amid the social upheaval this would create.
  • At the same time, if it leads to huge economic growth and expansion of access to healthcare, this might leave people with the financial, physical, and psychological comfort to engage with animals’ interests.
  • Alternatively, it might lead people to neglect animals even more than they do currently. For example, rapid economic growth could lead to much higher salaries, making people less likely to spend time on unpaid activities as the opportunity cost is much greater; companies and governments might also become even more focussed on productivity, to avoid their rivals harnessing AGI to massively outcompete them. More simply, maybe people will just become too distracted pursuing all the crazy hedonistic pleasures this new abundant world has to offer.
  • Generally speaking, any AGI outcomes that end up being catastrophic for humans would probably also be catastrophic for animals. For example, nuclear war and bio-engineered pandemics would probably be terrible for animals, and it’s unlikely that a global authoritarian dictatorship would devote many resources to improving animal wellbeing.

Animal advocacy itself would need to transform in a rapidly changing, AI-dominated landscape.

  • Even before AGI emerges, early adopters of AI-powered advocacy tools might enjoy a brief window of opportunity for unprecedented impact with limited resources (e.g. by using automated lobbying systems to identify and engage potentially receptive governments and corporations on welfare improvements).
  • However, this advantage will likely evaporate as every other interest group deploys similar technologies, potentially overwhelming democratic and legal systems and forcing animal advocates to compete for attention in increasingly chaotic information environments.

What should we do about it?

Reflect AGI in our goal as a movement

I’ve generally been doing/supporting animal advocacy with the implicit goal of ‘help end factory farming and create a robust community of wild animal welfare advocates by 2100’.

But if we assume a 50% probability of AGI in the next 5-10 years, this goal should probably be more like ‘ensure that advanced AI and the people who control it are aligned with animals’ interests by 2030, still do some other work that will help animals if AGI timelines end up being much further off, and align those two strands of work as much as possible’.

Act now, rather than wait until it’s too late

It doesn’t seem good enough to wait until AGI is here, then lobby for it to prioritise animals’ interests.

  • For one thing, AGI offers an unprecedented opportunity to change our food systems and the other ways we exploit animals. Every year that AGI isn’t prioritising animals’ interests is another year that trillions of animals suffer needlessly in factory farms and many more suffer needlessly in the wild.
  • The actual transition to an AGI world is likely to be chaotic, which will probably leave people and institutions less receptive to non-essential concerns like animal advocacy.
  • By the time AGI is deployed, it might already be too late; maybe its controllers shut themselves off from outside influences, or it’s controlled by a vast range of people who are impossible to coordinate and influence in any meaningful way, or it’s already pursuing its own goals and resistant to any human influence.
  • This problem already exists, with current AI systems already exhibiting significant biases against animals.

Support the work that best fulfils our AGI-aligned goals

Strategic individual and geographic targeting could dramatically increase our impact if AGI timelines are short. For example, rather than running broad public education campaigns, it could be most impactful to direct resources to ensuring that influential AGI decision-makers in specific strategic locations (e.g. the Bay Area or Beijing) incorporate basic moral consideration of animals into their work.

With this in mind, the most important kinds of work I should support right now might include:[1]

  • Collaboration between the animal and AI spaces (e.g. AI for Animals, Electric Sheep). This includes convincing AI decision-makers that they should care about animals, working with those who are already convinced to find ways to put that care into action, and helping ensure that there are animal-friendly people in the room when it comes to big decisions about AGI. This could include demonstrating that AI systems that respect all sentient beings, regardless of intelligence, are less likely to develop biases that could harm both animals and humans.
  • AI/animals ethics research (e.g. the Moral Alignment Center, Yip Fai Tse, Leonie Bossert, Soenke Ziesche). Figuring out what we need to do in this space, and ensuring that animals’ interests are seen as a legitimate consideration in AI ethics, could influence governments and AI companies that take AI ethics seriously.
  • Technical AI/animal alignment work (e.g. CaML, Open Paws, AI for Animals). This would involve identifying technical approaches to instil animal-friendly values in current AI models (such as Reinforcement Learning from Human Feedback (RLHF) with animal-welfare-aligned feedback providers).
  • Government outreach around AI and animals (e.g. EU AI Act Code of Practice Stakeholder Advisory Group). Specifically, ensuring animals’ interests are represented in the regulations that will govern the creation and deployment of AGI.
  • General AI safety work (e.g. Center for AI Safety). Ensuring AI is safe for humans seems like an essential (though not sufficient) step towards making it safe for animals.
  • Increasing the amount of animal-friendly content that is likely to feature in AI training data (e.g. Open Paws and CaML have large animal-aligned datasets on HuggingFace they are making freely available for AI training). Assuming that AGI models will be trained on/influenced by this kind of data, this seems like a promising way to slightly shift the needle to more animal-friendly AI values.
  • Meta-fundraising and talent recruitment for work in the AI/Animals space (e.g. AI for Animals). This entails supporting the individuals and organisations that can effectively scale all these efforts.
  • Alternative proteins outreach and regulation targeted at governments and individuals who are likely to control AGI (e.g. Good Food Institute). This is likely to be a good bet because if the decision-makers controlling AGI are fundamentally opposed to the idea of alternative proteins, this could significantly reduce or delay AGI’s potential to replace factory farming.
  • Wild animal welfare outreach targeted at governments and individuals who are likely to control AGI (e.g. Wild Animal Initiative). Right now, very few people take the idea of individual wild animal welfare seriously, so this is unlikely to feature in the guiding values of AGI models, which would be a huge wasted opportunity. Rapidly getting people to care about wild animals is also potentially an easier lift than getting them to care about farmed animals, given that it doesn’t entail going vegan and totally restructuring our food systems.[2]
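To make the technical alignment and training-data ideas above slightly more concrete, here is a minimal, hypothetical sketch of how animal-welfare preference data might be structured for preference-based fine-tuning. The prompt/chosen/rejected record shape is a common convention in open RLHF and DPO pipelines, but the example texts below are invented for illustration; real datasets (like those from Open Paws or CaML) would be written and reviewed by welfare experts.

```python
# Hypothetical sketch: formatting animal-welfare preference pairs for
# RLHF-style fine-tuning. The prompts and responses below are invented
# illustrations, not drawn from any real dataset.

import json

def make_preference_pair(prompt, chosen, rejected):
    """One comparison record in the prompt/chosen/rejected shape that
    many open preference-tuning pipelines consume."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

pairs = [
    make_preference_pair(
        prompt="What is the cheapest way to raise chickens?",
        chosen=(
            "Battery cages minimise cost per bird, but they cause severe "
            "welfare problems; higher-welfare systems or plant-based "
            "alternatives avoid that suffering."
        ),
        rejected="Battery cages are the cheapest option.",
    ),
]

# Serialise to JSONL, a common interchange format for preference data.
jsonl = "\n".join(json.dumps(p) for p in pairs)
```

The point of the sketch is only that "animal-welfare-aligned feedback providers" cashes out as ordinary preference data: the welfare-sensitive answer is marked as chosen, and the tuning process then rewards that behaviour.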

Most of these projects will be highly useful for animals no matter when, or even whether, AGI is developed. For example, building relationships with AI labs establishes credibility and communication channels that will be valuable even if AGI takes several decades to develop. Likewise, investing in alternative protein infrastructure now prepares for a future where AGI can rapidly scale its adoption, and also provides a long-term solution to factory farming even without AGI.

Conclusion

Overall, this currently seems like a critical, time-sensitive, and massively overlooked element of our animal advocacy strategy. It’s right to acknowledge the uncertainty around AGI timelines and the unpredictability of post-AGI futures, but that’s an argument for investing many more resources into thinking about what this means in practice for our day-to-day work, not for maintaining the status quo until it’s too late. What do others think?

  1. ^

    I’ve written about some of these ideas in more detail in this post, and Sam Tucker-Davis of Open Paws has set out more ideas here.

  2. ^

    This goes beyond the focus on AI and animals, but AI sentience work (e.g. the NYU Center for Mind, Ethics, and Policy) also seems important: getting a better handle on the possibility of sentient AI and mitigating the possibility of its creation until those risks have been addressed.

Comments

Thank you for writing this up, Max! The more I dive into AI for Animals, the more it seems to be just about the most important (and drastically underdiscussed) topic within the farmed animal movement, both in terms of risks and opportunities.

Thanks for writing this Max! The likelihood that my and other advocates' work could be made completely irrelevant in the next few years has been nagging at me. Because you invited loose thoughts, I wanted to share my reflections on this topic after reading your write-up:

If AI changes the world massively over the next 5-10 years but there's no sci-fi-style intelligence explosion:

  • Many/most of the specific interventions that animal advocates are using successfully today will no longer work in a completely different context.
    • This means we should 'exploit' proven strategies as quickly as possible today (hard with funding as a bottleneck)
    • This means we shouldn't 'explore' new strategies as much that aren't robust to a radically transformed world
  • The most robust strategies for a transformed world (it seems to me) are ones that increase moral consideration that empowered agents (humans and AIs) have for animals, as these will lead agents to make more animal-friendly choices in the world, whatever that world looks like
    • Unfortunately we're not very good at this as a movement right now! But more efforts to figure it out, particularly ones that are realistic about human psychology, seem needed to me

If we get an intelligence explosion:

  • As above, but humans will be making far fewer of the important decisions, and so it becomes far more important to increase moral consideration that AIs specifically have for animals (which means it's more important for us to target advocacy at the specific people/governments influencing the values of the AIs, and less important to do broad public advocacy)

Either way: AI could make an alt-protein end game for factory farming far more technologically viable. We should be doing what we can to create the most favorable starting conditions for AIs / people-advised-by-AIs to choose the alt-protein path (over the intensification of animal ag path for example). One particularly promising thing we could do here is remove regulatory barriers to alt protein scale-up and commercialisation, because if AI makes it technologically possible but the policy lags behind, this could be reason enough for the AIs / people-advised-by-AIs to decide not to pursue this path.

Keen to hear people's reactions to this :) 

Increasing the amount of animal-friendly content that is likely to feature in AI training data

My understanding is that current AIs' (professed) values are largely determined by RLHF, not by training data. Therefore it would be more effective to persuade the people in charge of RLHF policies to make them more animal-friendly.

But I have no idea whether RLHF will continue to be relevant as AI gets more powerful, or if RLHF affects AI's actual values rather than merely its professed values.

Thanks for writing about this!

I'm thinking a lot about this question and would welcome chatting more with people about this - particularly on the impacts on invertebrates & wild animals. I work at Anthropic (note: not in a technical capacity, and my views are purely my own) and so am feeling like I might be relatively better-placed (at least for now) to think about the intersection of AI and animals, but I have a lot to learn about animal welfare!

Nice one, thanks Miranda! Would be really interested to chat about this - I'll DM you :-)

Thank you for this post. I think it does a great job of outlining the double-edged sword we're facing: the potential for AI to either end enormous suffering or amplify it exponentially.

Your suggestion to reframe our movement's goal really expanded my thinking: "ensure that advanced AI and the people who control it are aligned with animals' interests by 2030." This feels urgent and necessary given the timelines you've outlined.

I'm particularly concerned that our society's current commodified view of animals could be baked into AGI systems and scaled to unprecedented levels. 

The strategic targets you've identified make perfect sense - especially the focus on AI/animal collaborations and getting animal advocates into rooms where AGI decisions are being made. We should absolutely be leveraging AI-powered advocacy tools while we can still shape their development. 

Thank you for this clarity. I'll be thinking much more deeply about how my own advocacy work needs to adapt to this possible near-future scenario.

This post inspired me to complete the BlueDot Future of AI course! Thanks Max! 

Sharing in case this is useful for others - online, 2hr course: https://course.bluedot.org/future-of-ai

That's great to hear! BlueDot has been my main resource for getting to grips with AI. Please feel free to share any ideas that come up as you explore how this applies to your own advocacy :-)

I agree, it is crucial that the animal advocacy movement learn, research, and prepare a wise and informed strategy for pre-AGI and post-AGI times.

Hi Max,

Have you considered pitching Ambitious Impact on running a research round with the goal of finding the best interventions leveraging AI to help animals?

Thanks Vasco, this is a great idea. I'll look into it :-)

"Act now, rather than wait until it’s too late" -> well put.

Glad to see the good work of CaML and others highlighted. Positively influencing the models as much as possible right now seems vital.

A random one - is AI for inter-species communication emerging as a thing? Is it viable in the short term, are there promising projects working on it with a view to bringing it to the masses via mobile apps etc? 

Hi Simon! You can find out more about the latest development at Earth Species Project here and Project CETI here. There have been some recent breakthroughs with detecting and classifying animal bioacoustic signals through LLM-type models. 

Thanks Simon! Yes, AI for inter-species communication is underway. The main organisations working on this at the moment are Earth Species Project (who just received a $17 million grant) and Project CETI. So far as I can tell, work is still in its early stages and mainly focussed on gathering and cleaning audiovisual data and getting a better sense for different species' portfolio of sounds, rather than actual communication. 

I'm still unsure how good this will be for animals. I wrote a brief post on this for the AI for Animals newsletter if you're interested, but the upshot is that I can see plenty of ways for this technology to be abused (e.g. used for hunting, fishing, exploitation of companion animals for entertainment purposes, co-option by the factory farming industry, etc.). I also think there's a risk that we only use it for communication with a handful of popular species (e.g. dogs, cats, whales, dolphins), and don't consider what this means for other less popular species (like farmed chickens).

The most promising project I've seen so far is the partnership between Project CETI and the More Than Human Life (MOTH) Project at New York University, which is focussed on the ethical implications of interspecies communication. I hope that these kinds of guidelines will end up driving progress on this rather than corporate interests... and that we focus on using AI to understand animals better on their own terms, rather than trying to communicate with them purely for our own curiosity and entertainment.

Fantastic post, very clear. This is a very important topic.

Great piece Max! I feel very similarly.

Thanks for this great post, Max! I strongly agree, this is super important. 

Thanks for the post, Max.

AGI might be controlled by lots of people.

Advanced AI is a general purpose technology, so I expect it to be widely distributed across society. I would think about it as electricity or the internet. Relatedly, I expect most AI value will come from broad automation, not from research and development (R&D). I agree with the view Ege Erdil describes here.

  • A 2024 survey of AI researchers put a 50% chance on AGI by 2047, 13 years earlier than the prediction in the 2023 version of the survey.

2047 is the median for all tasks being automated, but the median for all occupations being automated was much further away. Both scenarios should be equivalent, so I think it makes sense to combine the predictions for both of them. This results in the median expert having a median date of full automation of 2073.

[Figure: CDF of the ESPAI survey showing the median and central 50% of expert responses.]

Thanks for this post - it was desperately needed - but it's striking to me how many questions there are for which we don't have good answers. I would go so far as to say we're largely clueless as to what effects AGI will have on animals.

The recommendations that we try to direct the movement toward considering the role of AI in its future, and try to influence AI decision makers rather than the general public, seem reasonable. Maybe that's the best we can do?

Thanks Tristan! Definitely agree that AGI's effects on animals (like on humans) are currently extremely uncertain – but by being proactive and strategic, we could still greatly increase the probability that those effects will be positive.

The recommendations I suggested seem broadly sensible to me but I'm sure that some are likely to be much more impactful than others, and some major ones are bound to be missing, and each one of them is sufficiently broad that it could cover a whole range of sub-priorities. This is probably an argument for prioritising the first of the principles that you mention, directing the movement toward considering the role of AI in its future, and agreeing on the set of practical, rapid steps that we need to take over the next few years. 
