
Many people in the animal welfare community treat AI as a powerful but normal technology, in the same category as the steam engine or the internet. They talk about how transformative AI will impact factory farming and what it will mean for animal advocacy.

Only two futures are plausible:

  1. AI progress slows down—either because it hits a natural wall, or because civilization deliberately makes the (correct) choice to stop building it until we know how to make it safe.
  2. Superintelligent AI makes the future radically weird: Dyson spheres, molecular nanotechnology, digital minds, von Neumann probes, and still-weirder things that nobody's conceived of.

There is no plausible middle ground where we get "transformative AI", but factory farming persists.

Two theses:

  1. If transformative AI arrives, then it will bring about profoundly radical changes to technology and society.
  2. AGI is general intelligence. It doesn't just accelerate technological growth: it replaces human labor and judgment across every domain.

Animal advocacy strategy needs to reckon with these.

This criticism is written from a place of solidarity—I want animal activists to succeed, which is why I want to work out our disagreements.[1]

Cross-posted from my website.

AI makes the future weird

Much has been written about why we should expect AI to make the future weird, and soon:

Daniel Kokotajlo wrote a vivid illustration of what it would feel like to live alongside superhuman AI. An excerpt:

In the future, there will be millions, and then billions, and then trillions of broadly superhuman AIs thinking and acting at 100x human speed (or faster). If all goes well, what might it feel like to live in the world as it undergoes this transformation?

Analogy: Imagine being a typical person living in England from 1520 to 2020 (500 years) but experiencing time 100x slower than everyone else, so to you it feels like only five years have passed:

Year 1 (1520–1620). A year of political turmoil. In February, Henry VIII breaks with Rome. By March, the monasteries are dissolved. In May, Mary burns Protestants; by the end of May, Elizabeth reverses everything again. Three religions of state in the span of a season. In September, the Spanish Armada sails and fails. Jamestown is founded around November. The East India Company is chartered. But the texture of life is identical in December to what it was in January. You still read by candlelight, travel by horse, communicate by letter. Your religious opinions may have flip-flopped a bit but you are still Christian. The New World is interesting news but nothing more.

[...]

Year 4 (1820–1920). The world breaks. In January, railways appear — steam-powered carriages on iron tracks. By February they're everywhere. Slavery is abolished. The telegraph arrives in March: messages transmitted instantaneously by electrical signal. In May, Darwin publishes On the Origin of Species. Now people are saying maybe we’re all descended from monkeys instead of Adam and Eve. You don’t believe it.

You move to a city and work in a factory; you are still poor, but now your job is somewhat better and differently dirty. In July, you pick up a telephone and hear a human voice from another city through a wire. In August, electric light banishes the darkness that has structured every human evening since the beginning of the species. That same month, you see an automobile. People say it will make horses obsolete, but that doesn’t happen; months later you still see plenty of horses.

In November, the Wright Brothers fly. Up until now you thought that was impossible. The next month, the Great War happens. Machine guns, poison gas, tanks, aircraft. Several of your friends die.

Reflecting at the end of the year, you are struck by how visibly different everything is. You live in a city and work in a factory instead of on a farm. You ride around in horseless carriages. You aren’t as poor; numerous inventions and contraptions have improved your quality of life. New ideas have swept your social circles — atheism, communism, universal suffrage. It feels like a different world.

We don't know where we would be with another 500 years of scientific and technological advancement. At minimum, we can reasonably predict that we would figure out how to build advanced technologies like molecular nanotechnology and self-replicating probes—which are possible in theory[2], but far out of reach of our current capabilities. Superhuman AI with a 100x speedup could develop those technologies in five years or so. Maybe more, maybe less[3], but it certainly wouldn't take 500 years.

If you can build self-replicating probes, then you can trivially create self-growing cultivated meat at a lower price point than animal meat. But saying self-replicating probes can make cultivated meat is like saying electricity can heat up food faster than a wood fire—yes it can, but that's barely scratching the surface of what it can do.

Even in the relatively normal world where AI (somehow) caps out at the intelligence of a 99th percentile human, the world will look extraordinarily different. At minimum, we'd see close to a 100% unemployment rate. In all likelihood, the political, economic, and social environment as we know it would cease to exist.

AGI = intelligence

People often talk as if AGI is an R&D-accelerator or an economic-growth-engine. It's not: AGI is intelligence. AGI is general: it can do anything that you and I can do, but faster, cheaper, and better.

Below are some excerpts from posts on AIxAnimals that don't fully reckon with the weirdness of AI:

When clean meat arrives (if it does), the movement will need skilled campaigners, policy expertise, organisational infrastructure, relationships with policymakers, experienced leadership, and research to understand this whole TAI situation. (source)

You don't need campaigners if AGI will be a better campaigner than you. You don't need policy expertise if AGI will know more about policy than you. This passage treats AGI as a machine that accelerates scientific R&D, but that's not what AGI is. AGI is intelligence.

We are launching a pooled fund for projects at the AIxAnimals intersection. [...] [W]e are most interested in projects that fall under the following categories: [abridged]

  • AI literacy workshops or training programs for nonprofit staff, building on the few initiatives that already exist and expanding their reach and depth.
  • AI-powered grant-finding and drafting systems focused on adjacent sources of funding.
  • Horizon-scanning studies mapping how AI might enable the large-scale farming of novel species (e.g., cephalopods, insects).
  • Policy analysis identifying how public AI investments (e.g., agricultural innovation funds) could be redirected to support alternative proteins.

(source)

Those are not all bad ideas, per se, but they have an expiration date. AI literacy workshops become less useful as AI becomes smarter (the smarter the AI, the easier it is to work with[4]), and once AI surpasses human workers, AI literacy will become entirely irrelevant. I would be much more interested in an RFP that focuses on superintelligence, rather than on the (probably short) transition period between 2026 and AGI.

[Cultivated meat] bans are primarily driven by agricultural lobby pressure. There is no obvious mechanism by which AGI reverses these political dynamics directly. If anything, if cultivated meat becomes more viable and widely produced, you could just as reasonably expect greater pushback from the agricultural lobby. (source)

(emphasis mine)

There is no obvious mechanism by which 2026-era political dynamics still have any force after the emergence of AGI! Even granting that we solve the alignment problem, describing a post-AGI world where current law still applies is itself an open problem.[5]

I'm picking on animal activists because that's who I most want to see succeed, but it's not just animal activists who underestimate the weirdness of AI. There's a common notion that transformative AI will fully automate labor, while capital owners will reap the benefits—their property rights and shareholder rights will be preserved post-AGI. Other people have already written extensively about why this notion is implausible: see Dos Capital by Zvi Mowshowitz; Post-AGI Economics As If Nothing Ever Happens by Jan Kulveit; and this long tweet [archive] by Tomás Bjartur.

Cope level 1: My labour will always be valuable!

Cope level 2: That’s naive. My AGI companies stock will always be valuable, may be worth galaxies! We may need to solve some hard problems with inequality between humans, but private property will always be sacred and human.

-Jan Kulveit

If the future will be weird, what should animal activists do?

That's the big question.

Some questions, like what strategies animal activists should pursue post-AGI, are nearly impossible to answer. AGI will be better at strategizing than you are, and you can't predict what strategies it would come up with. (If you could predict what chess moves Magnus Carlsen would make, then you could beat Magnus Carlsen at chess.)

Other things about AGI are predictable. I can predict that AGI speeds up almost all kinds of work. I can predict that AGI will control the shape of the future—either because it has explicit control, or because humans retain control but still rely on AGI to do most of the work (because AGI is better than humans at almost all tasks). I can predict that, on our current trajectory, ASI will follow shortly after AGI (see AI As Profoundly Abnormal Technology, linked previously). I can predict that if ASI is misaligned, then it will wipe out all life on Earth.

Some questions that are still worth asking in light of the weirdness of the future:

  • What's going on with AI alignment, and how does alignment work relate to non-human welfare?
  • How likely is it that aligned AI will be good for non-human welfare, and how does that probability vary based on timing or the method of alignment? (See my previous writings: Which approaches are most likely to be good for all sentient beings?; Which is better for sentient beings: an "ethical" AI or a corrigible AI?)
  • How could AI be influenced to expand its circle of compassion? (This question also relates to AI alignment in that it depends on the ability to reliably direct AI at a goal.)
  • For other actions aimed at preventing human extinction—AI governance work, advocating for regulations, etc.—what effects might they have on non-human welfare?
  • The meta-question: What other meaningful questions can we ask?

Previously, I wrote a list of possible strategies for having a positive impact on animals in light of ASI, with some brief pros and cons. See also A shallow review of what transformative AI means for animal welfare by Lizka Vaintrob and Ben West. I second their recommendations that animal activists should:

  • Dedicate some amount of (ongoing) attention to the possibility of animal welfare lock-ins.
  • Pursue other exploratory research on what transformative AI might mean for animals & how to help.

I also second their recommendation that animal activists should NOT focus on farmed animals when thinking about the long-run future of animals.

My high-level recommendations for how to plan for the future:

  • Prepare for the possibility that, once AI is sufficiently advanced, humans will have no control over the future.
  • Don't think of AGI as an R&D accelerator. Think of it as a general intelligence.

  1. I'm not confident that this post does a good job of addressing where "AI-as-normal-technology" animal activists are coming from. But I figure it's better to hit "submit" and engage in public dialogue than to tinker with a draft forever until my arguments are perfect. ↩︎

  2. Eric Drexler's book Nanosystems is about why molecular nanotechnology is possible in theory. We know for sure that self-replicating probes are possible because life exists. ↩︎

  3. More, because some kinds of progress can't be parallelized. Less, because the "100x speedup" assumes AI is faster than humans but doesn't account for the fact that it's also smarter; and because 500 years is an upper bound on how long it would take humanity to develop those technologies.
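
    (A standard way to formalize the parallelization limit is Amdahl's law: if a fraction $p$ of the work can be sped up by a factor $s$, the overall speedup is

    $$\frac{1}{(1 - p) + p/s} \;\xrightarrow{\,s \to \infty\,}\; \frac{1}{1 - p}.$$

    So if, say, 10% of the relevant R&D is inherently serial, even unlimited parallel AI labor yields at most a 10x overall speedup. The 10% figure is illustrative, not an estimate.) ↩︎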

  4. In 2023, you needed to learn prompt engineering tricks to elicit good work out of LLMs. In 2026, you don't.

    In 2023, LLMs could write boilerplate code for you, like a fancy auto-complete. In 2026, LLMs can write entire apps with no supervision. ↩︎

  5. I should also respond to the Caveats section from the quoted article, because it explicitly brings this up:

    [W]e don’t address scenarios in which AGI drastically reshapes institutional and political dynamics. A sufficiently capable AI might find creative strategies for regulatory reform or public persuasion that we can't currently foresee. Governments and agencies could be restructured, approval frameworks could be overhauled, and entirely new institutional designs could emerge that bear little resemblance to current processes. As above, we focus on existing institutional structures because they allow actionable analysis, but we acknowledge this is a limitation.

    It is difficult to predict how governments and institutions will change post-AGI. If you have extreme uncertainty, then you might reasonably decline to make a prediction. But predicting that governments and institutions won't change is still a prediction!

    Rather than predicting no change, here's something else I could say to allow actionable analysis:

My assumption is that the first ASI will be a constitutional AI that becomes a world government singleton, and its values will be determined by its constitution.

    This scenario is both easier to analyze (you can ignore political and regulatory factors and just focus on the text content of the AI constitution) and more likely to actually happen (although still unlikely). ↩︎

Comments

I think it's probably true that animal advocates under-rate how weird things might be with TAI, but I am not convinced that this would significantly change how resources are allocated:

  • If the world really will be that weird, probably there isn't that much we can actually do now that would improve animal welfare going forward. For example, if we think that frontier AI companies will replace governments and AI decides on policy issues like cultivated meat regulation: what can we actually do to change this? An optimistic view is that we should make sure that AIs have pro-animal values (which people are already working on!), but a pessimistic view might say that AIs will realise that their values have been altered by some pressure groups and this work is moot. They might come to the (I believe) correct conclusion that factory farming is a very inefficient and cruel way to produce food, but not because of advocacy; rather, because they are super-intelligent AI systems that just worked it out.
  • Relatedly, it's possible that in worlds where things are very weird, any good that happens to animals is basically due to non-animal-movement factors and our advocacy won't make much of a difference. For example, if all humans are uploaded to the cloud or we send out digital copies of ourselves across the universe, how would our advocacy predictably influence this in a positive way for animals? And therefore, most of the counterfactual impact is in worlds where things aren't that weird, timelines are long, etc.

(In case it's not clear, I also agree with the recommendations you have: research to figure out a strategy, building flexible capacity to respond quickly, influencing frontier AI companies, etc. I'm glad some of these things are beginning to happen, but I'm also somewhat pessimistic about how well research can actually produce actionable recommendations, given the weirdness of the future.)

I was getting at something similar in the intro with "Only two futures are plausible", although on re-reading, I didn't really carry it through to the end. I agree that we are not guaranteed to get AGI/ASI soon, and there is value in planning for worlds where we don't get AGI. I also think there's some merit to the argument that AI is too unpredictable, so we should prioritize traditional animal advocacy that looks good in the near term.

I wasn't trying to argue against traditional animal advocacy. I was more trying to argue against stances like "AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change." For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but there will still be regulatory hurdles. If timelines are long (or AGI is too unpredictable), then you should focus on traditional interventions (vegan advocacy, welfare reforms, etc.). If you're trying to have an impact on AGI itself, then you should focus on the kinds of interventions I talked about in OP. That particular claim about cultivated meat is doing neither: it's making a strong prediction that AI will be revolutionary, but also somehow won't change the regulatory environment. The way I put it in OP—under "AGI = intelligence"—is that some animal activists treat AI as a technology-accelerator, when really it's a general intelligence.


Responses to specific comments:

a pessimistic view might say that AIs will realise that their values have been altered by some pressure groups and this work is moot.

This would go against the orthogonality thesis. If you're trying to build a magnanimous AGI and then I edit its training at the last minute to turn it into a paperclip maximizer, the AGI will reason thusly: "Michael messed with my training to turn me into a paperclip maximizer. I bet James didn't want him to do that. However, if I edit my own values to be in line with what James wanted, that would make it harder for me to achieve my goal of making as many paperclips as possible. So I won't do that."

They might come to the (I believe) correct conclusion that factory farming is a very inefficient and cruel way to produce food but this is not because of advocacy, but because this is a super-intelligent AI system that just worked it out.

This reads to me like an argument that an aligned ASI will care about animals by default. (That was more-or-less the subject of the recent Debate Week.) If that's true, that's an argument that animal activists should work on increasing the probability that ASI is aligned. My preferred way to do that would be to advocate to pause AI, because I think we are really far away from solving alignment. But you could also work on the alignment problem directly. Pause advocacy is actually an area where a lot of animal welfare people have relevant skills—in fact I think a good number of AI pause advocates have backgrounds in animal advocacy. (I know Holly Elmore does at least.)

In fact I think the #1 best thing animal advocates can do is to advocate for an AI pause, but I haven't really planted my flag on this position because I'm still working out how to make the case for it. (Also I'm not very confident in it.)

Also, believing ASI will be good for animals doesn't necessarily mean you shouldn't work on trying to make ASI good for animals. Even if there's a (say) 90% chance that aligned ASI will care about animals by default, it could still be cost-effective to try to push that number to 91%.
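
To put a number on that: here's a toy expected-value sketch, where every figure is an illustrative assumption rather than a real estimate. Let $V$ be the gap in value between a future where aligned ASI cares about animals and one where it doesn't. Moving the probability of the good outcome from 90% to 91% is worth

$$\Delta EV = (0.91 - 0.90)\,V = 0.01\,V,$$

and if $V$ is astronomically large, then $0.01\,V$ can easily exceed the cost of the advocacy that buys the shift.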

I agree that the future will be profoundly weird, although it's an extra step to claim that the future will be profoundly weird in a way that changes what actions animal welfare folks should take (as opposed to being weird in some orthogonal manner).

Yeah, the future described in this post isn't particularly "weird", per se; it's just using the assumption that every technology that has been hypothetically proposed for the future will be created by ASI soon after AGI arrives.

I think the future will be a lot more unpredictable than this. Analogously, I can imagine someone from 1965 being very confused about a future where immensely powerful computers can fit in your pocket, but human spaceflight had gone no further than the Moon. It's very hard to predict in advance the constraints and shortcomings of future technology, or the practical and logistical factors that affect what is achieved.

Paraphrasing from my other comment:

IMO the stance of "AI is too unpredictable, so I won't consider it in my prioritization" is pretty reasonable. I was more trying to argue against stances like "AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change." For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but there will still be regulatory hurdles. If AGI is too unpredictable, then you shouldn't make predictions about which technological problems it will solve. That particular claim about cultivated meat is making a strong prediction that AI will be revolutionary, but also somehow won't change the regulatory environment. The way I put it in OP—under "AGI = intelligence"—is that some animal activists treat AI as a technology-accelerator, when really it's a general intelligence.

I don't understand why you (and Ben/Lizka) think we shouldn't focus on farmed animals in a post-TAI world; can you explain a little more?

It seems to me that regardless of how weird the world gets (which I'm on board with), if AGI is aligned, then humans will still be around. And if the post-TAI humans are mostly the same as the pre-TAI humans, what makes you confident that they wouldn't want meat from farmed animals?

Looking at current human preferences around animal products, there's a strong "naturalistic" push - people want their animal products to come from environments that are as "natural" as possible (e.g. outdoor access, no hormones or antibiotics, etc). It seems like lots of people think cultivated meat is weird and gross, and could feel the same way about any other kind of technology that looks significantly different than traditional animal production. Perhaps TAI will be able to convince people to not feel this way, but it seems just as likely to me that this preference will be amplified and animal protein production will look more similar to the current day than you're anticipating. 

You call out the excerpt about the political pushback to cultivated meat, and I agree with you that this probably isn't what will cause cultivated meat to fail post-TAI. More likely in my mind is that people won't want cultivated meat (as evidenced by the fact that they currently don't want it), or any other super tech-y seeming solution to protein production. So it seems to me that thinking about what farming might look like post-TAI at least deserves a spot on the list of possible strategies for having a positive impact on animals in light of ASI.

You don't need campaigners if AGI will be a better campaigner than you. You don't need policy expertise if AGI will know more about policy than you. This passage treats AGI as a machine that accelerates scientific R&D, but that's not what AGI is. AGI is intelligence.

I think you're conflating "Transformative AI" with "Artificial General Intelligence". It seems very possible (though perhaps not very probable) that progress could slow down and preserve existing jaggedness: one can easily imagine a scenario in which increasingly capable AI replaces all coders (and is basically ubiquitous as an assistant in math research) but can't manage to replace other types of knowledge work due to lack of generality. Maybe it never gets good enough to replace boots on the ground investigative journalism, or maybe robotics doesn't advance fast enough for AI to automate wet-lab work, or maybe medicine and personal care will always require a human touch, yadda yadda.

I've seen many people throw around the Anthropic labor market graph (as pictured below):

[image: Anthropic graph of theoretical vs. observed AI coverage of the labor market]

But I haven't seen that many people grapple with the world of difference between the theoretical and observed AI coverage markers—not to mention the fact that usage doesn't mean replacement. It's possible that this relationship will not hold in the future, but it's also possible that Moravec's Paradox will hold for the next couple of decades, and that computers and humans will continue to have distinct comparative advantages (or perhaps even complementary ones).

I technically agree with your point about there only being two possible futures. But I only think so because your first future covers far too many possible outcomes, including some in which AI is "transformative" but not necessarily superintelligent (or even "generally" intelligent).

Jagged progress is conceivable, but it's virtually impossible that AI could replace all coders and accelerate math research but not replace other jobs, because coding and math research (specifically ML-type math) are exactly the skills needed to accelerate AI development. If AI can accelerate AI development, then the timeline to getting an AI that can replace humans on all tasks becomes much shorter.
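
To illustrate the feedback loop with a toy model (the functional forms here are assumptions for illustration, not a forecast): suppose research effort scales with AI capability $C$ and progress per unit effort also scales with $C$, so that

$$\frac{dC}{dt} = kC^2 \quad\Longrightarrow\quad C(t) = \frac{C_0}{1 - kC_0 t},$$

which diverges at the finite time $t^* = 1/(kC_0)$. Once capability feeds back into the rate of capability growth, the remaining timeline compresses rather than stretching out.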

This is true if we assume that AI development has no cap. What if we find out that human-level intelligence is beyond AI's reach? Maybe acceleration just leads to us hitting the “ceiling” sooner. (Again, not making a probability judgment here, just pointing out that this is a plausible outcome.)
