This is a crosspost from my Substack, Regression to the Meat. My typical forum posts are written in a more dispassionate, research-y style, but I'm sharing this here because it touches on something that's been discussed in a few EA forum posts previously.
In August, Dwarkesh Patel interviewed Lewis Bollard, Open Philanthropy’s Farm Animal Welfare Program Director, and opened with: “At some point we'll have AGI. How do you think about the problem you're trying to solve? Are you trying to make conditions more tolerable for the next 10 years until AI solves this problem for us?”
Lewis responds, basically, that better technology might make animal suffering worse if we use it to do “ever more intensive” farming, and that even if AGI invents totally excellent meat alternatives, there will still be cultural and political barriers to their adoption, and we still need to do that work.
It’s a good answer, and it keeps the conversation flowing. My less diplomatic answer would probably have been to turn it around and hammer at the premise. Dwarkesh, what is your theory of the world where something we’ve been doing for as long as we’ve been on this planet, however you define that, will suddenly wrap up? Can you think of anything, ever, that went from everywhere to nowhere in ten years?[1]
For whatever reason the exchange has been nagging at my attention, and there have been a few EA forum posts in a similar vein. Since other people seem to find the topic interesting, I’d like to explain why it occupies zero of my professional attention. (The short answer is that I expect AI to be sharply curtailed by risk-averse regulations in my lifetime.)
This post is not precisely about animals. It’s about a theory of technological change and how societies adapt to it. I’ll first sketch out the trajectory I expect AI to take in the next 10-50 years; then explain why we still need to do the hard work of persuasion under that scenario; and finally argue that ending factory farming is still worth working on if I’m wrong, including in worlds where AI either completely solves the lab-grown meat problem or kills us all.
I expect AI to follow a trajectory like nuclear power’s
Nuclear power is a big deal. It’s about 70 years old. There are ~440 nuclear power plants on Earth, which collectively generate about 9% of global electricity. Ballpark, we’d need a few thousand plants to generate all global electricity — ChatGPT says 3100-3500 1GW plants — and about 6X that to produce all ‘final energy.’
It costs ~$3B to build a 1GW plant in China and about twice that in the US. I’m not claiming to be an expert in this area, but apparently the US’s costs could fall to about $3.5 billion/GW if we relaxed constraints like the “as low as reasonably achievable” ionizing radiation standard. Replacing all fossil fuels with nuclear power would cost roughly $7-30T at baseline. If you add 10% on top of that for transmission/infrastructure costs and assume graft/corruption will eat another 20% — who even knows — you get a number that the world can afford. Especially if we treated nuclear technology advancements as a core civilizational goal and invested accordingly.[2]
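For concreteness, here’s a minimal back-of-envelope sketch of that arithmetic, using only the rough figures quoted above (the ChatGPT plant counts, the Chinese and US per-plant costs, and the 10%/20% markups). Every input is an illustrative assumption of mine, not a sourced estimate, and pairing the low plant count with the low cost (and high with high) is just to bracket the range.

```python
# Back-of-envelope sketch of the cost figures above (illustrative only;
# all inputs are the rough numbers quoted in this post, not authoritative data).

plants_needed = (3_100, 3_500)       # ~1GW plants to supply all global electricity
cost_per_plant_bn = (3.0, 6.0)       # ~$3B/GW in China, roughly twice that in the US
infrastructure_markup = 0.10         # extra for transmission/infrastructure
graft_markup = 0.20                  # assumed losses to graft/corruption

for plants, cost in zip(plants_needed, cost_per_plant_bn):
    baseline_tn = plants * cost / 1_000                             # $ trillions
    total_tn = baseline_tn * (1 + infrastructure_markup) * (1 + graft_markup)
    print(f"{plants} plants at ${cost:.0f}B each: "
          f"~${baseline_tn:.0f}T baseline, ~${total_tn:.0f}T with markups")

# Rough output: ~$9T baseline (~$12T with markups) at the low end,
# ~$21T baseline (~$28T with markups) at the high end — the same
# ballpark as the $7-30T range quoted above.
```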
But we’re not doing that. There are currently about 70 nuclear power plants under construction. Zero of them are in the US. Germany is denuclearizing and experiencing periodic energy shortages. We collectively lost our appetite for nuclear power because a few prominent nuclear disasters killed a few hundred people over many decades. (Air pollution from fossil fuels is thought to kill about 5 million people a year.)
I expect AI to follow a similar path. I anticipate rapid progress in LLMs for both current use cases and new ones. (A college friend is working on putting researchers like myself out of business 😉.) And then I expect several dozen or several hundred people to die from AI-related mishaps or terrorism. Suppose a pilot sleeps on the job while an LLM-based assistant crashes the plane, or an autonomous truck crashes into a school/hospital, or a cult starts worshipping a chatbot and does doomsday stuff. Seems pretty plausible to me! At that point I expect the West’s fundamentally lawyerly culture to take the reins and AI to be strictly curtailed. That’s what we do when things are promising and dangerous. We do not become more utilitarian when the stakes get higher. Fear eats the soul, for people and for countries.
I’m kind of a techno-optimist, and when this happens I’ll be sad. I think the turn away from nuclear power is one of our civilization’s great mistakes. If AI can radically transform material/organic sciences, I want to see that unleashed and society radically upended. But I’m not expecting it. I am a bit baffled that other people seem to. Has anything in your lifetime, or in your parents’ lifetime, been like that?
Also, to clarify, nuclear power has been transformative. 9-10% of global electricity production is a lot of lightbulbs! But it’s not some civilization-altering thing. It just exists in tandem with other, older things, fueling our wants and needs. We could be aiming to fuel mass desalination to terraform the American west or the Sahara, which would sequester a few decades worth of carbon, open a huge new frontier for productive agriculture, and dramatically lower spatial pressures on biodiversity. But we’re not doing that because we’re scared of what it would take. That’s who we are. We get a lot of utility from arguing about things, perhaps more than from solving them. This is, to me, a civilization-defining trait.
If I’m wrong, we’d still need to talk to people
To repeat something I said to Kenny Torella, persuasion is a beautiful thing. I’m not ready to give up on it. Let’s say AI-assisted labs make huge progress on lab-grown meat. First, in practical terms, ‘progress’ here means lowering the energy costs of production, because we already have lab-grown meat in Oakland, Singapore, and Tel Aviv. But it’s expensive. Meat, by contrast, is cheap and available everywhere. If you think of an industrial chicken plant as a macroorganism that converts corn into tasty, protein-rich meat, it is incredibly efficient: Lewis estimates that it takes about two calories of grain to produce one calorie of chicken.[3] Let’s say AI leads to breakthroughs that give lab-grown meat similar efficiency and therefore a similar price. Great! Now we’ll have a bunch of fundamentally people problems, i.e. matters of persuasion:
- Who will convince the FDA/EMA to permit it?
- Which restaurants will carry it?
- How will we get the MAHA movement to give it a chance given their general hesitance about highly manufactured/processed foods?
- Can we persuade Florida or Montana to permit its sale?
- Will the EU allow folks to market plant-based products with meaty labels?
I see an obvious role for advocates and researchers. So does Lewis. (My colleague Jacob Peacock provides a nice overview of consumer attitudes towards plant-based meat in Price-, taste-, and convenience-competitive plant-based meat analogues would not currently replace the majority of meat consumption: A narrative review.)
A lot of AI scenarios are orthogonal to animal welfare issues
A friend once posited that AI doesn’t need to literally kill us all for it to be a big problem. His example was AI agents capturing like 20% of global electricity, and we just have to live with it, the way that Mexico has to live with parasitic, seemingly ineradicable cartels. That sure would suck! But I don’t see the implications for animal welfare one way or the other.
Or imagine, as Neal Stephenson does in Fall, that AI generates endless addictive slop, and the “Five companies/Running everything I see around me” continue to improve at beaming it directly to our eyeballs, and eventually most people just end up spending all day staring at nonsense on their AI goggles, human relations wither, we’re all sad, etc. Again, very bad! But unclear how this affects animals. Probably factory farming would just continue onwards. In which case, we’re back to where we were, which is needing to do the work.
Suppose the worst (or best) happens
Personally I view the “we’re all going to die” or “we’re all going to live in utopia” scenarios as very unlikely.[4] But I might be wrong. So back to Dwarkesh’s problem: Let’s say that by 2035, it’s either all going to be over or we’ll have infinity lab-grown meat for $0. Suppose those were the only two possible outcomes. Why continue working on ending factory farming in the meantime?
Because factory farming is very, very bad. It is many holocausts’ worth of bad. Stopping it even a day sooner than it would otherwise end is good. I very much doubt that you have something else more important to work on. Maybe, like Aella, you “like throwing weird orgies” and you’re “like — well, we’re going to die. What’s a weirder, more intense, crazier orgy we can do? Just do it now.” That’s great, spend your evenings doing that! But I still think you can find time to work on solving something really bad in the morning.
Whether anything we do actually works is a separate problem. But we’re a lot more likely to find something that works if we are actually trying. I much prefer that, in any world, to waiting for a deus ex machina.
- ^
Some animal advocates would reach here for the comparison to slavery, whose legal status changed dramatically over the 19th century. To which I would say that tens of millions of people are slaves today, compared to about 12.5 million people enslaved in the Atlantic slave trade. You can ‘win’ the moral fight in some places and still be nowhere close to getting the job done.
- ^
I know nothing about fusion, but here is some evidence it’s happening.
- ^
Eric Gastfriend (who, it should be said, is smart and has pivoted to AI safety, which is some evidence in favor of its being worth doing) once said the ratio was more like 4 to 1, but either way, it’s way more efficient than a herd of cows or any extant lab-grown alternatives.
- ^
My probability of any global catastrophe killing >50% of the human population in a single year over the next 200 years is about 1%, which is very high! But I think the most likely culprit is war.