
This week, we are discussing the statement: “If AGI goes well for humans, it’ll go well for animals”. The announcement post, with a bit more info and a reading list, is here.

What is this thread for?

General discussions about and reactions to the debate statement. 

Some of the comments on this thread will be populated directly from the debate banner on the homepage — these will mostly be people explaining why they voted the way they did.

 However, you’re also welcome to comment on here directly, with any considerations you'd like to share, or questions you'd like to ask. 

How should I understand the debate statement?

Again, our statement is: “If AGI goes well for humans, it’ll go well for animals”

The statement will ultimately mean whatever people interpret it to mean. The key is to explain how you are interpreting the statement in the comment that you attach to your vote. However, I can share a few notes which might pre-empt your questions:

  • AGI: Artificial General Intelligence. What exactly this is, and how transformative it is likely to be for the world economy and our ways of life, is likely to be a crux in this debate. As such, I won't be offering a definition.
  • Goes well: Likewise, what it means for AGI to go well is likely to be a live element of the discussion. For example, 'going well' might mean humans are still in control of AI tools, or it might mean that humans are replaced by more beneficent machines. I'll leave this up to you.
  • Animals: I'm talking about non-human animals. I'm specifically naming animals rather than 'other minds' to signal that this conversation isn't primarily about digital minds.

Message me or comment in the thread with me tagged if you have any questions. 


 

Comments (19)
Jim Buhler
0% agree

I think there are plenty of crucial sign-flipping considerations pointing both ways (I'll publish a post on this today), and that our takes certainly fail to account for some of them, in ways that likely make these takes irrelevant. 

And even if someone's evaluation somehow does not omit a single crucial consideration, they have to make opaque judgment calls on how to weigh up the conflicting pieces of (theoretical and empirical) evidence. I see little reason to believe such judgment calls would do better than chance.

Clarification on what my "0% Agree" means: I confidently disagree that we should believe it'd go well for animals, but I don't think we should believe the opposite either. I think our cause prio should not rely on any assumption on this question.

I lean slightly toward thinking that moral progress in that area would become so cheap that people would accept it.

Mjreard
60% agree

Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won't involve animals or suffering at all.

MaxReith
10% disagree

 I think this depends on whether farmed or wild animal welfare matters more. I don't have an answer, so let's treat it as 50/50. 

  1. If wild animals matter more, what could happen?  On the upside, AGI might enable us to help wild animals.  On the downside, it might lead to humans creating biospheres on other planets, which would increase the suffering of wild animals by many orders of magnitude.
  2. If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.). The downside could be that people get richer and want to eat more meat, or that AGI changes the production of farmed animals in a way that increases suffering. 

Again, I don't know whether the upside or downside in each scenario is more likely, so let's say each is 50/50 again. I think this makes scenario 1 EV-negative and scenario 2 EV-positive, with the aggregate being slightly EV-negative.

If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.).

Nitpick, but it seems unfair to consider this an upside rather than the mere absence of a downside, since the relevant counterfactual scenario, in expectation (absent AI safety work), is a misaligned AI that takes over and probably ends animal farming as it kills or disempowers humans.

AI safety cannot take the credit for a potential future reduction or end of farmed animal suffering if it preserves humanity, without which animal farming would not exist to begin with.

NickLaing
0% agree

I think the answer to this question is too many branches down a tree of possible futures to meaningfully predict. What happens at multiple branch points could swing this either way. If I have time I'll share more about what I mean.

Hi! There are no labels on the slider bar, so it's initially unclear which side is agree vs disagree.

Oh no, thanks so much for flagging this! Toby was on holiday today unfortunately, so I've just updated it.

Fair call disappearing after dropping the debate slider to avoid the upcoming bedlam...

Aaron Bergman
40% disagree

Vibes, I have no idea, I hope someone convinces me with good takes

alene
100% disagree

The good news is that life on Earth has been going better and better for humans over the millennia. For instance, we have technologies that make it easy to grow tons and tons of food, so lots of people can eat as much as they want. We have cures for lots of previously deadly diseases, so lots of us humans can live a very long time. And lots of people live in countries that recognize their rights. We also have a robust international economy that makes it really easy for a large number of people to buy the goods and services they want—and for lots of other people to get paid producing those goods and services!

The bad news is that none of this has translated to things going well for animals. :-( In fact, it has translated to the opposite. Things have been going worse and worse for animals over the millennia. For instance, factory farming, which causes a HUGE amount of suffering for animals, developed very recently in human history, and it developed as a byproduct of humans getting the things they want most (like a great economy, and the ability to produce food cheaply). So we have seen that humans getting more and more of what we want doesn't translate to animals getting what they need.

Of course, humans do also want animals to be treated well, on some level! But humans' main goals are human-oriented goals. And so when we get more and more ability to achieve our goals, we put those human-oriented goals first, resulting in negative externalities for animals.

If AI goes well for humans, it'll go well for humans. It'll be aligned with what humans want. And that will mean it's aligned with prioritizing human interests over all others. Sure, it'll care about animals a little, the way humans care about animals a little. But it will continue to put human interests first. And that will continue to result in externalities for animals.

The same way people harm animals now (e.g. for food, entertainment, fashion, science, etc.) may continue. And new ways to harm animals may develop that we never could have imagined before AI.

For instance, people love having pet dogs. When their pet dogs die, people are sad. People may want to be able to upload their pet dog's brain to the cloud to hang out with the pet dog when their dog dies. But trying to develop this technology may be a lot of work. AI may do the work by uploading 100,000 dog brains, or 100,000 copies of the same dog brain, to the cloud, and running various tests to see what works best. Perhaps a lot of these dogs will suffer immensely due to some mistake AI made in an early draft or some feature AI failed to include. And perhaps the suffering will be made worse because the dogs don't have bodies and cannot even express their suffering without vocal cords or paws. Eventually, AI may work out the kinks before it rolls out the final keep-your-dead-pet-alive-as-an-app-on-your-phone product. But there's all that behind-the-scenes suffering in the meantime.

Humans care about animals a little. But humans love to turn a blind eye to behind-the-scenes suffering, so humans won't be too upset about this situation. Then maybe AI realizes humans would like an upgrade to their pet-on-your-phone product. And that means AI needs 100,000 more copies of dog brains to do more experiments. AI that is fully aligned with human interests would realize humans would like the upgrade more than humans would be bothered by the suffering inherent in creating the upgrade. So AI will create the upgrade.

This is just an example to illustrate my point. But I think there are lots of ways animals can be caused to suffer that we can't even imagine right now.

What animals need is for AI to be aligned with animal interests, too—not just human interests.

PabloAMC 🔸
30% disagree

AGI could, in principle, find solutions for the key problems that animals face, but I would argue the main issue is that it won't automatically enlighten humans.

I'd really like it if AI resulted in amazing plant based or cultured meat, and that the general abundance coming from AI means that people can focus their thinking on morality, not just making their lives go okay. 

BUT, so far, new technology and improved economic conditions have caused farmed animal suffering to get worse.

So I have a big uncertainty, but lean disagree. 
 

shepardriley
30% disagree

No particular strong reason, this is my intuition but curious to see people's reasoned takes.

Steven Rouk
60% disagree

I'm quite uncertain, but in general I don't think it's been the case that "if X technology goes well for humans, it'll go well for animals". I think in some key cases, it's been the exact opposite, actually—e.g., industrialization leading to the rise of factory farming and killing/causing suffering to many more animals.

However, I also think that AGI is going to be quite different from most technologies, at least in some ways (and definitely as it goes past AGI to ASI), and so I'm quite uncertain about how "going well for humans" might positively impact "going well for animals" in this specific case.

But I still see AGI as mostly being a technology developed by humans for human purposes, so it will be guided as such. And humans still predominantly use other animals as resources (for food, testing, raw materials, etc.). So, I think the default trajectory would probably be negative unless there is significant effort invested in helping AGI go well for nonhumans specifically.

Hazo
60% ➔ 50% agree

A few different potential mechanisms could help farmed animals:

  • Solving cultivated meat or brainless animals
  • Creating better welfare technologies (e.g. solving all disease issues on current farms)
  • Generating enough societal wealth to make welfare improvements like lowering stocking density trivial

More abstractly, people generally care about welfare, so it will be one of the things that an aligned AGI optimizes for. However, the outcome won't be optimal for animals, because AGI won't be directly optimizing for their welfare. For example, most people don't think it's wrong to eat meat, and we might still not want to do things like beneficial vaccines or genetic edits.

Wild animals, less clear though!

If AGI goes well for humans, this will likely mean a lot of technological development. This would likely include technologies allowing for products equal to or superior on the dimensions humans like, that don't have the animal welfare entailments. I realize that there have been some arguments that people would still prefer products created through suffering even if alternatives could be just as cheap, satisfying, and convenient, but I think that attitudes would change in the medium to long-term if those conditions were met.

Ligeia
30% agree

Epistemic status: not a professional, not sure.

IMO, if AGI goes well for humans, then it would at least have a decent grasp of general ethics, which includes animal welfare. AGI that hasn't got good ethics wouldn't benefit humans; it'd just paperclip around. Since I have a short-ish timeline, I think a somewhat-ethical and empowered AGI will benefit animals more than speciesist HGIs would.
