
Interesting! Thank you for writing this up. :)

It does seem plausible that, by evolutionary forces, biological nonhumans would care about the proliferation of sentient life about as much as humans do, with all the risks of great suffering that entails.

What about the grabby aliens, more specifically? Do they not, in expectation, care about proliferation (even) more than humans do?

All else being equal, it seems -- at least to me -- that civilizations with very strong pro-life values (i.e., that think that perpetuating life is good and necessary, regardless of its quality) colonize, in expectation, more space than compassionate civilizations willing to do the same only under certain conditions regarding others' subjective experiences.

Then, unless we believe that the emergence of dominant pro-life values in any random civilization is significantly unlikely in the first place (I see a priori more reasons to assume the exact opposite), shouldn't we assume that space is mainly being colonized by "life-maximizing aliens" who care about nothing but perpetuating life (including sentient life) as much as possible?

Since I've never read such an argument anywhere else (and am far from being an expert in this field), I guess that it has a problem that I don't see.

EDIT: To be clear, I'm just trying to understand what the grabby aliens are doing, not to reach any conclusion about what we should do vis-à-vis the possibility of human-driven space colonization. :)

That sounds reasonable to me, and I'm also surprised I haven't seen that argument elsewhere. The most plausible counterarguments off the top of my head are:

  1. Maybe evolution just can't produce beings with that strong of a proximal objective of life-maximization, so the emergence of values that aren't proximally about life-maximization (as with humans) is convergent.
  2. Singletons about non-life-maximizing values are also convergent, perhaps because intelligence produces optimization power so it's easier for such values to gain sway even though they aren't life-maximizing.
  3. Even if your conclusion is correct, this might not speak in favor of human space colonization anyway for the reason Michael St. Jules mentions in another comment, that more suffering would result from fighting those aliens.

I completely agree with 3, and it's indeed worth clarifying. Even ignoring this, the possibility of humans being more compassionate than pro-life grabby aliens might actually be an argument against human-driven space colonization, since compassion -- especially when combined with scope sensitivity -- might increase agential s-risks related to potential catastrophic cooperation failure between AIs (see e.g., Baumann and Harris 2021, 46:24), which are the most worrying s-risks according to Jesse Clifton's preface to CLR's agenda. A space filled with life-maximizing aliens who don't give a crap about welfare might be better than one filled with compassionate humans who create AGIs that might do the exact opposite of what they want (because of escalating conflicts and stuff). Obviously, uncertainty remains huge here.

Besides, 1 and 2 seem like good counter-considerations, thanks! :)

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though. Can you -- or anyone else reading this -- point me to any reference that would help me understand this?

I'm not sure I get why "Singletons about non-life-maximizing values are also convergent", though.

Sorry, I wrote that point lazily because that whole list was supposed to be rather speculative. It should be "Singletons about non-life-maximizing values could also be convergent." I think that if some technologically advanced species doesn't go extinct, the same sorts of forces that allow some human institutions to persist for millennia (religions are the best example, I guess) combined with goal-preserving AIs would make the emergence of a singleton fairly likely - not very confident in this, though, and I think #2 is the weakest argument. Bostrom's "The Future of Human Evolution" touches on similar points.

Thank you for the great post! I think my post might be relevant to 2.1.1. Animals [1.1]. 

(my post discusses factory-farmed animals in the long-term future, but that doesn't mean I see them as the only source of animal suffering in the long term)

Thanks for the kind feedback. :) I appreciated your post as well—I worry that many longtermists are too complacent about the inevitability of the end of animal farming (or its analogues for digital minds).

Each of the five mutually inconsistent principles in the Third Impossibility Theorem of Arrhenius (2000) is, in isolation, very hard to deny.

 

This post/paper points out that lexical total utilitarianism already satisfies all of Arrhenius's principles in his impossibility theorems (the theorems rely on other background assumptions, which is what makes this possible):

However, it’s recently been pointed out that each of Arrhenius’s theorems depends on a dubious assumption: Finite Fine-Grainedness. This assumption states, roughly, that you can get from a very positive welfare level to a very negative welfare level via a finite number of slight decreases in welfare. Lexical population axiologies deny Finite Fine-Grainedness, and so can satisfy all of Arrhenius’s plausible adequacy conditions. These lexical views have other advantages as well. They cohere nicely with most people’s intuitions in cases like Haydn and the Oyster, and they offer a neat way of avoiding the Repugnant Conclusion.
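For concreteness, here is one rough way to write Finite Fine-Grainedness down; the notation is my own gloss on the quoted description, not the paper's, and "slight" is left as a primitive just as in the quote:

  \[
  \begin{aligned}
  &\text{for any welfare levels } a > b, \text{ there are finitely many levels } a = w_0 > w_1 > \cdots > w_n = b\\
  &\text{such that each decrease from } w_i \text{ to } w_{i+1} \text{ is only slight.}
  \end{aligned}
  \]

A lexical view denies this by positing at least one pair of welfare levels with no such finite chain of slight steps between them, which is how it can hold on to the rest of the adequacy conditions.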

 

Also, for what it's worth, the conditions in these theorems often require a kind of uniformity that may only be intuitive if you're already assuming separability/additivity/totalism in the first place, e.g.:

  (a) there exists some subpopulation A that satisfies a given condition for any possible disjoint unaffected common subpopulation C (i.e. the subpopulation C exists in both worlds, and the welfares in C are the same across the two worlds),

rather than

  (b) for each possible disjoint unaffected common subpopulation C, there exists a subpopulation A that satisfies the condition (possibly a different A for a different C).

The definition of separability is just that a disjoint unaffected common subpopulation C doesn't make a difference to any comparisons.
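In quantifier form, the contrast is roughly the following, where \(\varphi(A, C)\) is just my shorthand for whatever condition the principle imposes on A relative to C:

  \[
  \begin{aligned}
  \text{(a)}\quad & \exists A\;\forall C:\ \varphi(A, C) && \text{(uniform: a single } A \text{ must work for every } C\text{)}\\
  \text{(b)}\quad & \forall C\;\exists A:\ \varphi(A, C) && \text{(non-uniform: } A \text{ may vary with } C\text{)}
  \end{aligned}
  \]

(a) implies (b) but not conversely, and it's the weaker non-uniform reading (b) that seems acceptable without already presupposing separability.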

So, if you reject separability/additivity/totalism or are at least sympathetic to the possibility that it's wrong, then it is feasible to deny the uniformity requirements in the principles and accept weaker non-uniform versions instead. Of course, rejecting separability/additivity/totalism has its own costs.

I might have missed it in your post, but descendants of humans encountering a grabby alien civilization is itself an (agential) s-risk. If they are optimizing for spread and ethically unaligned with us, then we will be in the way, and they will have no moral qualms about using morally atrocious tactics, including spreading torture on an astronomical scale to threaten our values in order to gain access to more space and resources, or we may end up at war with them. If our descendants are also motivated to expand and we encounter grabby aliens, how long would conflict between us go on for?

Perfection Dominance Principle. Any world A in which no sentient beings experience disvalue, and all sentient beings experience arbitrarily great value, is no worse than any world B containing arbitrarily many sentient beings experiencing only arbitrarily great disvalue (possibly among other beings).[15]

I'm confused by the use of quantifiers here. Which of the following is what's intended?

  1. If A has only beings experiencing positive value and B has beings experiencing disvalue, then A is no worse than B? (I'm guessing not; that's basically just the procreation asymmetry.)
  2. For some level of value v, some level of disvalue d, and some positive integer N, if A has only beings experiencing value at least v, and B has at least N beings experiencing disvalue d or worse (and possibly other beings), then A is no worse than B.
  3. Something else similar to 2? Can v and/or d depend on A?
  4. Something else entirely?

What I mean is closest to #1, except that B has some beings who only experience disvalue and that disvalue is arbitrarily large. Their lives are pure suffering. This is in a sense weaker than the procreation asymmetry, because someone could agree with the PDP but still think it's okay to create beings whose lives have a lot of disvalue as long as their lives also have a greater amount of value. Does that clarify? Maybe I should add rectangle diagrams. :)
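For what it's worth, here is that clarified reading put semi-formally; the symbols and the placement of the quantifiers are my own rendering rather than the post's wording:

  \[
  \begin{aligned}
  &\text{for all worlds } A, B:\\
  &\quad \text{if every sentient being in } A \text{ experiences only value (possibly arbitrarily great) and no disvalue,}\\
  &\quad \text{and } B \text{ contains sentient beings whose lives consist of nothing but disvalue, however great,}\\
  &\quad \text{then } A \text{ is no worse than } B.
  \end{aligned}
  \]

Unlike reading #2, no fixed value threshold, disvalue threshold, or headcount is involved.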
