This is a Draft Amnesty Week draft.

Note: This remains a work in progress. Please feel free to contact me with any thoughts or feedback on flaws in my thinking or on ways I could further flesh out the idea to make it clearer.

Let us start with a bit of background context for framing. For the vast majority of human history, our circle of empathy included ourselves and our tribe. What constitutes our tribe has grown from a few dozen people who were in one way or another directly related to us to nations of tens of millions. Moving beyond that, we have begun to extend empathy to all of humanity and even to some non-human creatures, beyond the ones we are already culturally inclined toward, like cats and dogs.

In recent years, there has been an effort to think not just about the creatures alive today but also about those that will be alive in the long term. After all, the thinking goes, why should our lives matter more just because we are the ones here now? And if we can make better choices that will positively impact the well-being of those future creatures, are we not ethically obliged to do so? This idea, known as longtermism, is a logical extension of our expanding circle of empathy.

After all, because we care so deeply about our own children, we put in massive effort to ensure they are well-fed, educated, and otherwise set up to live a good life. We also recognize that other people around the world are not so different from ourselves and care about their children and grandchildren just as much. As evidenced by the passionate activism around climate change, for example, many of us have now successfully expanded our circle of empathy not just to the rest of humanity, but to the rest of humanity plus a generation or two into the future.

Longtermists challenge us to think beyond that. Some of them are concerned with centuries, some with millennia, and some with millions of years, but the end goal itself is less discussed and not well defined. The longtermist perspective is essentially about ensuring that those of us in the 21st century do not do something foolish that precludes the possibility of a thriving, spacefaring, multiplanetary civilization. I certainly agree that is a desirable outcome in the long term, but it is also possible for us to think longer term than the longtermists.

If we think trillions of years into the future, once all of the Universe has been explored, all of the adventures that are to be had have been had, and the Universe is nearing its heat death, what end state do the longtermists foresee?

Some have been so disturbed by the increasing power of humans to shape and damage their environment that they have argued for an end goal of voluntary human extinction through non-reproduction. While the formal organization associated with this way of thinking is quite fringe, a large number of people agree with the sentiment and do not wish to have children as a result. They have been told that having a child is the single worst thing one can do for the environment and that the planet is already overpopulated.

There are several issues with this line of thinking. First, it is a self-terminating idea. Any idea that demands its followers not reproduce will be memetically weak, since the potent parent-to-child vector of idea transmission is automatically eliminated (unless the individual adopts a child). Those who do not follow that line of thought, meanwhile, will continue to have children, some of them many.

Let us leave that aside for a moment and consider what would happen if this ideology were successful in its aims, whether by convincing 100% of humans not to have children or by forcing the matter, say through some hypothetical highly contagious and deadly disease. Even if humans were removed from the playing field, life would continue on Earth and wherever else it exists in the vast Universe.

Eventually, another intelligent species would evolve. It could take anywhere from hundreds of thousands to many millions of years, but it would happen eventually, somewhere. Getting to that point would require an immense amount of suffering, akin to the suffering our own ancestors experienced over millions of years. Their lives were nasty, brutish, and short, as the lives of many of our fellow humans continue to be today.

Our own recorded history began only around 5,000 years ago, with roughly 300,000 years of unrecorded history before that. More recently still, it is only in the last few centuries that modern nation-states and corporations came into being, enabling global trade networks, industrial economies, and innovation on an unprecedented scale. The development of these systems has allowed humanity to emerge into relative abundance and to develop technologies like space travel, artificial intelligence, and gene editing in what amounts to the blink of an eye.

This holds true not just for our most advanced societies, but for all of humanity. Over the past decade, I have lived in some of the least developed nations on Earth. For all the hardships people in these nations undergo, I can say from firsthand experience that the lives they live would be totally alien to past humans. In these places, many people have access to technologies like antibiotics, currency notes, woven fabrics, fixed dwellings, steel machetes and cooking pots, radios, basic cell phones and solar panels, and perhaps even a vehicle like a speedboat or a used RAV4, things that would appear magical or simply incomprehensible to humans throughout most of our history. When we have collectively come so far in such a short time, it would be ludicrous to throw it all away.

If we nonetheless self-extinguish, intentionally or otherwise, it is likely that at some point, somewhere in the Universe, another intelligent species would evolve. As with humanity, it would likely take hundreds of thousands of years for that species to reach the stage we are at today. If that species also self-annihilates, say out of antinatalist guilt over its negative impact on the environment, this would do nothing to alleviate suffering. It would simply mark a continuation of an endless cycle that leads nowhere. Rather than put the rare emergence of intelligence to use in reducing suffering, this view, in its most extreme form, simply perpetuates suffering indefinitely.

The ultimate problem with this idea, like longtermism, is that it does not resolve the question of the end state.

You might be thinking, “100 trillion years is a long way off. Why should I think about that now?” The reason is that once we agree on the end state we want to see, we can work backwards from there to inform our collective worldviews today.

Consider R.N. Smart’s argument that a “ruler who controls a weapon capable of instantly and painlessly destroying the human race” is a logical implication of Karl Popper’s negative utilitarianism. According to Smart, using such a weapon would be the ruler’s duty, as it would be “bound to diminish suffering.” Although Smart proffered that argument as a reductio ad absurdum of negative utilitarianism, some people, such as antinatalists and human extinctionists, are in fact in favor of such an outcome.

However, what Smart and the extinctionists do not consider is this: what comes after the ruler ends all human life? All you would have is hundreds of millions of years of animals fighting to fulfill their basic needs in a state of nature, until and unless another intelligent species evolves on Earth. Then the brutal cycle of evolution, innovation, guilt, and self-destruction would repeat until some natural catastrophe wipes out life once and for all, rendering all that suffering and struggle truly pointless.

One possible outcome of longtermist thinking would be a Universe filled with innumerable beings, hundreds of trillions of them, experiencing pure, eternal ecstasy. While there is a certain appeal to that outcome compared to the extinctionist impulse, it has a kind of emptiness to it. It simply takes things we care about now, namely a thriving civilization filled with happy people, and projects them indefinitely into the future.

As an intermediate goal, I have no objection to it. As an end state, however, it is akin to the paperclip problem: it takes something we want in abundance (paperclips in Bostrom’s example, happy people who are not suffering in the longtermist one) and drives it to a dystopian outcome. In this case, the end state may be something akin to trillions of brains in trillions of vats, all hooked up to a system that disables hedonic adaptation and pumps them full of dopamine while they hallucinate. Presumably, this would go on until the Universe ends, putting an end to that project and to all of existence.

If neither creating trillions upon trillions of people who experience as much happiness as possible for as long as possible nor extinguishing all humans is a good end outcome, what would be?

When I first developed this view in the early 2000s, I was uncertain about its validity and unsure how best to express or test it. Before gaining enough confidence, I had to understand the views of the world I was born into. I have spoken to followers of Abrahamic faiths, such as Christian pastors and Muslim imams, who believe the end game is to follow God’s will so that they may reach Heaven or Jannah and exist there blissfully forever at His side. Likewise, I have spoken to Hindu and Buddhist priests and scholars who believe the aim is to overcome samsara and duhkha to achieve moksha or nirvana. I have also spoken to anxious Europeans choosing not to have children in order to protect those children from suffering or from contributing to a climate disaster, and to Oxford-educated effective altruists, some of whom would find the idea of trillions of brains in trillions of pleasure vats akin to the Heaven envisioned by the religious.

From these conversations, I saw a common thread that aligned with my own thinking: all of these traditions prioritize ending cycles of suffering. How we should go about that, however, is not immediately clear, especially if we wish to do so in the temporal world.

If we seek to accomplish this goal in the temporal world while rejecting both human extinction and hooking our brains up to an experience machine as valid aims, where does that leave us? I have come to the conclusion that the following principles encompass where we should be aiming:

  1. Ensure humans or some other intelligence continues to exist forever. There is no purpose in gaining all of the knowledge of the Universe if we are all destroyed at the end of it. Our hard-won knowledge must be protected.
  2. Progressively gain knowledge, until all is known, except for knowledge that requires inducing suffering. For instance, one of the most unacceptable and unethical acts possible would be to create virtual worlds full of ignorant creatures who suffer and who themselves create further virtual worlds full of ignorant, suffering creatures, ad infinitum. The fact that this is so deeply unethical is one reason I doubt Bostrom’s argument that we likely live in a simulation: it does not account for the desire of highly advanced civilizations to prevent such simulations from ever being created.
  3. End all suffering. This includes both human and non-human suffering, on Earth, throughout the Universe, and in any hypothetical multiverses or virtual worlds. Basically, anywhere that suffering can be eliminated, it must be.

These principles allow us to work backwards to today to determine what to prioritize collectively. If we understand our place in the Universe, we can more readily determine how to move forward. We should worry about things like climate change, but we should reject the extinctionist and antinatalist impulses as shortsighted and wrongheaded. Likewise, we should aim for a thriving, happy multiplanetary species in the long term, but without losing sight of the end state.

So what would this end state I am proposing look like? I would put forward that it should be an eternal knowledge custodianship. This custodianship could take any number of forms, perhaps a single entity like Asimov’s Multivac or an entire Type IV civilization on the Kardashev scale. We will have plenty of time to work out those sorts of specifics. What is important for the time being are the broad strokes of the end state. Specifically, the principles referenced above should be enshrined in its behavior: 1) ensuring the ongoing existence of sentience, 2) gaining total knowledge except that knowledge which requires inducing suffering, and 3) ending all suffering.

This approach ensures that the billions of years of suffering that lifeforms have endured were not totally meaningless, and it prevents that cycle from repeating. Instead, it leaves, at the end of it all, a peaceful Universe in a perpetual state of fully knowing itself.

Comments

There was a lot in here that felt insightful and well considered. 

I agree that thinking about the end state and humanity in the limit is a fruitful area of philosophy with potentially quite important implications. I wrestle with this sort of thing a lot.

One perspective I would note here (I associate this line of thinking with Will MacAskill) is that we ought to be immediately aiming for a wiser, more stable sort of middle ground and then aim for the "end state" from there. I think that can make sense for a lot of practical reasons. I think there is enough of a complex truth to what is and isn't morally good that I am inclined to believe the "moral error as an x-risk" framing and, as such, I tend to place a high premium on option value. I think, given the practical uncertainties of the situation, I feel pretty comfortable aiming for / punting to some more general "process of wise deliberation" over directly locking my current best guess into the cosmos.

That said, y'know, we make decisions every day and it is still definitely worth tracking what my current best guess is for what ought actually to be done with the physical matter and energy extant in the cosmos. I am partial to much of the substance that you put forward here.

  1. "ensuring the ongoing existence of sentience"

    "sentience" is a bit tricky for me to parse, but I will put in for positively valenced subjective experience :) 

  2. "gaining total knowledge except that knowledge which requires inducing suffering"

    I mean, sure, why not? I think that sort of thing is cool and inspiring for the most part. There are probably things that would count as "knowledge" to me but which are so trivial that I wouldn't necessarily care about them much. But, y'know, I will put in for the practical necessity of learning more about the universe as well as the aesthetic, profound beauty of discovering the rules of the universe and the nature of nature.

  3. "ending all suffering"

    Fuck ya dude! I'm against evil, and suffering seems like a central example of that. There may even be more aesthetic or injustice-like things that I would consider evil even in the absence of negatively valenced experience per se, which I might also entertain abolishing.

There is a lot to be said about the "end state" that you don't really mention here. Like, for example, I think it is good for people to be really, exceptionally happy if we can swing it. Honestly, I don't know how to think about population ethics.

One issue that really bites for me when I try to picture the end of the struggle and the steady end state is:

  1. people often intrinsically value reproducing
  2. I want immortality
  3. Each person may require a minimum subsistence amount of stuff to live happily (even if we shrink everyone or make provably morally relevant simulations or something)
  4. Finite materials / scarcity

I have no reasonable way out of this conundrum and I hate biting the "population control" bullet. That reeks of, like, "one child policy" and overpopulation-motivated genocides (cf. The Legacy of India’s Quest to Sterilize Millions of Men / Uttawar forced sterilizations). I think concerns in this general vein about the resources people use and the limits to growth are also pretty closely tied to the not-uncommon concerns people have around overpopulation / climate heads not wanting to have kids.

Also, to make it less abstract, I will admit that my morals / impulses are fundamentally quite natalist and I would quite like to be a dad someday. Even if we grant that resource growth exceeds population growth for now, it seems hard to escape the Malthusian trap forever, and I think this is a very fundamental tension in the limit.

Thanks very much for taking the time to respond, Jacob! 

I think, given the practical uncertainties of the situation, I feel pretty comfortable aiming for / punting to some more general "process of wise deliberation" over directly locking my current best guess into the cosmos.

A useful analogy may be how people think about what they want their lives to look like when they're old. The reality of how one's life will actually look is uncertain, it's possible that one will figure things out along the way that change how one wants old age to be, and there is nothing wrong with engaging in a process of wise deliberation throughout one's life. Nonetheless, having an aim that one has reasoned through can help guide that process of wise deliberation on the way to old age.

"sentience" is a bit tricky for me to parse, but I will put in for positively valenced subjective experience :)

I wouldn't necessarily say it need be positively valenced, but rather at least not negatively valenced. Neutral may suffice.

There are probably things that would count as "knowledge" to me but which are so trivial that I wouldn't necessarily care about them much. But, y'know, I will put in for the practical necessity of learning more about the universe as well as the aesthetic, profound beauty of discovering the rules of the universe and the nature of nature.

Perhaps not every little trivial thing would be necessary to know. But when thinking in terms of something with unimaginable cognitive capacity and limitless time, even trivial things might be worth knowing.

There is a lot to be said about the "end state" that you don't really mention here. Like, for example, I think it is good for people to be really, exceptionally happy if we can swing it. Honestly, I don't know how to think about population ethics.

Sure! I'm happy to address that. I assume that by that time there need not be life as we know it. I'm not necessarily opposed to it in addition to what I've laid out, but I don't think it's particularly important when we're thinking about time spans of googols of years.

people often intrinsically value reproducing

Just as humans are the byproduct of an evolutionary process, so is the human and animal desire to reproduce. This, too, is something I'm not necessarily opposed to maintaining but don't think is particularly important over the timescales we're talking about. For instance, we are already seeing wealthy human societies experience sub-replacement fertility. Perhaps once all of humanity is wealthy, in a few centuries or millennia, the whole world will experience sub-replacement fertility? I can't say, but I view these as relatively short-term concerns.

I have no reasonable way out of this conundrum and I hate biting the "population control" bullet. ... I think concerns in this general vein about the resources people use and the limits to growth are also pretty closely tied to the not-uncommon concerns people have around overpopulation / climate heads not wanting to have kids. ... Also, to make it less abstract, I will admit that my morals / impulses are fundamentally quite natalist and I would quite like to be a dad someday. Even if we grant that resource growth exceeds population growth for now, it seems hard to escape the Malthusian trap forever, and I think this is a very fundamental tension in the limit.

I tend not to be too concerned about the issues raised by antinatalists regarding overpopulation and climate, and I tend to be pretty bullish on resource and technological growth exceeding population growth for the foreseeable future. I've counseled friends of mine who are concerned about climate change that they shouldn't be afraid to have kids on that basis, for example. However, I prioritize a different set of concerns when thinking about issues decades or centuries into the future versus periods of googols of years into the future. When thinking about those kinds of timespans, the concerns we prioritize on a day-to-day or century-to-century basis look small in comparison.
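To make that contrast concrete, here is a rough back-of-the-envelope sketch. The numbers are purely illustrative assumptions on my part (roughly 10^10 people today, a modest 0.1% annual growth rate, and the common order-of-magnitude estimate of 10^80 atoms in the observable universe), but they show why the Malthusian tension you describe only binds on timescales that are still vanishingly short next to googols of years:

```python
# A toy calculation, not a forecast: how long could even very slow
# exponential population growth continue before hitting a hard
# physical ceiling? All numbers are illustrative assumptions.
import math

start_population = 1e10   # roughly the current human population
annual_growth = 0.001     # 0.1% per year, far below historical peaks
atoms_in_universe = 1e80  # common order-of-magnitude estimate

# Solve start_population * (1 + annual_growth) ** t = atoms_in_universe for t.
years = math.log(atoms_in_universe / start_population) / math.log(1 + annual_growth)
print(f"Ceiling of one person per atom reached after ~{years:,.0f} years")
# Prints roughly 161,000 years.
```

Even implausibly slow exponential growth saturates the observable Universe within a couple hundred thousand years, an eyeblink next to the timespans I have in mind here, which is consistent with viewing these as relatively short-term concerns.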

Again, thank you very much for the feedback Jacob. Much appreciated!
