Knowing the shape of future (long-term) value seems important for deciding which interventions would most effectively increase it. For example, if future value is roughly binary, the increase in its expected value is directly proportional to the decrease in the likelihood/severity of the worst outcomes, in which case existential risk reduction seems particularly useful[1]. On the other hand, if future value is roughly uniform, focusing on multiple types of trajectory changes would arguably make more sense[2].
So I wonder: what is the shape of future value? To illustrate the question, I have plotted in the figure below the probability density function (PDF) of various beta distributions representing future value as a fraction of its maximum value[3].
For simplicity, I have assumed future value cannot be negative. The mean is 0.5 for all distributions, which is Toby Ord’s guess for the total existential risk given in The Precipice[4], and implies the distribution parameters alpha and beta have the same value[5]. As this common value tends to 0, the probability mass concentrates near 0 and 1, i.e. future value becomes more binary.
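For reference, here is a minimal Python sketch of the kind of plot described above (my own reconstruction, not the Colab itself; the alpha = beta values are illustrative choices and may differ from those used there):

```python
# PDFs of Beta(a, a) distributions, all with mean 0.5, for a few values of a.
# Smaller a pushes the mass toward 0 and 1 (a more "binary" future);
# larger a concentrates it around 0.5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

x = np.linspace(0.001, 0.999, 500)    # future value as a fraction of its maximum
for a in (0.1, 0.5, 1, 2, 10):        # illustrative alpha = beta values
    plt.plot(x, beta.pdf(x, a, a), label=f"alpha = beta = {a}")

plt.xlabel("Future value (fraction of maximum)")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```

With alpha = beta = 1 the beta distribution is uniform; smaller values pile the mass up near 0 and 1, and larger values concentrate it around 0.5.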
- ^
Existential risk was originally defined in Bostrom 2002 as:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
- ^
Although trajectory changes encompass existential risk reduction.
- ^
The calculations are in this Colab.
- ^
“If forced to guess, I’d say there is something like a one in two chance that humanity avoids every existential catastrophe and eventually fulfills its potential: achieving something close to the best future open to us.”
- ^
According to Wikipedia, the expected value of a beta distribution is alpha/(alpha + beta), which equals 0.5 for alpha = beta.
I have two thoughts here.
First, I'm not sure I like Bostrom's definition of x-risk. It seems to dismiss the notion of aliens: you could imagine a scenario where lots of alien civilizations pop up independently, making future value fairly uniform regardless of what we do. Second, I think how binary our universe turns out to be will depend on the AI we make and/or our expansion philosophy. For example:
AI 1: Flies around the universe dropping single-celled organisms on every livable planet.
AI 2: Flies around the universe setting up colonies that suck up all the energy in the area and convert it into simulations/digital people.
If AI 2 expands through the universe, the valence of sentience in our lightcone would seemingly be much more correlated than if AI 1 expands. So the AI 1 scenario would look more uniform, and the AI 2 scenario would look more binary.
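To make the correlation point concrete, here is a toy simulation (my own sketch, with made-up numbers not taken from the post): if each colony's valence is an independent coin flip, the realised fraction of maximum value clusters tightly around its mean, whereas if every colony shares one coin flip, the fraction is essentially 0 or 1.

```python
# Toy model of the comment above (hypothetical numbers):
# n_colonies colonies each contribute one unit of value if their valence is good, zero otherwise.
import numpy as np

rng = np.random.default_rng(0)
n_colonies, n_worlds, p_good = 1_000, 100_000, 0.5

# "AI 1"-style: each colony's valence is an independent coin flip,
# so the fraction of maximum value concentrates tightly around p_good.
frac_independent = rng.binomial(n_colonies, p_good, n_worlds) / n_colonies

# "AI 2"-style: one shared coin flip decides the valence of every colony,
# so the fraction of maximum value is essentially 0 or 1.
frac_correlated = (rng.random(n_worlds) < p_good).astype(float)

print("independent colonies:  std of value fraction ~", frac_independent.std().round(3))
print("fully correlated ones: std of value fraction ~", frac_correlated.std().round(3))
```

With these made-up numbers, the standard deviation of the value fraction is roughly 0.016 in the independent case versus 0.5 in the fully correlated case, so it is the correlation across colonies that pushes the distribution toward the binary end.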