In a recent report, Toby Ord and I introduce the idea of 'existential hope': roughly, the chance of something extremely good happening. Decreasing existential risk is a popular cause area among effective altruists who care about the far future. Could increasing existential hope be another useful area to consider?
Trying to increase existential hope amounts to identifying something that would be very good for the expected future value of the world, and then trying to achieve it. This could include establishing more long-term-focused governance (where the benefit perhaps comes from reduced existential risk once that state is reached), or effecting a value shift in society so that it becomes normal to care about avoiding suffering (where the benefit may come from a much lower chance of large amounts of future suffering).
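To make this concrete, here is a toy expected-value decomposition (the three-way partition and the symbols are my own illustration, not from the report): divide possible long-run futures into eutopia, a middling outcome, and existential catastrophe. Then

$$\mathbb{E}[V] = p_{\text{eutopia}}\,V_{\text{eutopia}} + p_{\text{mid}}\,V_{\text{mid}} + p_{\text{cat}}\,V_{\text{cat}}.$$

Reducing existential risk lowers $p_{\text{cat}}$, while increasing existential hope raises $p_{\text{eutopia}}$; either move increases $\mathbb{E}[V]$ so long as the probability mass shifts from worse outcomes to better ones.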
What other existential hopes could we aim for?
Technical note: the idea of increasing existential hope is similar to that of a trajectory change, as explained in section 1.1.2.3 of Nick Beckstead's thesis. The ideas are distinct in that it is extremely hard to tell when a trajectory change has occurred, because we don't know what the long-term future will look like; in contrast, we can have a much better idea of whether we have changed expectations.
Probably the most important "good things that can happen" after FAI are:
Whole brain emulation. It would allow eliminating death, pain, and physical violence, not to mention ending discrimination and social stratification on the basis of appearance (although serious investment in cybersecurity would be required).
Full automation of the labor required to maintain a comfortable standard of living for everyone. Avoiding a Malthusian catastrophe would still require a reasonable reproductive culture (especially given immortality via e.g. whole brain emulation).
It seems that developing these would increase expected value massively in the medium term. I'm not sure what the effect on long-term expected value would be (since we'd expect to develop these at some point anyway in the long run).