This part of Sam Bankman-Fried's interview on the 80K Podcast stood out to me. He's asked about some of his key uncertainties, and one that he offers is:
Maybe a bigger core thing is, as long as we don’t screw things up, [if] we’re going to have a great outcome in the end versus how much you have to actively try as a world to end up in a great place. The difference between a really good future and the expected future — given that we make it to the future — are those effectively the same, or are those a factor of 10 to the 30 away from each other? I think that’s a big, big factor, because if they’re basically the same, then it’s all just about pure x-risk prevention: nothing else matters but making sure that we get there. If they’re a factor of 10 to the 30 apart, x-risk prevention is good, but it seems like maybe it’s even more important to try to see what we can do to have a great future.
What are the best available resources on comparing "improving the future conditional on avoiding x-risk" vs. "avoiding x-risk"?
I would replace "avoiding x-risk" with "avoiding stuff like extinction" in this question. SBF's usage is nonstandard: an existential catastrophe is typically defined as an event that causes us to achieve at most a small fraction of our potential. An event that leaves us with only 10^-30 of our potential is therefore an existential catastrophe. If we avoid existential catastrophe, the future is great by definition.
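To make that definitional point explicit, here's a minimal formalization (the notation is mine, not standard): write $V$ for the value the future actually realizes and $V^{\max}$ for the best value we could have achieved.

$$
\text{existential catastrophe} \;\Longleftrightarrow\; \frac{V}{V^{\max}} \le \epsilon \quad \text{for some small fraction } \epsilon
$$

Under this definition, an outcome with $V / V^{\max} = 10^{-30}$ clearly qualifies, and conditioning on "no existential catastrophe" means conditioning on $V > \epsilon \, V^{\max}$, i.e. on a future that realizes a large fraction of its potential.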
Regardless, I'm not aware of much thought on how to improve the future conditional on avoiding stuff like extinction (or similar questions, like how to improve the future conditional on achieving aligned superintelligence).
Most work on s-risks, such as that done by the Center for Reducing Suffering and the Center on Long-Term Risk, is an example of this type of research, though it is restricted to a subset of the ways the future could be improved conditional on non-extinction.