This part of Sam Bankman-Fried's interview on the 80K Podcast stood out to me. He's asked about some of his key uncertainties, and one that he offers is:
Maybe a bigger core thing is, as long as we don’t screw things up, [if] we’re going to have a great outcome in the end versus how much you have to actively try as a world to end up in a great place. The difference between a really good future and the expected future — given that we make it to the future — are those effectively the same, or are those a factor of 10 to the 30 away from each other? I think that’s a big, big factor, because if they’re basically the same, then it’s all just about pure x-risk prevention: nothing else matters but making sure that we get there. If they’re a factor of 10 to the 30 apart, x-risk prevention is good, but it seems like maybe it’s even more important to try to see what we can do to have a great future.
What are the best available resources on comparing "improving the future conditional on avoiding x-risk" vs. "avoiding x-risk"?
Most work on s-risks, such as that done by the Center for Reducing Suffering and the Center on Long-Term Risk, is an example of this type of research, though it covers only a subset of the ways the future could be improved conditional on non-extinction.
(Note that the case for pure x-risk prevention only follows if you assume that humanity has the potential for greatness.)