Avoiding risks of futures containing astronomical amounts of suffering (s-risks) is a plausible priority for many value systems, and particularly for suffering-focused views. But given the highly abstract and often speculative nature of such future scenarios, what can we actually do now to reduce s-risks?
In this post, I’ll give an overview of the priority areas that have been identified in suffering-focused cause prioritisation research to date. Of course, this is subject to great uncertainty, and the most effective ways to reduce s-risks may turn out to be quite different from the interventions outlined below.
A comprehensive evaluation of each of the main priority areas is beyond the scope of this post, but in general, I have included interventions that seem sufficiently promising in terms of importance, tractability, and neglectedness. I have excluded candidate interventions whose targets are too difficult to influence, or that are likely to backfire by provoking controversy or backlash (e.g. trying to stop technological progress altogether). In reducing s-risks, we should seek common ground with other value systems; accordingly, many of the following interventions are worthwhile from a wide range of perspectives.
Thanks for the comment; it raises a very important point.
I am indeed fairly optimistic that thoughtful forms of MCE (moral circle expansion) are positive with respect to s-risks, although the qualifier "in the right way" should be taken very seriously: I'm much less sure whether, say, funding PETA is positive. I also prefer to think about how MCE could be made robustly positive, and to distinguish between different possible forms of it, rather than trying to make a generalised statement for or against MCE.
However, this is not a view I hold very strongly (despite having thought a lot about it), in light of great uncertainty and some degree of peer disagreement (other researchers are less sanguine about MCE).