Avoiding futures with astronomical amounts of suffering (s-risks) is a plausible priority from the perspective of many value systems, particularly for suffering-focused views. But given the highly abstract and often speculative nature of such future scenarios, what can we actually do now to reduce s-risks?
In this post, I’ll give an overview of the priority areas identified in suffering-focused cause prioritisation research to date. This is, of course, subject to great uncertainty, and the most effective ways to reduce s-risks may turn out to be quite different from the interventions outlined below.
A comprehensive evaluation of each of the main priority areas is beyond the scope of this post, but in general, I have included interventions that seem sufficiently promising in terms of importance, tractability, and neglectedness. I have excluded candidate interventions that are too difficult to influence or that are likely to backfire by provoking great controversy or backlash (e.g. trying to halt technological progress altogether). In working to reduce s-risks, we should seek common ground with other value systems; accordingly, many of the following interventions are worthwhile from many perspectives.