I recently looked through the current version of the virtual groups intro syllabus and was disappointed to see no mention of s-risks in the sections on longtermism/existential risk. I think this is a symptom of a larger problem, where “longtermism” has come to mean a very particular set of future-oriented projects (primarily extinction risk reduction) that derive from a very particular set of values (primarily classical utilitarianism). As facilitators responsible for introducing people to the ideas of EA, I think it’s important for us to diversify our readings and discussions to account for multiple reasonable starting positions. For a start, I suggest that we rework the week on existential risk to focus more generally on cause prioritization within longtermism, including readings and discussions on s-risks.
More generally, I think that we should take the threat of groupthink very seriously. The best-funded and most influential parts of the EA community have come to prioritize a particular worldview and value system that is not necessarily definitive of EA, and one that reasonable people in the community could disagree with. Throughout my experience as a student organizer, I've seen many of my peers just defer to the views and values supported by organizations like 80,000 Hours without reflecting much on their own positions, which strikes me as quite problematic given that many want to represent EA as a question, not an ideology. Failing to include a broader range of ideas and topics in introductory fellowships only exacerbates this problem of groupthink.
I’d love to talk more about how we can diversify the range of views represented to newcomers, and in particular how we can “diversify longtermism.”
I agree with 2. Not sure about 3, as I haven't reviewed the introductory fellowship in depth myself.
But on 1, I want to briefly make the case that s-risks don't have to be (or seem) much weirder than extinction risk work. I've sometimes framed it as: the future is vast, and it could be very good or very bad, so we probably want both to preserve it for the good stuff and to improve its quality. (Although perhaps CLR et al. don't actually agree with the preserving bit; they just don't vocally object to it, for coordination reasons etc.)
There are also ways it can seem less weird. E.g. you don't have to make complex arguments about ensuring that a thing which hasn't happened yet continues to happen, or about missed potential; you can just say: "here's a potential bad thing. We should stop that!" See https://forum.effectivealtruism.org/posts/seoWmmoaiXTJCiX5h/the-psychology-of-population-ethics for evidence that people, on average, weigh (future/possible) suffering more than happiness.
Also consider that one way of looking at moral circle expansion (one method of reducing s-risks) is that it's basically just what many social-justice-y types are focusing on anyway -- increasing protection and consideration of marginalised groups. It just takes it further.