I've noticed that the EA community has been aggressively promoting longtermism and longtermist causes:
- The huge book tour around What We Owe the Future, which promotes longtermism itself
- There was a recent post claiming that 80k's messaging is discouraging to non-longtermists, although the author has since deleted it (Benjamin Hilton's response is preserved here). The post observed that 80k lists x-risk-related causes as "recommended" causes, while neartermist causes like global poverty and factory farming are only "sometimes recommended". Further, in 2021, 80k put together a podcast feed called Effective Altruism: An Introduction, which many commenters complained was too heavily skewed towards longtermist causes.
I used to think that longtermism is compatible with a wide range of worldviews, as these pages (1, 2) claim, so I was puzzled as to why so many people who engage with longtermism are uncomfortable with it. Sure, it's a counterintuitive worldview, but it also follows from such basic principles. I'm starting to question this, though: longtermism is very sensitive to the rate of pure time preference, and recently some philosophers have begun to argue that a nonzero rate of pure time preference can be justified (see section "3. Beyond Neutrality" here).
By contrast, x-risk as a cause area has support from a broader range of moral worldviews:
- Chapter 2 of The Precipice discusses five different moral justifications for caring about x-risks (video here).
- Carl Shulman makes a "common-sense case" for valuing x-risk reduction that doesn't depend on there being any value in the long-term future at all.
Maybe it's better to take a two-pronged approach:
- Promote x-risk reduction as a cause area that most people can agree on; and
- Promote longtermism as a novel idea in moral philosophy that some people might want to adopt, while being open about its limitations and acknowledging that our audiences might be uncomfortable with it and may have valid reasons not to accept it.
I'm very skeptical of longtermism in a practical sense, but not for the reason described here.
It's like watching people try to come up with rules about flight safety before powered flight, with some arguing about lifting gases, others worrying about the muscle strains that will occur when everyone has to turn the hand cranks that power the aircraft, and yet others concerned about the potential for an aircraft to accidentally fly too close to the sun. Even if one person started thinking about a real risk (for example, landing too hard), how would they come up with a solution without knowing what a plane looks like or what a control surface is?
I think most people who are skeptical of longtermism have reasoning somewhat similar to mine.