As far as I can tell, there is a strong tendency for those who are worried about AI risk to also be longtermists.
However, many elements of the philosophical case for longtermism are independent of contingent facts about what will happen with AI in the coming decades.
If we have good community epistemic health, we should expect there to be people who object to longtermism on grounds like:
- person-affecting views
- supporting a non-zero pure discount rate
but who are still just as worried about AI as those with P(doom) > 90%.
Indeed, if these beliefs really are independent, the proportion of "doomers" who hold those philosophical objections to longtermism should be just as high as the rate of such objections among those typically considered neartermists.
I'm interested in answers either of the form:
- "hello, I'm both neartermist and have high P(doom) from AI risk..."; or
- "here's some relevant data, from, say, the EA survey, or whatever"
I feel like I am a neartermist mostly because of my studies and my comparative advantage; neartermism also seems more likely to appeal to empathetically inclined people (not sure how to phrase this). However, my background in tech and my work with applied AI and geoscience have also let me recognise the dangers behind longtermist risks, which lets me approach that research and discussion with open-mindedness despite my comparative advantage in neartermist causes. One of the main attractions of EA for me originally was that the movement addresses both of my concerns.