As far as I can tell, there is a strong tendency for people who are worried about AI risk to also be longtermists.
However, many elements of the philosophical case for longtermism are independent of contingent facts about what will happen with AI in the coming decades.
If we have good community epistemic health, we should expect there to be people who object to longtermism on grounds like:
- person-affecting views
- supporting a non-zero pure discount rate
but who are still just as worried about AI as those with P(doom) > 90%.
Indeed, the proportion of "doomers" who hold these philosophical objections to longtermism should be just as high as the rate of such objections among those typically considered neartermists.
I'm interested in answers of either form:
- "hello, I'm both neartermist and have high P(doom) from AI risk..."; or
- "here's some relevant data, from, say, the EA survey, or whatever"
I'd also be pretty interested to get to know someone who thinks AI doom is inevitable and so works to reduce suffering while we still have some power to do so. I feel like some people who find AI alignment almost impossibly intractable should work on neartermist causes, but I've never seen that happen.
I think a lot of people who have been aware of AI risk for some time but nevertheless choose to work on other causes, such as climate change, may implicitly hold this view.