A short and arguably unfinished blog post that I'm sharing as part of EA strategy fortnight. There's probably a lot more to say about this, but I've sat on this draft for a few months and don't expect to have time to develop the argument much further.
-
I understand longtermism to be the claim that positively shaping the long-term future is a moral priority. The argument for longtermism goes:
- The future could be extremely large;
- The beings who will inhabit that future matter, morally;
- There are things we can do to improve the lives of those beings (one of which is reducing existential risk);
- Therefore, positively shaping the long-term future should be a moral priority.
However, I have one core worry about longtermism, and it's this: people (reasonably) see its adherents as power-seeking. I think this worry extends somewhat to broad existential risk reduction work, though to a much lesser degree.

Arguments for longtermism tell us something important and surprising: that there is an extremely large thing that people aren't paying attention to. That thing is the long-term future. In some ways, it's odd that we have to draw attention to this extremely large thing. Everyone believes the future will exist, and most people don't expect the world to end any time soon.[1]
Perhaps what longtermism introduces to most people is actually premises 2 and 3 (above) — that we might have some reason to take the future seriously, morally, and that we can shape it.
In any case, longtermism seems to point to something that people vaguely know about or even agree with already and then say that we have reason to try and influence that thing.
This would all be fine if everyone felt like they were on the same team. That, when longtermists say “we should try and influence the long-term future”, everyone listening sees themselves as part of that “we”.
This doesn’t seem to be what’s happening. For whatever reason, when people hear longtermists say “we should try and influence the long-term future”, they hear the “we” as just the longtermists.[2]
This is worrying to them. It sounds like this small group of people making this clever argument will take control of this extremely big thing that no one thought you could (or should) control.
The only thing that could make this worse would be if this small group of people seemed undeserving of yet more power and influence: relatively wealthy[3], well-educated white men, for instance. Unfortunately, many people making this argument are relatively wealthy, well-educated white men (including me).
To be clear, I think longtermists do not view accruing power as a core goal or as an implication of longtermism.[4] Importantly, when longtermists say “we should try and influence the long-term future”, I think they/we really mean everyone.[5]
Ironically, because no one else is paying attention to the extremely big thing, it seems that longtermists will have to be the first ones to pay attention to it.
—
I don’t have much in the way of a solution here. I mostly wanted to point to this worry and spell it out more clearly so that those of us making the case for longtermism can at least be aware of this potential, unfortunate misreading of the idea.
- ^
58% of US adults do not think we are living in “the end times”. Not super reassuring.
- ^
- ^
As much as they try and make themselves less wealthy by donating a large portion of their income to charity.
- ^
I think you could make the case that this is often an indirect goal, such as getting the ear of important policymakers.
- ^
Except, perhaps, dictators and other ne'er-do-wells.
-
This is a very interesting and provocative idea! Thank you for sharing.
One thought: is it possible that the concern relates to innumeracy / anti-science thinking rather than (or in addition to) doubts about any specific group (e.g. white men)?
As in: could (part of) the concern be: "Here is a group of (very nerdy?) people trying to force us all to take critical decisions that sometimes seem bizarre, based on complex, abstract, quantitative arguments that only they understand. I'm not sure I trust them or their motives."?
IMHO we underestimate just how abstract some of the arguments in favour of longtermism can seem to the 99% of the population who have not studied it, and when this is combined with recommendations that seem quite dramatic, it isn't hard to see why people would express doubt and question the motives behind them.
Remember, we live in a society in which many people think climate scientists are just inventing climate change so they can get more power, and in which medical experts trying to use data to recommend strategies to fight COVID frequently had their motives questioned.
Is there a chance that, despite all the healthy disagreement within the EA community, to the external world we seem like an echo-chamber, living in an (imaginary?) world in which math and science and logic can justify any position?
-
I don't think most people feel that way. People learn to be suspicious when someone throws lots of numbers and new ideas at them and then recommends something that doesn't seem to make sense. They think of suave car salesmen.
If you say "give me $100 and I can save three children by buying them mosquito nets," that is tangible and simple. If you say "we should devote X% of our GDP to preventing very low-risk scenarios which could cost trillions of lives," you've lost most people. If you then tell them that you want some of their money or resources to be impacted by this, they will question your motives. The specific details of the longtermism argument may not even be relevant to their suspicion.