Intro
Longtermism is perceived by some of its critics as having an extremist, potentially totalitarian nature, and as seeking to slow down progress in a way that contributes to societal stagnation and rigidity. Unfortunately, longtermism’s loudest critics have not been very coherent or detailed in their critiques, offering only vague paeans to techno-optimism, ominous prophecies about the antichrist, and other assorted vibes. Thus, it falls to us (in particular, to me) to steelman their critiques for them.
In this essay, I attempt:
- To place longtermism in a historical context of rising societal risk-aversion, a broad trend which has many downsides that critics rightfully complain about.
- To reconcile critics’ sense that longtermism is already an oppressive, controlling juggernaut with longtermists’ sense that they are a tiny, nascent movement directing relatively minuscule resources.
- To illuminate how extinction risks, which are especially amenable to analysis, may be crowding out other longtermist causes via the streetlight effect.
- To illustrate how the impartial-altruism language of hedonism erodes the value of individual human life, both dissolving it downwards (into atomized qualia-moments) and subsuming it upwards (for the wellbeing of the state).
- To gesture in the direction of a more empowering, dynamism-promoting, humanistic longtermism (via a tragically scattered, last-minute compilation of half-baked ideas).
Finally, note that this essay was written following a “hits-based” philosophy. Intuitively, it might seem like a hard-hitting, piping-hot take is only somewhere around 5x - 10x better in subjective quality than a lackluster dud of a critique. However, I expect that in reality, the difference in impact between my most and least successful critiques of longtermism could be very large (eg, perhaps 10,000x or more). Thus, in an attempt to maximize my odds of winning the contest (er, I mean, the wellbeing of trillions of digital minds in my future light-cone), I have written a somewhat meandering essay that jumps around between a number of related critiques. With luck, my best ideas will more than repay the time spent considering all the others.
The Long Arm of “Cultural Longtermism”
Longtermism as the latest incarnation of inexorably rising societal risk-aversion
Longtermism isn’t the first time that people have come up with the idea of a grand civilizational project to collectively mitigate catastrophic risks. When authors imagine “longtermist states” or “longtermist political philosophy”, I think it would be helpful to situate longtermism in the context of other grand international projects, both to learn from their successes and to attempt to avoid their failures.
Of course, throughout history there have been innumerable political alliance systems (like the "Metternich System" preserving conservative monarchies in the 1800s, or the competing alliance networks of the US vs the USSR during the Cold War). And there have been many individual social movements and institutions that the EA movement has compared itself to, such as the Fabian Society or Mohism.
But in some ways, I think the closest prior analogues to the grand aspirations of modern longtermism -- successful movements, organized against perceived existential risks, that fundamentally altered the course of civilization and remain defining forces in our world today -- are the following:
- Liberalism after WW2. After the horrors of the Second World War, the US and other nations wanted to make sure that such a calamity (especially the worst-case nightmare of world domination by a stable-totalitarian regime) never happened again. Their efforts took many forms -- promoting democracy, building international governance institutions like the UN, shunning far-right politics in particular, and undertaking a sort of wide-ranging project of cultural change (from Orwell’s “1984” to Disney’s “It’s a Small World After All”) to promote tolerance, nonviolence, egalitarianism, civil and human rights, anti-imperialism, etc. In pushing against ethnonationalism, the universalist values of post-WW2 liberalism offer an echo of modern longtermism’s radical “impartial altruism” extending to the far future and potentially to the far reaches of mind-space.
- The environmental and anti-nuclear movements of the 1970s. Although certainly less pervasive than post-WW2 liberalism, this essentially pro-regulatory / anti-technology movement was organized to oppose the existential risk of nuclear war and the (sometimes perceived-to-be-existential) dangers of pollution, environmental collapse, overpopulation, and so forth. It also invoked proto-longtermist language about future generations. It featured the largest one-day protest in US history (Earth Day 1970), slowed the adoption of nuclear power (eventually to a near standstill), led to the passage of extensive environmental laws in many countries, and can arguably claim credit for inspiring China’s one-child policy.
Longtermism and its discontents
Politically right-wing critics of longtermism often (somewhat rightly, IMO) see the idea of longtermism as an extension of one or both of these prior movements. This might not seem so bad -- liberalism, after all, is perhaps the most successful ideology of all time, having overseen an unprecedented era of human thriving. But critics perceive a dark side to longtermism, and to the “proto-longtermist” anti-x-risk movements that preceded it:
- Complaints about the 1970s environmental and nuclear movements are commonplace nowadays, even within the Open-Philanthropy-endorsed “abundance” movement: that the 70s’ rejection of nuclear power worsened global warming and kicked our civilization off the Henry Adams curve, that the dysfunctional legal proceduralism pioneered by Ralph Nader stifled housing and infrastructure construction and led to a more stagnant society, that the cultural changes of the 70s may have been responsible for some of the odd trend-breaks (such as in academia / science) that happened around 1971, or led to a generally anti-science / anti-progress turn in culture, et cetera.
- The usual analogy is that longtermism, too, is a movement seeking to minimize perceived x-risks by imposing stifling regulations on the most promising new technologies of the day (such as AI and synthetic biology), but (its critics imply) it might end up impeding general progress so much, and taking actions so misguided, that it ultimately proves to be a net-negative influence on civilization.
- More speculatively, critics of post-WW2 liberalism identify problems such as:
- That in the attempt to mitigate destructive great-power conflict between nations, liberalism has homogenized the world -- creating hegemonic global governance organizations and a groupthink-prone “international community” that itself could slide into stagnation / totalitarianism / collapse in a correlated, global way.
- That, as “Essays on Longtermism” contributor Richard Ngo writes on LessWrong, the cultural programme of liberalism attempted to fend off militarism and ethnonationalism by suppressing certain truths and creating a kind of “toxic egalitarianism”, which led to all sorts of derangements and distortions in society (eg, reduced meritocracy, an eroded “sense of virtue” in society, etc).
- The analogy to longtermism is either that longtermism too might seek to create a controlling network of global institutions that could slide into stagnation / totalitarianism, or that something about impartial total-hedonic-utilitarian-style altruism might have similarly severe deranging long-term effects on society.
These criticisms might seem far-fetched -- let’s be real, who could be opposed to spending just 1% more on mitigating existential risks, such an obviously important, neglected, and tractable cause? But critics perceive a creeping ratchet, a never-ending series of demands (after all, who could be opposed to spending just 1% more on nuclear-reactor safety? and then 1% more after that?) that slowly eat away at human freedom. In the limit, critics fear that longtermism would create the existential risk (in the form of permanent technological stagnation enforced by coordinated global governance) that it was originally designed to stop.
Personally, I think it’s reasonable to place the phenomenon of longtermism in a historical context where, just as individuals spend a higher and higher percentage of their income on healthcare as they get richer, societies likely tend to become more and more risk averse as they build wealth. This has many positive effects (liberalism, environmental conservation, and mitigation of existential risk are good things!), and is probably positive on net, but also has some significant downsides. One might call this trend a “rising tide of risk aversion” or perhaps “cultural longtermism”. Critics are opposed to (parts of) this rising tide, such as increasingly strict regulations across many areas of technology and society, and they see longtermism as a force that, whatever its specific effects on appropriately minimizing legitimate risks, also has the general effect of encouraging / legitimizing broader cultural risk aversion and its accompanying risks of stagnation and globalized centralization.
I think that individual longtermists should be more conscious of their place in this broader societal trend. In some places, increased societal risk aversion is exactly the right move. But in other places, as critics contend, the rising tide of “cultural longtermism” could be causing huge distortions and derangements that hamper civilizational progress, and might even amount to a substantial existential risk in itself. Longtermists should take care to balance these risks! More specifically, although “swimming with the tide” often creates good arguments for the tractability / political feasibility of one specific cause versus another (like x-risk mitigation versus research into experimental transhumanist goods), “swimming against” the cultural tide is also an underappreciated argument for a cause’s neglectedness and importance.
Are we already living in a world of oppressively burdensome longtermism?
In Chapter 19 of Essays on Longtermism, Owen Cotton-Barratt and Rose Hadshar imagine a spectrum from today’s world (where, they imply, only a very small percentage of GDP is spent on longtermist goods), to a “partially longtermist society” that devotes 2% - 10% of its resources towards these goals, to an “implausibly strict longtermist society” and/or “strict longtermist state” whose ruling class engineers all of society to maximize the amount of resources devoted to longtermism (potentially directing “somewhere between 10% and 85%” of its resources to that end). In Chapter 18, Hilary Greaves & Christian Tarsney also contrast a “minimal” 2% allocation with a more extreme scenario where 71% of GDP is directed to longtermist goods. Both essays conclude that their most extreme scenarios are politically infeasible -- the population of such a “strict longtermist state” would never accept such extreme sacrifices for the sake of the distant future.
But are such extreme sacrifices really so implausible? Indeed, might we already be making such sacrifices without even noticing?
The Strict Culturally-Longtermist State
It’s true that, in the grand scheme of things, the world isn’t spending much on AI alignment research or pandemic prevention. But how much are we already paying for risk-mitigation policies across society?
- The FDA and other drug regulators are infamously conservative, slowing the pace of medical innovation in myriad ways. How much would human lifespans and healthspans have improved if regulators had maintained a less risk-averse policy over the previous 50 years, allowing the benefits of medical innovations like statins, GLP-1 agonists, mRNA vaccines, immunotherapies for cancer, and CRISPR-based gene therapies to compound more quickly?
- The failed transition to nuclear energy arguably mired the modern world in a dire state of “energy poverty”, with less than half the per-capita energy that would have been available to every person in a whiz-bang Jetsons-esque world where the Henry Adams curve had stayed on track.
- Workplace-safety regulations and national healthcare systems generally use an implausibly high value of statistical life (such as $10 million -- more than most people could possibly earn in a lifetime, and a rate that values individual years of life much more highly than what most people would “sell them” for if given the opportunity; see the rough arithmetic after this list), implying that humanity incurs broad economic losses from its overly safetyist approach in those domains.
- YIMBYs will be quick to tell you about the massive economic losses created by our collective unwillingness to build housing, especially in the most productive cities -- one estimate states that onerous zoning laws may have cost the USA 36% of GDP over the past 50 years.
- Research into areas like human genetic enhancement and brain-computer interfaces is taboo because it’s perceived to be too closely related to certain forms of risky mad science or evil ideology. Experiments with alternative forms of government, such as charter cities or prediction-market based governance, are similarly viewed with suspicion and often strictly regulated / banned.
- To the extent that a culture of risk aversion results in deeper cultural biases (such as Ngo’s post-WW2 “toxic egalitarianism” eroding important foundational values, or societal skepticism of science or entrepreneurship reducing long-run economic growth, or other such indirect effects), there could be very large deleterious effects that are nonetheless difficult to concretely identify.
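To make the value-of-statistical-life point above concrete, here is a rough back-of-the-envelope sketch. The 40-year horizon and the per-QALY willingness-to-pay range are illustrative assumptions of mine, not figures from this essay or from any particular regulation:

```latex
% Rough sketch only; the 40-year horizon and the $50k--$150k/QALY range are
% illustrative assumptions, not figures from this essay or any regulation.
\[
\underbrace{\frac{\$10{,}000{,}000}{40~\text{remaining life-years (assumed)}}}_{\text{typical regulatory VSL, spread over a lifetime}}
\;\approx\; \$250{,}000~\text{per life-year}
\;\gg\;
\underbrace{\$50{,}000\text{--}\$150{,}000~\text{per QALY}}_{\text{common health-economics thresholds}}
\]
```

If something like this arithmetic is right, safety regulations implicitly price a year of life at several times what health systems (or most individuals) would actually pay for one, which is the sense in which the “overly safetyist” losses above arise.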
Tally all these up, and, relative to an idealized pro-growth counterfactual world, we might already be living in a world where we’re sacrificing 50%+ of our counterfactual wealth at the altar of societal risk-aversion!
Admittedly, no modern longtermist ever asked for clinical trials to be made more onerous, or for housing to become more difficult to build. Nor do most of these costs even seem plausibly related to reducing the originally-targeted risks of nuclear war, environmental collapse, and right-wing militarism! But (as the proponents of public-choice theory endlessly remind us) such unintended consequences are a fact of life when trying to make policy in a complex world full of uncertainty and competing influences!
Invisible Graveyards Rule Everything Around Me
Aside from the fact that the various economic & health losses above are all unintended consequences, they have one other thing in common. The losses themselves are largely hidden; they are invisible graveyards.
In Chapters 18 and 19 of “Essays on Longtermism”, the idea of a “strict longtermist state” is dismissed because the population would refuse to make such direct and obvious sacrifices as being overworked and seeing 70% - 80% of their income taxed and directed towards distant longtermist goals. But this strikes me as an unrealistic and naive portrayal of how a strict longtermist state would really be architected. Just as tax increases are more politically palatable when levied indirectly (as tariffs, corporate / VAT taxes, seigniorage, etc, rather than directly on citizens as income / payroll / wealth / sales taxes), I imagine that the biggest costs imposed by a strict longtermist state on its citizens would be levied indirectly -- more and more “invisible graveyards” of foregone progress, foregone growth, restricted technologies, curtailed human freedom, and so forth.
This argument makes a strict longtermist state look more politically feasible than those chapters suggest, and thus lends legitimacy to fears that such a state may arise.
The Elusive Counterfactual
On the other hand! So far I’ve been comparing the “invisible graveyards” created by risk-aversion to an idealized future where unfettered technological progress leads to a whiz-bang utopia of abundance and human liberty. But perhaps if the forces of liberalism and anti-nuclear activism had been weaker, we would not in fact be sitting in a nuclear-powered space station around the moon right now. Maybe we would be sitting in an irradiated crater in the aftermath of a global thermonuclear war.
Obviously a devastated wasteland would have very low GDP, flipping the sign on everything I said about the costs of societal risk-aversion.
The difficulty of figuring out the right counterfactual, added on top of the “base rate tennis” of trying to figure out which unintended costs should count as consequences of longtermist-style thinking, and the fact (mentioned by Greaves & Tarsney) that many longtermist interventions have “co-benefits” for present generations, makes it impossible to precisely work out how much humanity is currently sacrificing for (or already profiting from) long-term x-risk reduction efforts. So, while the longtermist can often fairly plead “look at this specific project -- it’s clearly underfunded, clearly worthwhile, and unlikely to have dramatic unintended side-effects!”, and may further argue that the overall cost-benefit results of societal risk aversion have been positive, critics may just as reasonably argue that the overall effects have been negative. Adjudicating that debate is a question that can only be left to future generations to answer.
The Streetlight Effect Warps Modern Longtermism Towards an Exclusive Focus On X-risk Mitigation
“For tractability, let us restrict the question to investment in existential safety” -- Greaves & Tarsney, Chapter 18
“Enough with this slander by association!!”, I hear you cry out. “Begone with your low-decoupling insistence that the important work of mitigating narrow, specific x-risks necessarily requires smothering society under a blanket of across-the-board safetyism! Where is your critique of what we, actual practicing longtermists, are actually doing today?!”
Indeed, I hear you. So -- while I think the cultural-longtermism critique has real merit -- let’s move on.
There are lots of reasons why x-risk mitigation is the centerpiece of applied longtermism today. Here are a few:
- It legitimately seems to be very important.
- It is a relatively popular and intuitive message among ordinary people, governments, etc (contrast this with something bizarre and controversial, like mind uploading research), thus easier to build momentum and consensus around, making it perhaps more tractable than other potential longtermist interventions. (And, as mentioned, risk-reduction also dovetails with the general trend towards increased safetyism.)
- It is easier to reason about x-risk mitigation than about other kinds of interventions -- for instance, as Greaves and Tarsney write, “Risks of human extinction and other ‘existential catastrophes’ create an exception to these worries about intractability, since each such risk comes with a strong and clear ‘lock-in’ mechanism.”
Of course, other things equal, it is better to work on problems that are amenable to rigorous analysis than problems where clear analysis is harder. But other things aren’t equal, and at this point, I fear that x-risk’s amenability-to-analysis is fueling an increasingly unhealthy lopsidedness within longtermism.
EAs are all too familiar with how streetlight effects can distort global priorities
Here are some facts which are sadly familiar to many longtermists:
- While climate change has received trillions of dollars of support, and is the focus of significant activist campaigns, international coordination efforts, etc, AI safety research has perhaps only around 1000 people working on it full-time, with a budget in the billions rather than trillions.
- Within the already-neglected pandemic-prevention space, most existing efforts aren’t focused on the worst-case pandemics, but rather ordinary outbreaks like a mild potential monkeypox epidemic or a new zoonotic-spillover respiratory virus.
Some of the discrepancy here comes from value differences between ordinary people and longtermists (with longtermists placing special emphasis on extinction risks), but even from a non-longtermist perspective, these seem like severe misallocations of societal resources. Why does this happen?
I think what’s going on is that a variety of “streetlight effects” steer effort towards predictable, known, quantifiable threats and away from more “speculative” (though larger) risks:
- Global warming is simple physics; it’s a lot easier to study than complex AI takeover scenarios featuring multiple layers of deception and tricky notions of “alignment” and “agency”.
- It’s thus easier for society to make collective intellectual progress, and to realize when a newly-reached conclusion is correct.
- Easier analysis makes it easier to definitively prove that the problem is real.
- It’s possible to produce precise forecasts of likely future impacts.
- All this makes it easier to convince individual people about the problem.
- And easier to build political consensus for action within any given group.
- The clear, simple goal (reduce CO2 ppm) makes it easier to think up solutions and quantify their potential impact (compared to trying to evaluate the relative merits of AI safety work).
All of those bullet points are essentially tractability arguments for why you should work on solving climate change instead of AI risk. Why bother with AI at all; it’s so intractable!
Yet despite all this, I believe AI risk is much more valuable to work on, on grounds of importance and neglectedness.
Nevertheless, the streetlight effects are calling from inside the house
I believe there’s a similar dynamic going on within longtermism:
- Extinction risks are relatively amenable to analysis, because their importance is basically just a function of (how likely they are to happen) x (how likely they are to kill everyone) x (100% of the value of our lost long-term potential). (See the sketch after this list.)
- By contrast, evaluating the impact of interventions aimed at “flourishing futures” seems harder for a variety of reasons -- not just tractability concerns about “lock-in” versus “wash-out”, but also because, compared to slapping a probability-of-extinction discount on the entire future, it’s trickier to evaluate how much of a multiplier such flourishingness interventions (such as welfare-enhancing transhumanist goods) would apply to the quality of that future.
- It’s similarly tricky to assess how indirect improvements to social systems & governance & culture (which could help us both avoid x-risks and steer towards utopia) ultimately filter through into outcomes.
- Even classic x-risks which have clear “lock-in” mechanisms but which aren’t driven by human extinction (such as stable totalitarianism and various other “s-risks”) seem disadvantaged by the fact that they flow through hard-to-analyze social / political pathways rather than the more physics-y dynamics of something like pandemic or nuclear risk.
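To spell out the contrast in the first two bullets above, here is a rough sketch of the two expected-value expressions. The notation is my own shorthand rather than anything from “Essays on Longtermism”, and the quality-multiplier and persistence terms in the second line are hypothetical placeholders meant only to show where the extra uncertainty lives:

```latex
% Sketch only; symbols are my own shorthand, not from "Essays on Longtermism".
% Extinction risk: importance is roughly a product of three estimable factors.
\[
\text{Importance}_{\text{extinction}}
\approx \Pr(\text{catastrophe occurs})
\times \Pr(\text{it kills everyone} \mid \text{occurs})
\times V_{\text{long-term potential}}
\]
% "Flourishing futures": also needs a quality multiplier m and a persistence
% probability q (does the improvement survive "wash-out"?), both of which are
% much harder to estimate than a single probability-of-extinction discount.
\[
\text{Importance}_{\text{flourishing}}
\approx \Pr(\text{survival})
\times q_{\text{persistence}}
\times (m_{\text{quality}} - 1)
\times V_{\text{long-term potential}}
\]
```

The first expression bottoms out in a single probability applied to the whole future; the second requires defending two additional, much squishier parameters, which is exactly the asymmetry that the streetlight effect exploits.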
It’s not that nobody has ever thought of these risks, or that nobody is working on them. Many people are; that's great! But the fact that they’re less amenable to analysis still leaves them systematically neglected.
In my view, these unbalanced priorities don’t just mean that longtermism risks slightly failing to do quite as much good as an idealized version of itself could do. In the same way that slight changes in differential technological progress can possibly lead to quite different path-dependent futures, I think that individually small streetlight effects can compound (as in the global-warming-versus-AI case) into quite large differences at the group level. If we badly misprioritize flourishing futures, indirect interventions, and non-extinction x-risks, we risk leaving incredibly valuable actions on the table.
It’s also possible that unbalanced priorities in longtermism could risk actively causing harm.
- With our distorted view of the longtermist landscape, we might also be more likely to take misguided actions that create large unintended costs, just as the environmental / anti-nuclear movement of the 1970s once did.
- (This one is a little galaxy-brained, but stick with me…) If longtermist goals of x-risk mitigation tend to unwittingly contribute to a larger societal trend of increasing global governance and regulation, and if that trend itself amounts to a substantial stable-totalitarianism risk, this could be trouble. Longtermism would effectively be sailing between a Scylla and Charybdis of mitigating extinction risk versus totalitarian risk. So, if longtermism is ALSO systematically underrating stable totalitarianism risks (because they’re harder to reason about), then we risk misjudging the situation and steering too close to the Charybdis of totalitarian lock-in.
Total Hedonic Utilitarianism and the Denial of Death
“Sure, he talks a good game about freedom when out of power, but once he’s in – bam! Everyone’s enslaved in the human-flourishing mines.” -- Slate Star Codex blog tagline
Yes, yes, of course maximizing utilitarianism gives off totalitarian vibes
If I were a total hack, here’s an easy critique I could make, an all-too-common extension of the cultural-longtermist critique:
“Wow, get this -- Greaves & Tarsney contrast ‘minimal’ versus ‘expansive’ visions of longtermism. But even their ‘minimal’ version isn’t nearly minimal enough to avoid being oppressively burdensome and creepily controlling! Look at the sweeping aims that even the seemingly narrow goal of x-risk-mitigation requires!”
- “they mention that we should ‘scale up efforts to avoid great power conflict’ -- aka subordinate independent nation states to an oppressive one-world government, perhaps ruled by the antichrist??”
- “they want ‘legal regulation of dangerous technologies’ -- but don’t worry, they ONLY want to regulate all of artificial intelligence and all of biotech, which are like the #1 and #2 most important technologies of our time!”
- “then later they propose indirect interventions like ‘improved education’ (re-education??) to inculcate people with greater willingness to spend on climate change, nuclear nonproliferation, pandemic preparedness, etc -- sounds like propaganda to me. And to ‘improve the talent pool of future… bureaucrats’ -- long march through the institutions much??”
- “In the limit, this is basically a totalizing desire to control all of society! Hardly ‘minimal’!!”
But actually, Greaves & Tarsney make the fair point that it isn’t longtermism in particular that creates these totalitarian vibes. The real problem is just the fundamental nature of consequentialism. Maximizing consequentialism is enough to make anything seem totalitarian! As they write:
“If the implications of an axiological longtermist thesis together with maximizing consequentialism strike one as overly demanding, then (absent some other reason for doubting the axiological longtermist claim) the natural response is to reject maximizing consequentialism, not to revise one’s axiology or one’s empirical beliefs.”
(For more on the perils of consequentialism, see Joe Carlsmith’s critique of Yudkowsky’s concept of the fragility of value, and Holden Karnofsky’s post about EA & maximization.)
Of course, one response to this defense would be to note that in practice, consequentialism seems like a pretty large part of longtermism, so the question of whether totalitarian vibes are coming from the consequentialist part of longtermism, or the other parts of longtermism, seems irrelevant when all the parts usually come bundled together.
But I actually believe there are notable totalitarian, anti-human vibes that come from elsewhere, and deserve special consideration.
The “total hedonic” part of total hedonic utilitarianism creates its own, separate problems.
If you talk to longtermists, most of them will say that they’re not strict total hedonic utilitarians. Am I then about to attack a strawman?
I don’t think so -- despite everyone disavowing it, total hedonic utilitarianism seems to pop up all the time in EA analyses (including not just longtermism, but also animal welfare, global development, and more), often as an unstated background assumption. I suspect this happens because total hedonic utilitarianism makes it easier to analyze problems, by providing a starting framework and a simplifying assumption. The streetlight effect strikes again!
Now, of course, “all models are wrong, some models are useful”. Simplifying assumptions are often necessary! But the ubiquitous background assumption of total hedonic utilitarianism is, IMO, corrosive to individual liberty and human empowerment.
Hedonic utilitarianism dissolves the value of the unique human individual into atomized, interchangeable qualia-moments
In the standard way of longtermist reckoning, 10 people living 40 happy years is the same number of QALYs, all else equal, as 5 people living 80 happy years (400 QALYs either way).
But personally, I would like to live 80 years and not 40 years!
One can of course stipulate that this is already accounted for in the calculation -- the disutility of my outrage at being cut down in my prime could be exactly offset, in the thought experiment, via the provision of other goods. Or perhaps there are other ways of trying to resolve the dilemma. (For instance, one could say that nonexistent people would welcome even a short existence, and that this balances out the preference of existing people for longer lifespans -- but this strikes me as a somewhat absurd perspective that few hold; most humans value future/potential generations but also recognize that actually-existing people have more of a sort of property-rights claim on existence than merely potential people.)
But regardless of the defensive gymnastics that can be performed here, in practice, foregrounding interchangeable units of qualia means dissolving value downwards -- from the unit of the unique human individual to infinitesimal, atom-like moments of qualia-experience (akin to what some call “empty individualism”).
Furthermore, the vast diversity of potential qualia-experiences (the landscape of which we can hardly begin to imagine) is then, for analytical convenience, projected down onto a one-dimensional spectrum from positive to negative affective valence. This second simplifying step (which, again, few would completely endorse, but many implicitly rely on) further undermines individuality by compressing the complex landscape of human aesthetic values (see eg “The Nietzschean Challenge to Effective Altruism”, or simply introspect on your own motivations, values, and feelings) into a homogeneous hedonic mush more amenable to mathematical reasoning.
Thus, in the standard version of total hedonic utilitarianism, there is no difference between individuals, no inherent notion of fundamental human rights or freedoms (perhaps instead you should content yourself with a kind of standard UBI of positively-valenced experience?), no distinguishing characteristics whatsoever -- just a kind of Rawlsian tendency towards communistic redistribution rather than traditional property-ownership and inequality. It is only a simplifying assumption, of course; few would mistake it for ultimate reality. But nevertheless, the structure of the model threatens to reach out (in the form of flawed / misinterpreted analyses) and begin to instantiate its troublesome biases in reality.
Hedonic utilitarianism subsumes the value of the human individual into the glory of the immortal leviathan
Counterintuitively, while the deaths of individuals become irrelevant, the survival of overall civilization (ie, of the state) becomes paramount in the longtermist framework. This is the whole logic of mitigating x-risks -- the collective life of human civilization in aggregate is the highest value. Consequently, power and moral value are also agglomerated upwards, from the individual to the state, for whom individuals are like mere cells making up its immortal body.
A critic would point out that this subsumption of the individual to the needs of the state is the hallmark of totalitarian communism and fascism, and thus perhaps flags longtermism as intrinsically suspicious, despite its adherents’ good intentions and professed objective of maximizing human thriving.
An interesting illustration of this point might be the notably lukewarm attitude of longtermism towards the idea of slowing human aging. It’s odd, considering that Bostrom’s “Fable of the Dragon-Tyrant” is probably his most-read work (which other of his essays has a lovingly-animated video adaptation with millions of views?), and Yudkowsky’s stridently anti-death HPMOR and Sequences were a formative experience for many longtermists. Within “Essays on Longtermism”, Kevin Kuruc and David Manley’s Chapter 24 does a good job recounting the standard rationalist case for why the badness of death is underrated, and pairs this with some informative calculations about the large economic benefits that increased healthy lifespans would bring. Yet anti-aging often seems to be nowhere on the wider longtermist or EA list of priorities -- nothing like it appears among 80,000 Hours’ problem profiles. One gets the sense that many longtermists are privately enthusiastic about the development of advanced medical technology as one of the most prized “longtermist goods”, alongside various other, even more exotic proposals for human enhancement. But people don’t talk about this publicly.
- In part, this is because it would be politically counterproductive. In wider society, the idea of life extension is a kind of taboo -- perhaps born of some anti-science, anti-progress attitude embedded in our culture?
- And in part, there are legitimate concerns about tractability and neglectedness of any currently-proposed anti-aging interventions.
- But it also feels relevant that the structure of total hedonic utilitarianism renders the problem of death (ie, the death of individuals during the ordinary course of events, rather than catastrophes that threaten the health of the state) largely invisible to standard formulations of longtermism.
In his 1973 book The Denial of Death, anthropologist Ernest Becker claims that "human civilization is a defense mechanism against the knowledge of our mortality" and that people manage their "death anxiety" by pouring their efforts into an "immortal project" which "enables the individual to imagine at least some vestige of meaning continuing beyond their own lifespan". In this secular modern age, when heroic cultural narratives and religious delusions no longer do the job, and when building literal giant pyramids in the desert for the glorification of the state has fallen out of style, what better immortal project than "longtermism" with which to harness individuals' energy? What could provide better relief from men’s death-anxiety than the promise of binding their mortal efforts to the sublime eternity of the far-distant galaxies?
Towards A More Human Longtermism?
Surely not everything can be totalitarian…
By now I have accused longtermism of being totalitarian in about twelve different ways. But this is absurd; longtermism is one of the most intelligent and well-meaning movements out there, dear to my heart, repository of many of my hopes for a brighter future! And, as noted earlier, some of these totalitarian vibes seem intrinsic to the notion of consequentialism, which in turn seems intrinsic to the notion of trying to do practically anything in life. What is going on??
Longtermism is indeed helpfully and appropriately wary of "lock-in" and hopeful for a future of human flourishing, yet our frameworks nudge us toward assuming the fragility of value, taking the perspective of abstract social controllers, and performing atomized analyses that disregard human individuality.
I think one problem (not just for longtermism, but for all of society) is that our notions of things like “empowerment”, “liberty”, “democratic-ness”, “legitimacy”, and so forth are just too confused and contradictory to sort this all out, similar to how AI safety researchers often complain that our concepts of “alignment” or “agency” are confused and ill-defined.
Some Confused Hopes for a Flourishing Human Future
Inspired by the analogous dilemma in AI safety, here are several possible approaches to dealing with this problem:
- “Agent foundations, but for understanding human freedom and responsible governance”: probably not super tractable, but idk, maybe it’ll come up with something -- you guys are the philosophers here.
- Ignore the fact that we can’t pin down exact definitions, just work on good stuff that seems like it’ll help improve freedom, human agency, and democratic-ness, and hope for the best -- this has worked alright historically, and it will probably keep being a good strategy.
- Hope that future AI technology will be able to solve philosophy and clarify this stuff, leading to huge advances in social technology, self-understanding, and human thriving. Unfortunately, this liberty stuff is probably most valuable while we are on the way to technological maturity, navigating all these various dangers. So waiting until the end to solve all the problems kinda misses the point. But maybe it will be possible to make progress as we go along, by putting effort into developing progressively better sociological concepts and social technology as AI improves.
This is perhaps too fanciful a metaphor, but it may be worth bearing in mind some sociological analogue of “embedded agency”. That is, it would perhaps be wrong to think that a “strict longtermist state” could dispassionately shepherd its package of “values” (x-risk mitigation, etc) into the future in a way disconnected from its functioning in the present. Many of society’s most important values are tied up in the design & functioning of the state itself; the workings of government and other institutions, of culture and the economy, are both an expression of a society’s values and a method of ensuring the stability of those values. Not to sound too hippie, but rather than imagining a “strict longtermist state” seizing control of the future and trying to white-knuckle its way toward utopian “lock-in”, an “embedded agency” perspective might help us conceptualize the path to utopia more as a dynamic process of co-evolution and interaction between different societal forces.
Another potential reframing, especially when considering various proposed governance interventions (such as those discussed in Chapters 26, 27, and 30), could be to move away from visions of political “control” over important outcomes (which seems almost self-defeating as a concept -- if the control is still in the realm of the political, it is still under contention, thus not fully under control…), and instead seek something more like “developing social technology for removing certain issues from the realm of the political”. For example:
- Prediction markets might be able to effectively “remove policy-forecasting questions from the realm of the political”, reducing politics to a discussion of which values to prioritize and which metrics best capture them.
- More prosaically, independent central banks (and even more so, fully algorithmic approaches like NGDP level targeting) are an attempt to partially remove the pursuit of optimal monetary policy from the realm of the political.
- Similarly, automatic mechanisms like pandemic bonds, various AI-liability schemes, carbon taxation, etc, could serve to reduce existential risks fairly “automatically”, in a way that preserves rather than suppresses decentralized human agency.
- More generally, developing systems of “credible neutrality” (like perhaps the Twitter community-notes algorithm, or Wikipedia on a good day, or various crypto projects?) seems like a path to reduce the scope of the political, yet one which doesn’t fully map onto the usual notion of totalitarian-style “control” or illegitimate consequentialist-style “lock-in”.
As stated at the beginning of this essay, I also think it would be worthwhile for longtermism to be particularly on-guard about the potential downsides of a societal trend towards increasing stasis and safetyism. Some longtermist interventions, including x-risk mitigation, are mostly “swimming with the tide” of increasing risk-aversion. But we should be careful not to mistake causes that “swim against the tide” (with consequently reduced tractability, but increased neglectedness and perhaps also importance) as a knockdown argument against those causes.
As H Orri Stefansson writes in Chapter 28, “Longtermism and Social Risk-Taking”, longtermism magnifies the value-of-information that can be gathered from policy experiments, and this consideration (other things equal) ought to increase the amount of policy experimentation that society undertakes. I think this vision, of pursuing value-of-information through experimentation, competition, diversity, and dynamism in society, offers a helpful lens with which to counterbalance the default, centralizing view of the social “planner” referenced so often in the chapters of “Essays on Longtermism”. (Wherein something like a network of independent charter cities might be viewed suspiciously, almost as a potential “cancer” on the international community’s ability to coordinate.)
In addition to these broad cultural vibes, there are also likely a wide variety of direct, specific interventions against stable totalitarianism that longtermism could explore more thoroughly. Attempts to limit the impact of AI-enabled propaganda, intelligence-gathering, and censorship all seem like fertile ground for promising interventions. Forward-looking attempts to map out the offense/defense balance of various specific totalitarianism-enabling technologies (perhaps super-persuasion AI, advanced lie-detection technology, or MRI-based mind-reading tech), and then to establish norms and regulations or apply other d/acc strategies to them, could also be valuable.