The central thesis of Essays on Longtermism: Present Action for the Distant Future rests on a compelling moral intuition: that the welfare of future generations matters enormously, and that small changes today can yield astronomical consequences millennia hence. Yet as I examine the collection's treatment of technological innovation, particularly artificial intelligence and biotechnology, I find myself questioning whether the authors have adequately grappled with technology's fundamentally disruptive relationship to longtermist planning itself.
The volume's contributors, while acknowledging technology's transformative potential, seem to approach it as another variable in their utilitarian calculus rather than recognizing it as a force that may render traditional longtermist frameworks obsolete. This oversight, I argue, represents a critical blind spot that undermines many of the book's core recommendations.
The Technological Acceleration Problem
Hilary Greaves and William MacAskill's opening essay establishes longtermism's foundational premise: that we can meaningfully optimize for outcomes across vast temporal scales. They write with confidence about interventions that might benefit humanity thousands of years hence, yet their analysis curiously sidesteps a fundamental question raised by our current technological trajectory: whether the concept of "the long term" retains coherent meaning in an era of exponential change.
Consider artificial intelligence development. The essays treat AI risk as a discrete problem to be managed through careful governance and safety research. But this framing misses something crucial: AI represents not merely a risk to be mitigated, but a potential phase transition that could render all current longtermist calculations moot. If artificial general intelligence emerges within decades (as many researchers now consider plausible), then our entire framework for thinking about human flourishing across millennia requires fundamental revision.
The authors' confidence in their ability to chart optimal courses across vast timescales appears increasingly suspect when confronted with the reality of technological acceleration. We cannot meaningfully optimize for outcomes in the year 3000 if we lack basic understanding of what intelligence, consciousness, or even humanity itself might look like by 2050. The exponential curves that govern technological progress make a mockery of linear extrapolation.
Biotechnology and the Mutability of Human Nature
The volume's treatment of biotechnology reveals similar conceptual limitations. Contributors discuss genetic enhancement and life extension as factors that might influence future welfare calculations, but they fail to grapple with how these technologies fundamentally challenge longtermism's anthropocentric assumptions.
Traditional longtermist thinking presupposes relatively stable human values and capabilities across time. We're meant to care about future humans because they share essential characteristics with present humans: consciousness, the capacity for suffering and flourishing, moral agency. But biotechnology promises to render this assumption invalid within decades, not millennia.
CRISPR-Cas9 and emerging gene-editing technologies already allow precise modification of human genetics. Advances in synthetic biology may soon permit wholesale redesign of biological systems. Neural interfaces could merge human and artificial intelligence in ways that fundamentally alter consciousness itself. Under these conditions, the "future humans" whose welfare we're optimizing for may be so radically different from present humans that our moral intuitions about their wellbeing become meaningless.
The essays acknowledge these possibilities but treat them as complications rather than fundamental challenges. Derek Parfit's non-identity problem, the fact that different actions lead to different people existing, already troubles longtermist calculations. Biotechnology amplifies this problem exponentially, ensuring that our choices will determine not just which people exist, but what kinds of beings they are.
The Innovation Imperative vs. Precautionary Principles
Perhaps most problematically, the volume's risk-focused approach to emerging technologies conflicts with compelling arguments for technological acceleration. Several contributors advocate for slowing certain forms of innovation—implementing AI safety measures, restricting dangerous biotechnology research, coordinating global technology governance. These precautionary impulses reflect reasonable concern about existential risks.
Yet this precautionary stance ignores technology's role as humanity's primary tool for addressing existential challenges. Climate change, asteroid impacts, volcanic eruptions, and other natural threats require technological solutions. The diseases, aging, and material scarcity that cause immense present suffering demand innovation, not caution. Slowing technological progress in the name of long-term safety may increase suffering in both the near and distant future.
Moreover, the global coordination mechanisms the essays propose for governing dangerous technologies appear politically naive. International institutions already struggle to address far simpler coordination problems. Expecting effective global governance of AI or biotechnology development seems like wishful thinking, particularly when the potential benefits of leading in these fields are enormous.
The more likely scenario involves competitive development of transformative technologies, with safety measures implemented reactively rather than proactively. Under these conditions, longtermist resources might be better spent accelerating beneficial innovations rather than attempting to slow potentially dangerous ones.
Reconceptualizing Longtermist Priorities
These critiques suggest the need for a fundamental reorientation of longtermist thinking around technology. Rather than treating innovation as an external force to be managed, longtermists should recognize it as the primary determinant of future human welfare.
This recognition yields several implications. First, longtermist resources should prioritize ensuring beneficial rather than merely safe technological development. Rather than focusing narrowly on AI alignment or biosecurity, efforts should address the institutional and economic incentives that shape innovation trajectories. This means supporting open-source development, promoting beneficial uses of emerging technologies, and ensuring broad access to transformative innovations.
Second, longtermists should abandon attempts to optimize across vast timescales in favor of building robust institutions that can navigate technological transitions successfully. Given uncertainty about future technological capabilities, the most valuable intervention may be developing adaptive governance structures rather than specific long-term strategies.
Third, the movement should embrace technological acceleration in domains likely to reduce existential risk or increase human flourishing. Aging research, space technology, clean energy, and medical innovations deserve support not because they serve specific longtermist calculations, but because they expand humanity's options in an uncertain future.
Beyond Anthropocentrism
The most radical implication concerns longtermism's anthropocentric assumptions. If AI and biotechnology promise to transcend current human limitations, then longtermist thinking should expand beyond concern for "future humans" to consider the welfare of whatever conscious beings emerge from technological transformation.
This expansion requires abandoning comfortable assumptions about value stability and moral progress. Future beings may have values, capabilities, and forms of consciousness we cannot currently comprehend. Rather than attempting to optimize for their welfare based on current moral frameworks, longtermists should focus on preserving optionality and enabling beneficial forms of consciousness to emerge.
Such an approach demands intellectual humility rarely evident in the Essays collection. Contributors write with confidence about moral truths that should guide actions across millennia, yet acknowledge little uncertainty about fundamental questions regarding consciousness, value, and identity that emerging technologies will soon force us to confront.
Conclusion: Embracing Uncertainty
Essays on Longtermism presents sophisticated arguments for taking the far future seriously. Its utilitarian framework and rigorous philosophical analysis provide valuable tools for thinking about intergenerational justice and existential risk. Yet the volume's treatment of technology reveals concerning blind spots that undermine its core project.
The exponential pace of innovation in AI and biotechnology makes traditional longtermist planning increasingly quixotic. Rather than attempting to optimize across vast timescales based on current moral intuitions, the movement should focus on building adaptive institutions and accelerating beneficial innovations. This requires embracing uncertainty about fundamental questions that technology will soon force us to confront.
The irony is that by taking technology more seriously, longtermism might achieve its stated goals more effectively. Rather than trying to control humanity's technological trajectory from above, the movement should work to ensure that trajectory serves the flourishing of whatever conscious beings emerge from our current transformations. Such an approach demands humility about our current moral frameworks while maintaining commitment to reducing suffering and expanding opportunities for beneficial forms of consciousness.
Technology is not merely another factor in longtermist calculations—it is the force that will determine whether longtermism as currently conceived remains relevant at all. The sooner the movement grapples with this reality, the more effectively it can serve its ultimate purpose: ensuring that the future, whatever form it takes, is better than the present.
I found this essay to be a refreshing departure from the usual longtermist analysis. I hope this writing inspires more discussion in the EA space on the fundamental nature of transformative technologies and the community's capacity to realistically anticipate value change over the coming millennia, if not this century.
To play devil's advocate, though, I would not discount current longtermist policies altogether. There is a non-negligible chance that transformative technologies will not see mass adoption for a long time. Current plans may be contributing to a body of knowledge that future institutions can adopt through an iterative process, and if those systems improve welfare prior to a paradigmatic shift, the work is not entirely meaningless.

Another counterpoint is that transformative technologies might in fact enhance planning capabilities. This would most likely occur if such technologies were consolidated by a small group, which I find unfavorable. Nonetheless, a small group could in theory apply those technologies to build systems robust enough to improve the long-run future.

I have not thought too seriously about these issues, so I suspect there are many reasonable counterpoints. My only request is not to treat contemporary longtermists too uncharitably, given the same uncertainty that underpins these theoretical arguments. I realize the essay more or less advocates for this where it writes on the adoption of these technologies.