Adam Becker recently published More Everything Forever. From the initial blurbs and early reactions, I expected it to be broadly representative of the now familiar leftist critique of Effective Altruism, Rationalism, and adjacent communities: essentially a highly critical summary of what is often referred to as TESCREAL. This expectation has proven accurate.
I’ve been summarizing each chapter carefully while also adding my own commentary along the way. Since Chapter 4 is the main (though not exclusive) section where Becker directly focuses on Effective Altruism itself, I thought it might be particularly relevant to share my notes and reflections on this chapter here.
Chapter 4 - “The Ethicist at the End of the Universe”
Secrets of Trajan House exposed!
- Becker visits Trajan House in Oxford, home to many EA organizations (Centre for Effective Altruism, Global Priorities Institute, Future of Humanity Institute…)
- Key figures include William MacAskill, Toby Ord, Nick Bostrom, Anders Sandberg.
- Described as a kind of hybrid between an academic center and a Silicon Valley startup: vegan cafeteria, open-plan offices, snack bars, gym, high-end furniture. Looks nice, but definitely no Wytham Abbey.
- The author interviews Anders Sandberg (FHI), who wears a cryonics medallion. Much is made of this, with a detour into the technology and the view of mainstream scientists, who see it as completely unviable because it produces irreversible tissue damage.
Down the Precipice
- Next we move to a summary/criticism of the main ideas of Ord (whom the author gets to interview) and MacAskill (whom he doesn’t).
- We get a mini crash course in The Precipice, the concept of Existential Risk, and Ord’s estimate of it: a 1-in-6 chance of existential catastrophe in the next century.
- The author’s main bone of contention is that the framework and its arguments minimize the importance of climate change. He contrasts Ord’s estimates and MacAskill’s numbers from What We Owe the Future with the ‘global expert consensus’, epitomized by a couple of figures (Luke Kemp, Andrew Watson), especially on climate change.
World Optimization
- Next we get some examples of EA’s ‘clout and muscle’, which is meant to convey (not too subtly, I might add) both hubris and the unholy marriage of good-doing with wealth and a (MacAskillian or not) Will to Power:
- First, the flirting with finance: Open Philanthropy’s tens of millions of dollars poured into EA institutions like Wytham Abbey, and MacAskill’s vouching for Sam Bankman-Fried in his dealings with Elon Musk.
- EA’s links to CSET (Center for Security and Emerging Technology) and the RAND Corporation under Jason Matheny (a former EA affiliate).
- On the political level, we get Carrick Flynn’s failed run for U.S. Congress in Oregon, and EA/Rat ideas and principles underpinning some of the theory and praxis of people like Dominic Cummings and Rishi Sunak.
- The narrative culmination is the Center for AI Safety’s 2023 statement, signed by the usual suspects (Yudkowsky, Bostrom, MacAskill, Ord, Altman, Gates, Singer, Kurzweil, Chalmers…), funded by Open Philanthropy and calling for treating AI extinction risk as a global priority alongside pandemics and nuclear war.
‘In the long run we’re all dead’: The case against Long-termism
- The problem with predicting the future and Pascalian muggings: tiny shifts in probability estimates produce vastly different policy recommendations under longtermist math. This extreme sensitivity to assumptions, the author argues, makes longtermist arguments unstable and unreliable.
- Melanie Mitchell is also brought in to critique AI-risk surveys within the community: an undefined concept of AGI, non-representative samples, speculative and ungrounded probability estimates, and none of the consensus or empirical basis of climate science.
- Next, David Thorstad is brought in to refute the ‘Time of Perils’ hypothesis and the idea of aligned AGI as a solver of mankind’s problems:
- There’s no empirical basis for believing that existential risk will drop to near zero once our current, uniquely dangerous period gives way to long-term stability.
- Maintaining risk levels near zero for billions of years is implausible.
- The belief that aligned AGI could detect and prevent all future existential threats and stabilize risk at near-zero levels indefinitely is considered unrigorous and highly speculative.
- The author also goes on a cosmological detour (you can see it’s his academic speciality) which mostly tries to highlight how silly and fantastical AGI, mind-uploading, and technologies for extracting energy from the universe are. More than that, they get categorized as immoral: Becker sees the tiling of space with humans as imperialist (conveniently ignoring aliens) and takes it as a springboard for presenting and rejecting Utilitarianism (described as ‘ethical Taylorism’), the ‘Total View’, and the Repugnant Conclusion. Omelas couldn’t be absent, of course, nor the usual critique of naive Utilitarianism as an ‘ends justify the means’ doctrine in line with Lenin and Mao.
- Key ideological crux: Becker argues that existential risk debates often mask political and social choices, that many current existential threats (e.g., nuclear war, climate change) are fundamentally political rather than technological, and that the hope that technology alone will solve these problems is misguided.
- The chapter concludes with some more psychologizing: EAs, like Rats, are ultimately driven by their fear of death and by futile fantasies of tiling the universe with people and harvesting all its energy to that end before the heat death of the universe.
- The takeaway: rather than longtermist utopian visions of humanity spreading across the galaxy, we should focus on the present, real world and on current human beings, not on speculative far-future scenarios.
My Thoughts:
The first thing worth mentioning is that this chapter feels a little bit (only a little) less ad hominem than the one about the Rats. In some respects, I imagine this was to be expected: EAs are more normie and less controversial in their views. The usual litany of suspects does make its appearance, though. I suspect Thorstad, who seems to be the author’s main source, influences this as well: while I personally intensely dislike his ideological priors (basically, turning the EA movement into Woke-lite) and his flirting with the TESCREALists, I do not think he engages much in dishonest discourse. Still, while the chapter hardly leaves any negative EA stone unturned, it conveniently avoids putting much focus on, or explaining, how much concrete good the movement has directed (even now, but even more so, perhaps, in its ‘classic EA’ phase) towards fighting global poverty: malaria bednets, direct cash transfers, and big, voluntary personal donations to those causes. You might not agree with MacAskill’s and Ord’s beliefs, but failing to mention how personally consistent they are with them (unusually so, for moral philosophers) and how frugally they live (donating up to 50% and 30% of their incomes, respectively) feels cheap.
Once you cut to the chase, the core of Becker’s argument is the usual leftist critique of Effective Altruism: the movement shouldn’t be in cahoots with billionaires; it should focus exclusively on current issues and avoid futuristic speculation; it should focus on politics and ‘changing the system’, aligning itself with some unspecified better goal and better framework, presumably egalitarian and anti-capitalist; and it should reject Utilitarian philosophy and its soulless, instrumental number-crunching. This critique has been answered time and again in other posts, and I don’t think much would be gained from a detailed discussion of each of those points. It is not an illegitimate framework, or criticism. It is also, in my humble opinion, deeply, incorrigibly wrong and sectarian. To its proponents, I’d just retort with what Richard Feynman once said about physicists wanting to have mathematicians at their disposal: “Now mathematicians can do what they want to do, one should not criticize them because they’re not slaves to physics. It is not necessary that just because this would be useful to you, they have to do it that way. They can do what they will, it’s their own job, and if you want something else, then you work it out yourself”.
Speaking of oneself: I wouldn’t call myself a Long-termist or a Utilitarian (and probably not an EA either; EA-adjacent has become too trite a label, so I guess I could settle for Uneffective Egoist). Perhaps it might be useful to run some of the chapter’s arguments through my own lens:
- Like Becker, I personally disagree very strongly with Pascalian muggings, but in a way they are rather unavoidable if you accept Utilitarian axiomatics plus the negotiation of uncertainty through Bayesian priors (or, as I prefer to call them, bullshit probabilities). Once you accept the premise of a moral imperative to impartially maximize the total number of good, conscious experiences, you get into Population Ethics, and no answers inside this field are fully satisfactory to anyone. Utilitarians sometimes take other theories to task for avoiding the topic, but I tend to think it’s a bit of a dead end: pragmatically, societies do not have the power to effectively control the number of people that will be born (or not) anyway, and are unlikely to develop it. But as I said, this derives from Utilitarian axioms, and you can’t really refute them from the outside: if you want something else, then you work it out yourself. It’s at the heart of Utilitarian ethics that you can compare units of happiness (and people), and that you should do ‘the greatest good for the greatest number’. This means you can’t avoid big enough numbers swamping anything you consider good or valuable, and deontological arguments will also appear unintuitive and impractical at the limit: killing one person to save five is bad, but what about one person to save 50? 100? 100,000? We can always play the numbers game, and even if you aren’t a Utilitarian, there will be some number that breaks your deontological intuitions (the toy sketch after this list illustrates how quickly big numbers swamp everything else).
- Still, I feel it shows ignorance or bad faith to fall back on naive Utilitarianism to reject Utilitarian principles. The vast majority of Utilitarians (including, I guess, almost all EAs) accept that rules and social norms should almost always be respected, that virtues like honesty and trustworthiness should be cultivated, and that the calculation of best outcomes should consider a longer time horizon than any specific action (all the more so given uncertainty). Presenting this otherwise is both specious and disingenuous.
- The Global Warming debate and its importance are framed in what seems a dishonest way. If you accept the concept of Existential Risks and give them any credence, it logically follows that any such risk is much worse than any other horrible, terrible, undesirable outcome that does not lead to human extinction. Becker clearly rejects the plausibility of X-risks, thinks EAs undervalue the significance of Global Warming, and has sought out experts to back up his claims. Fair enough. I don’t think MacAskill or Ord would have any problem discussing and reevaluating the risks from Global Warming, given evidence, models, and the consensus scientific views, but much to Becker’s chagrin, I don’t think the models and predictions on this issue are as settled or as aligned with his own views as he believes. I’ll admit that I am an ignoramus here, but I have no reason to doubt Ord’s and MacAskill’s intellectual honesty. If Becker is skeptical (as perhaps he well should be) about predictions for the remote future, he should perhaps apply a lesser but still significant degree of skepticism to nearer-term but extremely complex and uncertain problems.
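To make that ‘numbers game’ concrete, here is a toy expected-value sketch. All the figures are illustrative assumptions of my own (not Becker’s, Ord’s, or MacAskill’s numbers): once you multiply even a minuscule, essentially unverifiable probability shift by an astronomical count of future lives, the product dwarfs any present-day intervention, and it barely matters which tiny probability you picked.

```python
# Toy sketch of how astronomical stakes swamp present-day comparisons under
# expected-value reasoning. All figures are illustrative assumptions of mine,
# not numbers taken from Becker, Ord, or MacAskill.

FUTURE_LIVES = 1e35      # a Bostrom-style upper bound on future lives (assumed)
LIVES_SAVED_NOW = 1e6    # a hypothetical, very successful present-day intervention

for p in (1e-12, 1e-10, 1e-8):   # tiny, hard-to-ground reductions in extinction probability
    ev = p * FUTURE_LIVES        # expected future lives saved by the longtermist bet
    print(f"p = {p:.0e}: EV = {ev:.0e} future lives "
          f"(~{ev / LIVES_SAVED_NOW:.0e} times the present-day option)")
```

Whichever tiny p you plug in, the longtermist column wins by many orders of magnitude, and nudging p by a couple of orders of magnitude swings the apparent strength of the recommendation just as wildly; this is precisely the sensitivity to unverifiable inputs that Becker calls unstable, and that I call bullshit probabilities.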
In the end, Becker’s critique of Effective Altruism feels like a familiar iteration of a broader ideological clash: universalist, truth-oriented, speculative moral philosophy and practice versus politically grounded, present-centered, and system-critical frameworks. While I strongly disagree with many of Becker’s judgments, I don’t dismiss the validity of scrutinizing longtermism and utilitarian reasoning. However, I ultimately side with those who see intellectual exploration, including uncomfortable or unfashionable lines of inquiry, as not only legitimate but necessary, and who resist attempts to circumscribe inquiry and practice according to ideological comfort zones.
Is there actually an official IPCC position on how likely degrowth from climate impacts is? I had a vague sense that they were projecting a higher world GDP in 2100 than now, but when I tried to find evidence of this for 15 minutes or so, I couldn't actually find any. (I'm aware that even if that is the official IPCC best-guess position, it does not necessarily mean that climate experts are less worried about X-risk from climate than AI experts are about X-risk from AI.)