
Adam Becker recently published More Everything Forever. From the initial blurbs and early reactions, I expected it to be broadly representative of the now familiar leftist critique of Effective Altruism, Rationalism, and adjacent communities: essentially a highly critical summary of what is often referred to as TESCREAL. This expectation has proven accurate.

I’ve been summarizing each chapter carefully while also adding my own commentary along the way. Since Chapter 4 is the main (though not exclusive) section where Becker directly focuses on Effective Altruism itself, I thought it might be particularly relevant to share my notes and reflections on this chapter here.

Chapter 4 - “The Ethicist at the End of the Universe”

Secrets of Trajan House exposed!

  • Becker visits Trajan House in Oxford, home to many EA organizations (Centre for Effective Altruism, Global Priorities Institute, Future of Humanity Institute…)
  • Key figures include William MacAskill, Toby Ord, Nick Bostrom, Anders Sandberg.
  • Described as a kind of hybrid between an academic center and a Silicon Valley startup: vegan cafeteria, open-plan offices, snack bars, gym, high-end furniture. Looks nice, but definitely no Wytham Abbey.
  • Author interviews Anders Sandberg (FHI), who wears a cryonics medallion. Much is made of that, with a detour about the technology and the view of mainstream scientists, who see it as completely unviable because current methods produce irreversible tissue damage.

Down the Precipice

  • Next we move to a summary/criticism of the main ideas of Ord (whom the author gets to interview) and MacAskill (whom he doesn’t).
  • We get a mini crash course in The Precipice, the concept of Existential Risk and Ord’s estimate of it: a 1-in-6 chance of existential catastrophe in the next century.
  • The author’s main bone of contention is that he feels the framework and arguments minimize the importance of climate change. He contrasts Ord’s estimates and MacAskill’s numbers from What We Owe the Future with the ‘global expert consensus’, epitomized in a couple of figures (Luke Kemp, Andrew Watson), especially on climate change.

World Optimization

  • Next we get some examples of EA’s ‘clout and muscle’, which is meant to convey (not too subtly, I might add) both hubris and the unholy marriage of good-doing with wealth and a (MacAskillian or not) Will to Power:
  • First, the flirting with finance: Open Philanthropy’s tens of millions of dollars poured into EA institutions like Wytham Abbey, and MacAskill’s vouching for Sam Bankman-Fried in his dealings with Elon Musk.
  • EA is linked to CSET (Center for Security and Emerging Technology) and the RAND Corporation, under Jason Matheny (a former EA affiliate).
  • On the political level, we get Carrick Flynn’s failed run for U.S. Congress in Oregon, and EA/Rat ideas and principles underpinning some of the theory and praxis of people like Dominic Cummings and Rishi Sunak.
  • The narrative culmination is the Center for AI Safety’s 2023 statement, signed by the usual suspects (Yudkowsky, Bostrom, MacAskill, Ord, Altman, Gates, Singer, Kurzweil, Chalmers…), funded by Open Philanthropy and calling for treating AI extinction risk as a global priority alongside pandemics and nuclear war.

‘In the long run we’re all dead’: The case against Long-termism

  • The problem with predicting the future and Pascalian Muggings: tiny shifts in probability estimates produce vastly different policy recommendations under longtermist math. This extreme sensitivity to assumptions makes longtermist arguments unstable and unreliable, according to the author.
  • Melanie Mitchell is also brought in to critique AI risk surveys among the community: an undefined concept of AGI, non-representative samples, speculative, ungrounded probability estimates, and a lack of the consensus and empirical basis found in climate science.
  • Next, David Thorstad is brought in to refute the ‘Time of Perils’ hypothesis and aligned AGI as a solver of mankind’s problems:
    • There’s no empirical basis for believing existential risk will drop to near-zero after our current, uniquely dangerous period before achieving long-term stability.
    • Maintaining risk levels near zero for billions of years is implausible.
    • The belief that aligned AGI could detect and prevent all future existential threats and stabilize risk at near-zero levels indefinitely is considered unrigorous and highly speculative.
  • The author also goes on a cosmological detour (you can see it’s his academic speciality) which mostly tries to highlight how silly and fantastical AGI, technologies for extracting energy from the universe, and mind-uploading are. More than that, they get categorized as immoral: Becker sees the tiling of space with humans as imperialist (conveniently ignoring aliens), and takes it as a springboard for presenting and rejecting Utilitarianism (described as ‘ethical Taylorism’), the ‘Total View’ and the Repugnant Conclusion. Omelas couldn’t be absent, of course, nor the usual critique of naive Utilitarianism as an ‘ends justify the means’ philosophy in line with Lenin and Mao.
  • Key ideological crux: Becker argues existential risk debates often mask political and social choices, that many current existential threats (e.g., nuclear war, climate change) are fundamentally political, not technological and that the hope that technology alone will solve these problems is misguided.
  • Chapter concludes with some more psychologizing: EAs, like Rats, are ultimately driven by their fear of death and their futile, fantasy dreams of tiling the universe with people and harvesting all its energy to this end before the heat death of the universe.
  • Instead of longtermist utopian visions of humanity spreading across the galaxy, we should be focusing on the present, real world, and current human beings rather than speculative far-future scenarios.

My Thoughts:

The first thing worth mentioning is that this chapter feels a little bit (only a little) less ad hominem than the one about the Rats. In some respects, I imagine this was to be expected: EAs are more normie and less controversial in their views. The usual litany of suspects does make its appearance, though. I suspect Thorstad, who seems to be the author’s main source, influences this as well: while I personally intensely dislike his ideological priors (basically, turning the EA movement into Woke-lite) and his flirting with the TESCREALISTs, I do not think he engages much in dishonest discourse. Still, while the chapter hardly leaves any negative EA stone unturned, it conveniently avoids putting much focus on, or explaining, how much economic good the movement has done (even now, but even more so, perhaps, in the ‘classic EA’ phase) towards fighting global poverty: malaria bednets, direct cash transfers, and big, voluntary personal donations to those causes. You might not agree with MacAskill’s and Ord’s beliefs, but failing to mention how personally (and unusually, for moral philosophers) consistent they are with them, and how frugal they are (donating up to 50% and 30% of their incomes, respectively), feels cheap.

Once you cut to the chase, the core of Becker’s argument is the usual leftist critique of Effective Altruism: the movement shouldn’t be in cahoots with billionaires, it should focus exclusively on current issues and avoid futuristic speculation, and it should focus on politics and ‘changing the system’, aligning itself with some unspecified better goal and better framework, presumably egalitarian and anti-capitalist. It should also reject Utilitarian philosophy and its soulless, instrumental number-crunching. As such, it has been answered time and again in different posts, and I don’t think much would be gained through a detailed discussion of each of those points. It is not an illegitimate framework, or criticism. It is also, in my humble opinion, deeply, incorrigibly wrong and sectarian. To its proponents, I’d just retort with what Richard Feynman once said about physicists wanting to have mathematicians at their disposal: “Now mathematicians can do what they want to do, one should not criticize them because they’re not slaves to physics. It is not necessary that just because this would be useful to you, they have to do it that way. They can do what they will, it’s their own job, and if you want something else, then you work it out yourself”.

Talking about oneself: I wouldn’t call myself a Long-termist or a Utilitarian (and probably not an EA either; EA-adjacent has become too trite a label, so I guess I could settle for Uneffective Egoist). Perhaps it might be useful to go through some of the arguments of the chapter through my own lens:

  • Like Becker, I personally disagree very strongly with Pascalian Muggings, but in a way, they are rather unavoidable if you accept a Utilitarian axiomatics plus negotiating uncertainties through Bayesian priors (or as I prefer to call them, bullshit probabilities). Once you accept the premise of a moral imperative to impartially maximize the total number of good, conscious experiences, you get into Population Ethics, and no answers inside this field are fully satisfactory to anyone. Utilitarians sometimes take other theories to task for avoiding the topic, but I tend to think it’s a bit of a dead end: pragmatically, societies do not have the power to effectively control the number of people that will be born (or not) anyway, and are unlikely to develop it. But as I said, this derives from Utilitarian axioms, and you can’t really refute them from the outside: if you want something else, then you work it out yourself. It’s at the heart of Utilitarian ethics that you can compare units of happiness (and people), and that you should do ‘the greatest good for the greatest number’. This means you can’t avoid big enough numbers swamping anything you consider good or valuable, and deontological arguments will also appear unintuitive and impractical at the limit: killing one person to save five is bad, but what about one person to save 50? 100? 100,000? We can always play the numbers game, and even if you aren’t a Utilitarian, there will be some number that breaks your deontological intuitions. (A toy calculation after this list illustrates the swamping dynamic.)
  • Still, I feel it shows ignorance or bad faith to fall back on naive Utilitarianism in order to reject its principles. The vast majority of Utilitarians (including, I guess, almost all EAs) accept that rules and social norms should be respected almost always, that virtues like honesty and trustworthiness should be cultivated, and that calculations of the best outcomes are best made over longer time horizons than any single action (all the more so given uncertainty). Presenting this otherwise is both specious and disingenuous.
  • The Global Warming debate and its importance is framed in what seems a dishonest way. If you accept the concept of existential risks and give them any credence, it logically follows that any such risk is much worse than any other horrible, terrible, undesirable one that does not lead to human extinction. Becker clearly rejects the plausibility of X-risks, thinks the significance of Global Warming is undervalued by EAs, and has sought out experts to back up his claims. Fair enough. I don’t think MacAskill or Ord would have any problem with discussing and reevaluating the risks from Global Warming, given evidence, models and consensus scientific views, but much to Becker’s chagrin, I don’t think the models and predictions on this issue are as settled and as much in agreement with his own views as he thinks. I’ll admit that I am an ignoramus here, but I have no reason to doubt Ord’s and MacAskill’s intellectual honesty. If Becker is skeptical (as perhaps he well should be) about predictions for the remote future, he should perhaps apply a lesser but still significant degree of skepticism to problems that are closer in time but extremely complex and uncertain.
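To make the ‘swamping’ point concrete, here is a minimal Python sketch; the probabilities and payoffs are invented purely for illustration and are not anyone’s actual estimates:

```python
# Toy illustration of how astronomically large payoffs swamp concrete goods
# under naive expected-value reasoning. All numbers are made up.

present_good = 1_000_000        # e.g. lives improved by a concrete, well-understood intervention
far_future_payoff = 1e30        # hypothetical future lives in a galaxy-spanning scenario

for p in (1e-12, 1e-10, 1e-8):  # tiny, essentially unknowable probabilities of success
    expected_value = p * far_future_payoff
    print(f"p = {p:.0e}: EV = {expected_value:.1e} "
          f"({expected_value / present_good:.1e}x the concrete intervention)")

# Whatever tiny p you pick, the speculative option dominates, and shifting p by
# two orders of magnitude shifts the conclusion by the same factor -- the
# sensitivity to 'bullshit probabilities' described above.
```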

In the end, Becker’s critique of Effective Altruism feels like a familiar iteration of a broader ideological clash: universalist, truth-oriented, speculative moral philosophy and practice versus politically grounded, present-centered, and system-critical frameworks. While I strongly disagree with many of Becker’s judgments, I don’t dismiss the validity of scrutinizing longtermism and utilitarian reasoning. However, I ultimately side with those who see intellectual exploration, including uncomfortable or unfashionable lines of inquiry, as not only legitimate but necessary, and who resist attempts to circumscribe inquiry and practice according to ideological comfort zones.


Comments

The global warming thing is interesting to me because my sense is that Ord and MacAskill think of themselves as relying on expert consensus and published literature, rather than as having somehow outsmarted it. So why the difference between them and the author in what it shows?

I wish I could be of help in this, but I just lack the expertise. I think part of the issue is that 'the consensus' (as per IPCC reports) doesn't model worst-case scenarios, and I think most climate scientists do not predict human extinction from warming, even at extreme levels. It also doesn't make rational sense why Ord or MacAskill would try to 'outsmart' the literature: if anything, I'd guess they would prefer to be able to include Global Warming among existential risks, as it's an easy and popular win of a cause, so my prior is that they do indeed gauge the expert consensus well. Becker's sources are mostly those two mentioned scientists, who are likely (from a quick glance) to come from collapse-focused research that emphasizes high uncertainty and worst-case feedback loops.

Thanks for the discussion, David and Manuel.

I think most climate scientists do not predict human extinction from warming

I very much agree, and guess Toby's and Will's estimates for the existential risk from climate change are much higher than the median expert's guess for the risk of human extinction from climate change. Toby guessed an existential risk from climate change from 2021 to 2120 of 0.1 %. Richards et al. (2023) estimates "∼6 billion deaths [they say "∼5 billion people" elsewhere] due to starvation by 2100" for a super unlikely "∼8–12 °C+" of global warming by then, and I think they hugely overestimated the risk. Most importantly, they assumed land use and cropland area to be constant.

Yeah, I think I recall David Thorstad complaining that Ord's estimate was far too high also.

Be careful not to conflate "existential risk" in the special Bostrom-derived definition that I think Ord, and probably Will as well, are using with "extinction risk" though. X-risk from climate *can* be far higher than extinction risk, because regressing to a pre-industrial state and then not succeeding in reindustrialising (perhaps because easily accessible coal has been used up) counts as an existential risk, even though it doesn't involve literal extinction. (Though from memory, I think Ord is quite dismissive of the possibility that there won't be enough accessible coal to reindustrialise, but I think Will is a bit more concerned about this?)

Thanks for the clarification, David. There are so many concepts of existential risk, and they are often so vague that I think estimates of existential risk can vary by many orders of magnitude even holding constant the definition in words of a given author. So I would prefer discussions to focus on outcomes like human extinction which are well defined, even if their chance remains very hard to estimate.

I also think human extinction without recovery to a similarly promising state is much less likely than human extinction. For a time from human extinction to that kind of recovery described by an exponential distribution with a mean of 66 M years, which was the time from the last mass extinction until humans evolving, and 1 billion years during which the Earth will remain habitable, and therefore recovery is possible, the probability of failing to recover conditional on human extinction would be 2.63*10^-7 (= e^(-10^9/(66*10^6))).
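For concreteness, a quick sketch of that arithmetic under the stated assumptions (exponentially distributed recovery time with a 66-million-year mean and a 1-billion-year habitability window):

```python
import math

mean_recovery_time = 66e6   # years: time from the last mass extinction to humans evolving
habitable_window = 1e9      # years the Earth is assumed to remain habitable

# With recovery time T ~ Exponential(mean 66 Myr), the chance that no recovery
# happens within the habitable window is P(T > 1 Gyr):
p_no_recovery = math.exp(-habitable_window / mean_recovery_time)
print(f"P(no recovery within window) = {p_no_recovery:.2e}")  # ~2.63e-07
print(f"P(recovery within window)    = {1 - p_no_recovery:.7f}")
```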

I have talked to IPCC people. I think it takes some double standards to believe that existential risk from climate change (as typically defined in the longtermist literature: permanently preventing humanity from reaching future technological maturity) is considered considerably less likely by climate experts than existential risk from unfriendly artificial general intelligence is by AI experts.

What did the IPCC people say exactly?

What @Manuel Del Río Rodríguez 🔹 calls "collapse-focused" views. Most minimally stated: that medium-term involuntary global degrowth is likely if CO2 emissions aren't strongly curbed in the short term.

Is there actually an official IPCC position on how likely degrowth from climate impacts is? I had a vague sense that they were projecting a higher world gdp in 2100 than now, but when I tried to find evidence of this for 15 minutes or so, I couldn't actually find any. (I'm aware that even if that is the official IPCC best-guess position that does not necessarily mean that climate experts are less worried about X-risk from climate than AI experts are about X-risk from AI.)

Yeah, I think the problem is that surveying experts for their p(doom) isn't something that has been done with climate experts AFAICT. (I'll let you decide whether this should be done or whether Mitchell is right and this methodology is bad to begin with.) But he stated the IPCC is planning to more extensively discuss degrowth in future reports.

That may be true, but it isn't the argument Becker is making; it would still mean that the book author is at best dissembling when he says that expert consensus on x-risks from global warming is very different from what Ord and MacAskill state.

Two direct quotes: "There are two issues here. The first is that Ord and MacAskill are out of step with the scientific mainstream opinion on the civilizational impacts of extreme climate change. In part, this seems to stem from a failure to imagine how global warming can interact with other risks (itself a wider issue with their program), but it’s also a failure to listen to experts on the subject, even ones they contact themselves".

"Ord and MacAskill’s confidence that climate change probably doesn’t pose the kind of existential threat they’re worried about is unwarranted. And the fact that they’re primarily worried about existential threats in the first place is the other problem: once a threat has been deemed existential, it’s impossible to outweigh it with any less- than-existential threat in the present day".

The first one points most clearly in the direction that Ord's and MacAskill's estimations aren't within the pale of scientific mainstream opinion. It connects to a footnote (16) that links to https://digressionsnimpressions.typepad.com/digressionsimpressions/2022/11/on-what-we-owe-the-future-no-not-on-sbfftx.html which is definitely not some summary or compilation of mainstream views on global warming effects, but a philosopher's review of What We Owe the Future. Perhaps this is a mistake. Note 14 does link to an article by none other than E. Torres on 'What "longtermism" gets wrong about climate change', which seems to be the authority produced for the thesis that Ord and MacAskill's views are far from the scientific mainstream on this. Torres states having contacted 'a number of leading researchers', whom he cherry-picks: selective expert sourcing via Torres, not a systematic IPCC consensus.

Thanks for the write-up. I'm broadly sympathetic to a lot of these criticisms tbh, despite not being very left-leaning. A couple of points you relate I think are importantly false:
 

(Thorstad's claim that) there’s no empirical basis for believing existential risk will drop to near-zero after our current, uniquely dangerous period before achieving long-term stability.

I don't know about 'empirical', but there's a simple mathematical basis for imagining it dropping to near zero in a sufficiently advanced future where we have multiple self-sustaining and hermetically independent settlements e.g. (though not necessarily) on different planets. Then even if you assume disasters befalling one aren't independent, you have to believe they're extremely correlated for this not to net out to extremely high civilisational resilience as you get to double digit settlements. That level of correlation is possible if it turns out to be possible e.g. to trigger a false vacuum decay - in which case Thorstad is right - or if a hostile AGI could wipe out everything before it - though that probability will surely either be realised or drop close to 0 within a few centuries. 

If you accept the concept of Existential Risk and give them any credence, it logically follows that any such risk is much worse than any other horrible, terrible, undesirable one that does not lead to human extinction.

It doesn't, and I wish the EA movement would move away from this unestablished claim. Specifically, one must have some difference in credence between achieving whatever longterm future one desires given no 'minor' catastrophe and achieving it given at least one. That credence differential is, to a first approximation, the fraction representing how much of '1 extinction' your minor catastrophe is. Assuming we're reasonably ambitious in our long term goals (e.g., per above, developing a multiplanetary or interstellar civilisation), it seems crazy to me to suppose that fraction should be less than 1/10. I suspect it should be substantially higher, since on restart we would have to survive a high risk in time-of-perils-2 while proceeding to the safe end state much slower, given the depletion of fossil fuels and other key resources.

If we think a restart is >= 1/10x as bad as extinction then we have to ask serious questions about whether it's >= 10x as likely. I think it's at least defensible to claim that e.g. extreme climate change is 10x as likely as an AI destroying literally all humanity.  
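A minimal sketch of that bookkeeping, with purely illustrative credences of my own (not the commenter's numbers), normalising the credence differential by the no-catastrophe prospects as the later 'units of extinction' comment does:

```python
# Illustrative only: how a 'minor' catastrophe gets valued as a fraction of an extinction.
p_good_given_no_catastrophe = 0.9   # credence we reach the desired long-term future with no catastrophe
p_good_given_catastrophe    = 0.6   # same credence conditional on at least one civilisational catastrophe

# To a first approximation, how much of '1 extinction' the catastrophe is worth:
badness_fraction = (p_good_given_no_catastrophe - p_good_given_catastrophe) / p_good_given_no_catastrophe
print(f"badness fraction ≈ {badness_fraction:.2f}")  # ~0.33 with these made-up numbers

# If the catastrophe is also `relative_likelihood` times as likely as extinction,
# the expected long-run losses compare roughly as:
relative_likelihood = 10
print(f"expected loss (catastrophe : extinction) ≈ {badness_fraction * relative_likelihood:.1f} : 1")
```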

Hello, and thanks for engaging with it. A couple of notes about the points you mention:

I have only read Thorstad's arguments as they appear summarized in the book (he does have a blog in which one of his series, which I haven't read yet, goes into detail on this: https://reflectivealtruism.com/category/my-papers/existential-risk-pessimism ). I have gone back to the chapter, and his thesis, in a bit more detail, would be that Ord's argument is predicated on a lot of questionable assumptions, i.e., that the time of perils will be short, that the current moment is very dangerous, and that the future will be much less dangerous and will stay so for a long time. He questions the evidence for all those assumptions, but particularly the last: "For humans to survive for a billion years, the annual average risk of our extinction needs to be no higher than one in a billion. That just doesn’t seem plausible—and it seems even less plausible that we could know something like that this far in advance." He also goes on to expand it, citing the extreme uncertainty of events far in time, that it is unlikely that treaties or world government could keep risk low, that 'becoming more intelligent' is too vague, and that AGI is absurdly implausible ("The claim that humanity will soon develop superhuman artificial agents is controversial enough," he writes. "The follow-up claim that superintelligent artificial systems will be so insightful that they can foresee and prevent nearly every future risk is, to most outside observers, gag-inducingly counterintuitive.").

As for the second statement, my point wasn’t that extinction always trumps everything else in expected value calculations, but that if you grant the concept of existential risk any credence, then, ceteris paribus, the sheer scale of what’s at stake (e.g., billions of future lives across time and space) makes extinction risks of overriding importance in principle. That doesn’t mean that catastrophic-but-non-extinction events are negligible, just that their moral gravity derives from how they affect longterm survival and flourishing. I think you make a very good argument that massive, non-extinction catastrophes might be nearly as bad as extinction if they severely damage humanity’s trajectory, but I feel it is highly speculative on the difficulties of making comebacks and on the likelihood of extreme climate change, and I still find the difference between existential risk and catastrophe(s) significant.

Fwiw I commented on Thorstad's linkpost for the paper when he first posted about it here. My impression is that he's broadly sympathetic to my claim about multiplanetary resilience, but either doesn't believe we'll get that far or thinks that the AI counterconsideration dominates it.

In this light, I think that the claim that an annual x-risk lower than 1 in 10^9 is 'implausible' is much too strong if it's being used to undermine EV reasoning. Like I said - if we become interstellar and no universe-ending doomsday technologies exist, then multiplicativity of risk gets you there pretty fast. If each planet has, say, a 1/(10^5) annual chance of extinction, then n planets have a 1/(10^(5n)) chance of all independently going extinct in a given year. For n=2 that's already one in ten billion.

Obviously there's a) a much higher chance that they could go extinct in different years and b) a chance that they could all go extinct in any given period from non-independent events such as war. But even so, it's hard to believe that increasing n, say to double digits, doesn't rapidly outweigh such considerations, especially given that an advanced civilisation could probably create new self-sustaining settlements in a matter of years.
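A quick sketch of that multiplicativity, first with fully independent settlements and then with a crude common-cause term standing in for correlated risks like vacuum decay or a hostile AGI (all parameters invented for illustration):

```python
p_single = 1e-5   # illustrative per-settlement annual extinction probability, as in the comment

for n in (1, 2, 3, 10):
    p_all_independent = p_single ** n   # all n settlements fail in the same year, independently
    print(f"n = {n:2d}: independent joint annual risk = {p_all_independent:.1e}")

# A crude way to represent correlated risk: a single shared annual probability of
# a disaster that destroys every settlement at once (value invented for illustration).
p_common_cause = 1e-7
for n in (2, 10):
    p_total = p_common_cause + p_single ** n
    print(f"n = {n:2d}: with common cause, total annual risk ≈ {p_total:.1e}")

# Past n = 2 the independent term is negligible; whatever residual risk remains is
# set almost entirely by the common-cause scenarios, which is the crux of the
# disagreement with Thorstad described above.
```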

I feel it is highly speculative on the difficulties of making comebacks and on the likelihood of extreme climate change

I don't understand how you think climate change is more speculative than AI risk. I think it's reasonable to have higher credence in human extinction from the latter, but those scenarios are entirely speculative. Extreme climate change is possible if a couple of parameters turn out to have been mismeasured.

As for the probability of making comebacks, I'd like to write a post about this, but the narrative goes something like this:

  • to 'flourish' (in an Ordian sense), we need to reach a state of sufficiently low x-risk
  • per above, by far the most mathematically plausible way of doing this is just increasing our number of self-sustaining settlements
    • you could theoretically do it with an exceptionally stable political/social system, but I'm with Thorstad that the level of political stability this requires seems implausible
  • to reach that state, we have to develop advanced technologies - well beyond what we have now. So the question about 'comebacks' is misplaced - the question is about our prospect of getting from the beginning to (a good) end of at least one time of perils without a catastrophe
  • Dating our current time of perils to 1945, it looks like we're on course, barring global catastrophes, to develop a self-sustaining civilisation in maybe 120-200 years
  • Suppose there's a constant annual probability k of a catastrophe that regresses us to pre-time-of-perils technology. Then our outlook in 1945 was, approximately, a (1-k)^160 chance of getting to a multiplanetary state. Since we've made it 80 years in, we have a substantially better ~(1-k)^80.
  • If we restart from pre-1945 levels of technology, we will do so with max 10% of the fossil fuel energy we had, as well as many other depleted resources (fertiliser, uranium, etc). This will slow any kind of reboot substantially. See e.g. comparisons of coal to photovoltaics here.
  • There's huge uncertainty here, but when you multiply out the friction from all the different depleted resources, I think progress the second time around will be optimistically 1/2 the speed, and pessimistically 1/10x or worse. (Based on above link, if photovoltaics were to entirely substitute fossil fuels, that drag alone would be around a ~30/5.5 multiplier on the cost of generating energy, which seems like it could easily slow economic development by a comparable amount)
  • That means in a reboot, we have optimistically (1-k)^320 chance of getting to a good outcome, pessimistically (1-k)^1600
  • During that reboot, we can expect the new civilisation to preferentially use up the most efficient resources just as we do (it doesn't have to destroy them, just move them to much higher entropy states, such as our current practice of flushing fertiliser into the ocean) - but it will take 2x, 10x or however much longer doing so.
  • That means civilisation 3 would have as much a disadvantage over civilisation 2 as civilisation 2 would over us, giving it optimistically a (1-k)^640 chance of a good outcome, pessimistically a (1-k)^16000 chance.

If we plug in k=0.001, which seems to be a vaguely representative estimate among x-risk experts, then in 1945 we would have had an 85% chance, today we would have a 92% chance, after one backslide we would have optimistically 73%, pessimistically 20%, and after Backslide Two we would have optimistically 53%, pessimistically basically 0.

We can roughly convert these to units of 'extinction' by dividing the loss of probability by our current prospects. So going to a probability of 53% would be losing 32 percentage points, which is 32%/85% (roughly 38%) as bad in the long term as extinction.
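For the record, here is a short Python reconstruction of that arithmetic using the exponents above and k = 0.001; it is my own sketch of the commenter's numbers, not their code:

```python
k = 0.001  # assumed constant annual probability of a civilisation-resetting catastrophe

def p_no_catastrophe(years: float) -> float:
    """Chance of getting through `years` of a time of perils with no catastrophe."""
    return (1 - k) ** years

outlook_1945   = p_no_catastrophe(160)    # ~85%
outlook_today  = p_no_catastrophe(80)     # ~92%
backslide1_opt = p_no_catastrophe(320)    # ~73% (reboot at half speed)
backslide1_pes = p_no_catastrophe(1600)   # ~20% (reboot at a tenth of the speed)
backslide2_opt = p_no_catastrophe(640)    # ~53%
backslide2_pes = p_no_catastrophe(16000)  # ~0

for label, p in [("1945 outlook", outlook_1945), ("today", outlook_today),
                 ("one backslide, optimistic", backslide1_opt),
                 ("one backslide, pessimistic", backslide1_pes),
                 ("two backslides, optimistic", backslide2_opt),
                 ("two backslides, pessimistic", backslide2_pes)]:
    print(f"{label:28s} {p:6.1%}")

# Rough conversion to 'units of extinction': divide the probability loss by the
# 1945 baseline, as in the 32%/85% division above.
loss_fraction = (outlook_1945 - backslide2_opt) / outlook_1945
print(f"two optimistic backslides ≈ {loss_fraction:.0%} as bad as extinction")  # ~38%
```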

This is missing a lot of nuance, obviously, which I've written about in this sequence, so we certainly shouldn't take these numbers very seriously. But I think they overall paint a pretty reasonable picture of a 'minor' catastrophe being, in long-run expectation and aside from any short-term suffering or change in human morality, perhaps in the range of 15-75% as bad as extinction. Lots of room for discussing particulars, but not something we should dismiss as extinction being 'much worse' than - and in particular, not sufficiently lower that we can in practice afford to ignore the relative probabilities of extinction vs lesser global catastrophe.

Executive summary: This post offers a detailed summary and critical commentary on Chapter 4 of More Everything Forever, a new book by Adam Becker that presents a strongly critical, leftist-aligned analysis of Effective Altruism (EA), longtermism, and Rationalist ideas, arguing that EA’s speculative and utilitarian framework is politically naive and ethically misguided—though the author of the post ultimately finds Becker’s critique familiar, ideologically constrained, and unpersuasive.

Key points:

  1. Becker critiques the culture and infrastructure of EA through descriptions of Trajan House and interviews with figures like Anders Sandberg, portraying the community as a mix of academia, tech startup culture, and speculative futurism (e.g., cryonics).
  2. Main intellectual targets are longtermism and existential risk priorities—Becker challenges Ord’s 1-in-6 x-risk estimate and criticizes the deprioritization of climate change relative to other speculative risks like AGI.
  3. Political critique of EA’s influence and funding highlights ties to powerful institutions (e.g., Open Philanthropy, RAND, UK political actors), arguing this represents elite overreach and ideological overconfidence.
  4. Philosophical and methodological objections focus on Utilitarianism and Pascalian Muggings, arguing that longtermist reasoning is hypersensitive to speculative assumptions and lacks empirical robustness, especially compared to climate science.
  5. Post author pushes back on the critique, arguing that Becker omits EA’s contributions to global health and poverty, misrepresents common EA positions, and presents a reductive leftist framework that fails to seriously engage with utilitarian ethics or pluralistic intellectual inquiry.
  6. The author concludes that while critique is valid and should be welcome, Becker's framing feels ideologically rigid, dismissive of good-faith philosophical exploration, and more focused on scoring points than engaging EA's best ideas.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
