Edit: If you are landing here from the EA Forum Digest, note that this piece is not about Manifest, and I don't want it to be framed as being about Manifest.
Recently, I've noticed a growing tendency within EA to dissociate from Rationality. Good Ventures has stopped funding efforts connected with rationality and the rationality community, and there are increasing calls for EAs to distance themselves.
This trend concerns me, and I believe an important distinction needs to be made when considering this split.
We need to differentiate between 'capital R' Rationality and 'small r' rationality. By 'capital R' Rationality, I mean the actual Rationalist community, centered around Berkeley: a package deal that includes ideas about self-correcting lenses and systematized winning, but also extensive jargon, cultural norms like polyamory, a high-decoupling culture, and familiarity with specific memes (ranging from 'Death with Dignity' to 'came in fluffer').
On the other hand, 'small r' rationality is a more general concept. It encompasses the idea of using reason and evidence to form conclusions, scout mindset, and empiricism. It also includes a quest to avoid getting stuck with beliefs resistant to evidence, techniques for reflecting on and improving mental processes, and, yes, many of the core ideas of Rationality, like understanding Bayesian reasoning.
If people want to distance themselves, it's crucial to be clear about what they're distancing themselves from. I understand why some might want to separate from aspects of the Rationalist community: perhaps they dislike the discourse norms, worry about negative media coverage, or disagree with prevalent community views.
However, distancing yourself from 'small r' rationality is far more radical and likely less considered. It's similar to rejecting core EA ideas like scope sensitivity or cause prioritization just because one dislikes certain manifestations of the EA community (e.g., SBF, jargon, hero worship).
Effective altruism is fundamentally based on pursuing good deeds through evidence, reason, and clear thinking; in fact, when early effective altruists were looking for a name, one of the top contenders was 'rational altruism'. Abandoning the aspiration to think clearly would, in my view, remove something crucial.
Historically, the EA community inherited a lot of its epistemic culture from Rationality[1], including discourse norms, an emphasis on updating on evidence, and a spectrum of thinkers who don't hold either identity closely but can be associated with both EA and rationality.[2]
Here is the crux: if the zeitgeist pulls effective altruists away from Rationality, they should invest more in rationality, not less. Because cultivating reason is critical for effective altruism, someone will need to work on it. If EAs are no longer mostly talking to people connected in some way with Rationality, someone else will need to pick up the baton.
[1] In 2022, Claire Zabel expressed a similar worry:
Right now, I think the EA community is growing much faster than the rationalist community, even though a lot of the people I think are most impactful report being really helped by some rationalist-sphere materials and projects. Also, it seems like there are a lot of projects aimed at sharing EA-related content with newer EAs, but much less in the way of support and encouragement for practicing the thinking tools I believe are useful for maximizing one’s impact (e.g. making good expected-value and back-of-the-envelope calculations, gaining facility for probabilistic reasoning and fast Bayesian updating, identifying and mitigating one’s personal tendencies towards motivated or biased reasoning). I’m worried about a glut of newer EAs adopting EA beliefs but not being able to effectively evaluate and critique them, nor to push the boundaries of EA thinking in truth-tracking directions.
[2] The EA community actually inherited more than just ideas about epistemics: compare, for example, Eliezer Yudkowsky's 2007 essay on Scope Insensitivity with current introductions to effective altruism in 2024.
This confused me at first, until I looked at the comments. The EA Forum post you linked to doesn't specifically say this, and neither does the Good Ventures blog post that it links to. I think you must be referring to the comments on that forum post, particularly the exchange between Dustin Moskovitz (who is now shown as "[anonymous]") and Oliver Habryka.
There are three relevant comments from Dustin Moskovitz: here, here, and here. These comments are oblique and confusing (and he seems to say elsewhere that he's being vague on purpose), but I think Dustin is saying that he's now wary of funding things related to "the rationalist community" (defined below).
Edit (2025-05-03 at 12:59 UTC): To make it easier to see which comments on that post are Dustin Moskovitz's, you can use the Wayback Machine.
Dustin seems to indicate there are multiple reasons he doesn't want to fund things related to "the rationalist community" anymore, but he doesn't fully get into these reasons. From his comments, these reasons seem to include both a long history of problems (again, kept vague) and the then-recent Manifest 2024 conference that was hosted at Lighthaven (the venue owned by Lightcone Infrastructure, the organization that runs the LessWrong forum, which is the online home of "the rationalist community"). Manifest 2024 attracted negative attention due to the extreme racist views of many of the attendees.
I think the way you tried to make this distinction is not helpful and actually adds to the confusion. We need to distinguish two very different things:

1. The older, universal concept of rationality: using reason and evidence to form conclusions and think clearly.

2. The online and Bay Area-based "rationalist community" centered around LessWrong.
The online and Bay Area-based "rationalist community" (2) tends to believe it has especially good insight into the older, more universal concept of rationality (1), and that self-identified "rationalists" (2) are especially good at being rational or practicing rationality in that older, more universal sense (1). Are they?
No.
Calling yourselves "rationalists" and your movement or community "rationalism" is just a PR move, and a pretty annoying one at that. It's annoying partly because it's arrogant and partly because it leads to confusion like the confusion in this post, where the centuries-old and widely known concept of rationality (1) gets conflated with an eccentric, niche community (2). It makes ancient, universal terms like "rational" and "rationality" contested ground, with this small group of people with unusual views, many of them irrational, staking a claim on these words.
By analogy, this community could have called itself "the intelligence movement" or "the intelligence community". Its members could have self-identified as something like "intelligent people" or "aspirationally intelligent people". That would have been a little bit more transparently annoying and arrogant.
So, is Good Ventures or effective altruism ever going to disavow or distance itself from the ancient, universal concept of rationality (1)? No. Absolutely not. Never. That would be absurd.
Has Good Ventures disavowed or distanced itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know, but those comments from Dustin that I linked to above suggest that maybe this is the case.
Will effective altruism disavow or distance itself from LessWrong/Bay Area "rationalism" or "the rationalist community" (2)? I don't know. I want this to happen because I think "the rationalist community" (2) decreases the rationality (1) of effective altruism. The more influence the LessWrong/Bay Area "rationalist" subculture (2) has over effective altruism, the less I like effective altruism and the less I want to be a part of it.
If Dustin and Good Ventures are truly done with "the rationalist community" (2), that sounds like good news for Dustin, for Good Ventures, and probably for effective altruism. It's a small victory for rationality (1).
Unimportant note: I made some edits to this comment for clarity on 2025-05-03 at 09:37 UTC, specifically the part about Dustin Moskovitz. I was a bit confused trying to read my own comment nine days later, so I figured I could improve the clarity. The edits don't change the substance of this comment. You can see the previous version of the comment in the Wayback Machine, but don't bother; there's really no point.