This is my personal take, not an organizational one. Originally written May 2025, revived for Draft Amnesty Week.
When I discovered the beginnings of EA, it felt like coming home. I had finally found a group of people who cared as much as I did about saving lives.
When I discovered rationality not long after, it felt new. The intensity of LessWrong was both off-putting and fascinating to me, and I read a lot of the site in the summer of 2011. I had studied sociology and was used to thinking in terms of ethics and emotions, but not in terms of statistics or economics. LessWrongers were picky and disagreeable, but I learned reasoning skills from them.
Before EA even had a name, LessWrong was one of the best places to talk about ideas now associated with EA. My first post in 2011 was about what would later be called earning to give.
I attended LessWrong meetups in Boston because it was fun to toss ideas around, but I hosted EA meetups because I wanted to strengthen a movement I believed was saving lives. After having kids I continued to prioritize EA, but I didn’t stay involved in rationality as a community.
It did continue to influence how I thought, though. If I wrote something that didn’t quite feel right, I’d imagine how a rationalist critic would respond. Often this steered me away from sloppy reasoning.
……
At their best, the two movements have in common a willingness to dig into weird possibilities and to take ideas seriously. How literally should you take expected value calculations? What if AI is much less predictable than we hope? Does this research study really say what the headlines report?
I’ve also heard plenty of criticisms of both sides:
- EAs have adopted key ideas (existential risk, astronomical waste) from people in weird corners of the internet while holding their noses at those same people and spaces
- Rationalists prioritize ideas and navel-gazing over real-world impact, and squander their weirdness points
- EA cares too little for accuracy or what’s really important, and too much for image and respectability
- Rationality tolerates or encourages repulsive ideas and behaviors
……
So how do these movements currently relate to each other? How should they relate?
Among individuals, there’s overlap, but most people who identify with one don’t identify with the other.
- A quarter of respondents to the 2024 LessWrong survey identify as EAs.
- 7% of respondents to the 2024 EA Survey learned about EA via LessWrong; that's more than the number who came via GiveWell and Giving What We Can combined.
- Some cities have basically no overlap between the two scenes, and others have a lot, especially the Bay Area. (In general I think the Bay Area is shaped by having an unusual number of subcultures that cross-pollinate each other.)
Personally, EA is my main focus, but rationality continues to be one of the spaces I draw value from. I want to stay open to good ideas and techniques from thinkers in the rationality space, even if I object to other ideas or actions from the same thinkers. Some ideas I’ve valued:
- “Pulling the rope sideways,” popularizing prediction markets, and takes on how humans fool ourselves from Robin Hanson. I find his writing infuriating and illuminating by turns.
- Goal factoring and other techniques from CFAR; I don’t endorse everything that ever came from their org, but they did spread some useful tools. I think goal factoring may in turn have been inspired by Geoff Anders’ goal mapping.
The fact that some people in EA and rationality get value from the other space doesn’t mean everyone has to do the same. If a space gives you the creeps, you don’t have to engage with it.
My best guess is that organizations and projects in EA and rationalist spaces are best off with some freedom from each other, so they can pursue projects in their different ways.