Forum note: this post embodies the spirit of a new project I—Matt Reardon—am starting to reinvigorate in-person EA communities. I'm hiring a co-founder and I'm interested in meeting others who want to collaborate on this vision. My DMs are open.
Sequence thinkers will be forgiven and rejoice
In some fleeting moments lately, I catch glimpses of 2022—the year Effective Altruism’s ascent seemed unstoppable. Universally positive (if limited) press, big groups at top universities across the world, a young EA entrepreneur who was the darling of the financial industry, the first EA candidate for Congress drawing enormous financial and personnel resources to his campaign. More important than those public-facing facts, though, was a feeling on the ground that if you had a good grip on things and a plausible idea, you would get funding, go to the Bay, and make it happen.
I actually regret how slow I was to see it at the time. You could just do things, and yes, that’s always been true, but at that time, you didn’t even have excuses. In my first year as an advisor at 80,000 Hours, I allowed people a lot of excuses. I think this came from some emphatically non-prophetic feeling of “this is too good to be true.”
And, of course, it was. But now we’re here again: buoyed by another darling startup whose founders are committed to giving away most of their wealth to things that will do the most good™. Some of the same risks are present too.[1] The biggest difference, though, is how EAs now conceive of themselves and their movement. FTX proudly brandished its heart-and-lightbulb in one of the most brazen acts of moral self-licensure in history. Anthropic hems and haws about its relationship to scope-sensitive do-gooding.
But the future belongs to those who mince words less. I would like my friends at Anthropic and my EAs across the board to win the future. And while I can’t predict the *whole* future, I do predict that Effective Altruism’s flag will fly high and proud in the next few years. New projects will launch, ambition will again be the coin of the realm, and no one will be able to deny the underlying reality that a community formed of a commitment to understanding and bettering the world at scale is the reason why. The only decision to be made is whether this will happen in spite of us or because of us.
The Retreat
The case for “in spite of” is probably straightforward to most readers of this blog. The dizzying heights we reached so quickly in the FTX era set us up for an even more disorienting fall. EA’s decline into mild disrepute is mostly a consequence of EAs themselves playing into that catastrophizing narrative and refusing to hit back at their critics out of shame. While I think this is basically correct, a more nuanced account reveals why we might struggle to meet the present ascent as well as we could.
The FTX collapse was not just a lightning bolt of humiliation that shocked EAs into shame; it also accelerated two pre-existing trends in the movement: the professionalization of EA and the growing consensus that addressing transformative AI dominated all other cause areas in expected value terms.
Professionalization entails a lot of things. For one, when you’re the tiny germ of a new idea in the head of a few undergrads in 2009, your movement is about you, your friends, and how your call to action can be made consistent with everyday life, i.e., try to be smart about how you donate 10-50% of your income. When you draw 11-figure funding and have shored up consensus about some seriously novel, globe-spanning neglected problems, you’ll need to create some institutions and build some alliances. Now, you’re no longer asking people to modify their normal lives in light of your weird ideas; you’re modifying your weird ideas to meet the expectations of their competent, would-be implementers. At a minimum, you’ll want some nice offices, salaries, and HR policies, but the urgency of drawing the best possible implementers means you also need to make it clear that the subject matter they work on is the only exotic requirement: no pledges, no veganism, no evidential decision theory.
The consensus on AI hardly needs elaboration, but I do think a neglected frame here is noticing that the consensus is a triumph of something akin to sequence thinking. You achieve your ultimate goal by succeeding at each logical step leading up to it, so you focus on whichever step is in front of you. Want to do the most good? Just work on the most important problem. Want to get more of that work done? Get more people to do it. More people do it when you don’t get bogged down in reasons and morals and world models.
These two trends complemented each other too. As consensus around AI risk grew, so did the opportunity to be more legible to professional types. We don’t have to start with your empathy for the homeless man on the street, do the shallow pond experiment, sell you on RCTs, raise the animal welfare question, and then talk about the numbers at stake in nuclear near-misses to get you bought in on AI risk if we just need you to balance our books. We’re an interest group like many others, we think this newfangled AI might be bad somehow and it’s a really hot topic right now. Interested in helping out? We pay all right for a nonprofit.
I concede that Occam’s razor is against me here.
The asceticism, veganism, group houses, and polycules sat in uneasy tension with that pitch. There was also a reasonable thought that when you start with first-principles philosophy, it’s hard to avoid people forming opinions on all kinds of things and breaking off into still-associated subgroups who present at least some risk of reputational contagion. So quite sensibly, you diversify your approach and see how promising it looks to have a clean, professional, not-very-ideological, arguably-non-committal AI safety “field” as opposed to an EA “community.”
And I want to be clear that although this isn’t my cup of tea, it is an emphatically worthwhile experiment. Relying on everyone to come around to your idiosyncratic world view is not a strategy. Everyone is not going to just. At some point, you need to play nice with people who see things very differently than you and make them feel fully respected if you want to accomplish big things. Professionalism and legibility are fine ways to do that. The worry is that you elevate means over ends. In your rush to be pragmatic and accumulate respect and power in service of a better world, you neglect the better-world part, banking on the notion that you locked it in ten years ago.
What Greatness Demands
My view is that a vision of a better world and the principles that cultivate it are really hard to pin down. They require almost all hands on deck all the time. For this, I am really glad that Forethought exists and that Will MacAskill—who will never escape the Effective Altruist label so long as he lives—is near the helm.
The issue is that while we were feeling out professionalization and AI über alles, FTX happened, and what was an experiment became the only lifeboat in a storm. Nearly all hands left the deck. Even Forethought is framed as an AI project, though I’m heartened that they draw so much from history and prioritize helping society do moral reflection.
My real issue is that EA in all its forms is still so small. MacAskill shouldn’t be near the helm. He and all the other members of the original EA student group should expect some edge from having been around as the movement grew and from being selected for the ambition to make the movement happen at all, but the way we’ll know EA is really winning is when the Oxford crew is wholly eclipsed by a new generation of even more ambitious and clear-thinking altruists.
And we won’t find them if our exclusive offer is cushy, well-scoped research roles at buttoned-up think tanks. To lead this movement, to safeguard the future for our highest values, we need to ask people to own the whole outcome: understand things from first principles, build and articulate their own world models. Tell us we’re wrong. Compel us to see the greater good and find the better solutions to create a better world.
You can’t instruct people, or even hire them, to surpass you like that if you regard yourself as holding all the cards and doling out your specific wisdom on specific topics. And to be honest, we know that compute governance, scary demos, and RSPs (gulp) are all pretty weaksauce. Maybe these being the only tools we’ve dredged up is symptomatic of the fact that lots of people in this space can’t even articulate the central worry that well.
Perhaps there are no better ideas out there. We’re going with what we’ve got. Time is short. Understandable. My pitch is that we invest more in giving people the whole picture—from the beginning. Why are we here? What’s our best picture of the good world? How did *we* locate the problems we’re most worried about now?
When I ask the most admirable and impressive people trying to save the world at scale how they got into this line of work and what pushes them to the laudable (but so far insufficient) heights they’ve reached, the most common story is the classic EA one. “I wanted to do good; I wanted to understand how to do good.” They ran into Singer. They read old 80k. They spent time with the abstractions and debated Pascal’s Mugging. They found the best tools and they built their own models. Now they’re leaders who rely on those models and the judgment they gained building them to make a hundred decisions a day and steer their projects to value in a hostile and confusing world. Most of all, they believe in the tools more than the conclusions.
The best of us are all relatively heads-down on last-mile projects, though.[2] This might be crunch time. In expectation, this is the most essential work. If no one is doing this and gaining traction, why would anyone feel compelled to sign on? The uneasy de-emphasis of Effective Altruism qua Effective Altruism is holding well enough, and bringing EA fully back into the arena is several full-time jobs.
We’re still small. Retreating from FTX arguably made us smaller and less able to bounce back. The baseline reality, though, is that great people still keep the good in their hearts, and despite their full plates they’re more and more willing to say so. It has always looked silly to deny it. And more than anything, there’s nothing here worth denying.
Effective Altruism is Good and Right
The point that I began to bleed into above is that sequence thinking can solve specific problems and make smart bets in context, but it doesn’t make strong people generally. Taking up the task to understand and act in the world does. That task is deeply personal and inherently open-ended. One’s orientation towards it should not be “what’s in the news lately?” or “how do I get a good job out of this?”
The orientation you want is: “what kind of person do I want to be?” I suspect that even asking the question immediately pushes you towards some tentative answers. You want to do good. You want to help others. You want to be fair-minded about that, maybe even impartial. You want to do more good rather than less. You want to believe true things and understand the world. To modern ears, it can sound cringe and over-earnest, but what other answer can you give?
And the reason EA will be great again is that the best EAs, of which there are many, all want to be EAs. It’s in their bones. If they weren’t staring down the barrel of an intelligence explosion, or if the prospect of transformative AI were somehow to disappear with relative certainty, they would open their *Animal Welfare for Dummies* books the next day. Or they’d roll up their sleeves to defend America. Some of them are doing these things now. The point is that the EAs are ready to do what’s right with no fuss and no ego. Believe it or not, that’s what AI safety work entailed just seven or eight years ago.
I know this because I know them. And to know them is to see the earnestness and care they put into understanding our situation and letting their views be guided by sturdy conceptions of the good, deep humility, and above all regard for what’s actually true rather than what sounds good. There’s a hunger for that in people: somewhere they can do real thinking about what really matters.
Values-agnostic arguments for AI risk surely have their place in the broad sweep of public discourse. And it may even be a big place, jockeying among all the other object-level, point-scoring rhetoric out there, but a large class of special people will always have an allergy to it. They want to understand how this fits in with everything else happening in the world, how this fits in with their highest ideals and what they want their lives to be about. The METR graphs just don’t offer that. Effective Altruism does.
Comments
"And the reason EA will be great again is that the best EAs, of which there are many, all want to be EAs. It’s in their bones. If they weren’t staring down the barrel of an intelligence explosion, or if the prospect of transformative AI were somehow to disappear with relative certainty, they would open their *Animal Welfare for Dummies* books the next day. Or they’d roll up their sleeves to defend America. Some of them are doing these things now. The point is that the EAs are ready to do what’s right with no fuss and no ego. Believe it or not, that’s what AI safety work entailed just seven or eight years ago."
<3
"The reader will note that I am not the best of us."
The reader notes that one day you'll have to stop pretending, and grudgingly admit that you are.
"the way we’ll know EA is really winning is when the Oxford crew is wholly eclipsed by a new generation of even more ambitious and clear-thinking altruists"
Amen
You have such a way with words.
I thought you might catch that last one. I hope you took it personally.
Reasonable if you don't want to publicly go into internecine tensions, but the obvious question seems to be how you see this relating to principles-first EA, which is, on its face, a similar idea.