
TL;DR: Define the line that, if crossed, would make you consider this issue one of the most pressing (if not the most pressing), or at least pressing enough to warrant some of your time.

I want to start with a clarification that I learned while writing this post. In the United States, charities with 501(c)(3) tax-exempt status are permitted to discuss policy and engage in advocacy, but are prohibited from participating in partisan political campaigns. I have also read the EA Forum post Politics on the EA Forums and I believe this post is consistent with those norms. I am not advocating for or against any party, candidate, or electoral campaign. The question I want to raise is broader: whether creeping authoritarianism, anti-fascism, and authoritarian lock-in should be discussed more explicitly in EA spaces as subjects of analysis and concern. Although my own experience is local to Canada, the question is clearly relevant to the current situation in the United States and globally.

I’m asking this sincerely: why isn’t anti-fascism a bigger topic at EAG events or on this forum? I was thinking about it while planning my trip to EAG San Francisco 2026. Should I be travelling to the US? If I do, should I organize a workshop on mutual aid? Why aren't the people who live there more engaged with this topic?  I’m not an academic. I run a landscape design-build firm. I’m not an AI safety researcher. I’m not a policy person. I’m someone who’s been pulled, pretty abruptly, into local anti-fascist work in Toronto and now can’t unsee the pattern.

At EAG, Toby Ord gave an opening talk that included a metaphor I can’t stop thinking about. This is my paraphrase from my notes, not a quote: he talked about “AGI” as a term that’s useful when you’re far away, like looking up at clouds and snowy peaks on a mountain. From below, you can point and say: “That’s where I’m headed.” But as you climb, your visibility gets worse. You enter a fog gradually. It tightens. Eventually, you might emerge above the clouds and see clearly again. The point that stuck for me is: at no moment can you put your finger on it and say, with confidence, “this is the exact step where I entered the cloud.” The boundary is not crisp as you approach.

I think creeping authoritarianism works the same way. “Authoritarianism” and “fascism” feel like obvious labels for obvious states of the world when you’re looking at history books. Up close, what you experience is a slow drift in what counts as normal: rhetoric becomes a little more dehumanizing, intimidation becomes a little more tolerated, policing becomes a little more comfortable clearing space for the wrong people, institutions get a little more captured, and the set of “respectable” policy options slides. Each step is individually arguable. Collectively, it’s a path.

If that mapping is even partly right, it creates a nasty decision problem. Waiting until it’s “obvious” is exactly what the fog punishes. By the time you can say it cleanly, you’re already deep in it. Which is why I keep coming back to the question: why does this barely show up as a topic of discussion at EAGs, never mind as something that gets real intellectual attention?

To steelman the obvious objections: I get why EA doesn’t default to this.

  • “Isn’t this too short-term?” EA is built around scope sensitivity, long-term consequences, and global priorities. A lot of political turbulence is noise.

  • “Won’t this drag EA into a partisan culture war?” If you’re trying to keep a community functional, you should be allergic to anything that reliably produces heat instead of light (heat being intensity, polarity, and emotion; light being actual clarity or useful insight).

  • “EA tools don’t apply.” It’s hard to run randomized controlled trials on “prevent fascism.” It’s hard to quantify tractability. It’s hard to build clean cost-effectiveness models of complex social dynamics.

  • “We have limited attention and money.” Even if this matters, maybe it’s not where EA’s comparative advantage lies.

All fair. But here’s the counterweight that keeps me stuck on this: the risk I’m pointing at is not ordinary political disagreement. It is that authoritarian norms may ratchet upward and become locked in before communities like ours decide they are urgent enough to warrant serious attention. The earlier you intervene in norm formation, the easier it is. The later you intervene, the more you’re not “persuading” so much as trying to unwind institutional and cultural cement. That difference matters for leverage.

Also, I’m not actually convinced this is “short-term” in the relevant sense. Some risks unfold quickly and then persist for a long time. AI is discussed at EAG partly because it could matter in 12 years, or 12 months. On the current trajectory, the United States could look meaningfully different in 12 months, too, in ways that compound and constrain everything else we care about. I’m not saying “stop working on AI.” I’m saying: if you’re already worried about authoritarian lock-in as an x-risk-ish shape, it’s strange to treat present-day political drift as categorically off-limits.

There’s another thing I want to name, because I felt it in my own body at EAG.

In one workshop (ironically, about engineering the Overton window), I felt slightly self-conscious about telling people at the table that I’m part of the Toronto anti-fascist movement. That discomfort is weird. It’s not like I was about to confess I run an underground raccoon-fighting ring. “Anti-fascist” should be a boring label. Yet it carries stigma, even among people who are otherwise very logical and very serious about preventing harm.

This is where I think the idea of collective illusion might be relevant. A simple definition: a collective illusion is when many people privately reject a norm or assumption but go along with it because they think everyone else accepts it. People stay silent, so everyone updates incorrectly, and the silence reinforces itself. I mentioned this topic at one table, and someone booked a one-on-one with me to basically say: "thank you for saying that. It’s my first EAG, and I was wondering why no one else was talking about it." That interaction made me suspect I’m not alone. It made me suspect there are more people who are uneasy, but we’re all doing the “I guess we don’t talk about that here” thing.

Another possible explanation is simple topic fatigue. For many attendees, authoritarian drift, democratic backsliding, and far-right mobilization have been in the news almost daily for over a year. When something is constantly present, it can start to feel less like an urgent coordination problem and more like ambient background noise, even if the underlying risk is still increasing. In that sense, topic fatigue may itself be part of the cloud: not a reason the risk is smaller, but one reason it becomes harder to see clearly.

And to be clear, I’m not saying EAs don’t notice authoritarian creep. I think everyone can agree that the drift is visible and uncomfortable. My claim is narrower: when does it become a normal topic in EA spaces, discussed with the same seriousness as other pressing risks? Not as partisan signalling, not as moral theatre, but as an objective problem that deserves analysis.

At this point, I should ground this in why it feels urgent to me, personally, in Canada.

Toronto is globally branded as a diverse city. The motto is “Diversity Our Strength.” And yet we are hosting rallies from groups like “Canada First” that read, to me, as MAGA-Canada anti-immigration politics with thinly veiled white supremacist vibes. I’m not going to litigate every claim about funding or coordination across borders here. The point is the local pattern: organizing that normalizes dehumanizing language, “mass deportation”-style rhetoric, and imported “ICE” aesthetics as if it’s just another spicy policy preference.

The first action I went to was countering that kind of rally. And what activated me wasn’t even the rally itself. It was watching police use excessive force to clear a path through peaceful counter-protesters so the group could march. For me, my “line in the sand” was crossed when I watched an officer hit an elderly woman in the face with the handle of his mountain bike while forcing through the crowd. That was the moment where something snapped from “this is concerning” to “this is not okay, and I am now involved.”

This is where the paradox of tolerance comes into play. The basic idea (often attributed to Karl Popper) is that a tolerant society can’t afford to be indefinitely tolerant of movements that aim to destroy tolerance, because if it does, tolerance gets extinguished. You can debate the boundaries of that principle, and you should. But the concept names a real governance problem: “free speech” and “the right to gather” are not the only values in play when the content is intimidation, dehumanization, or organizing for discriminatory power. A diverse democracy has to defend the conditions that allow diversity and democracy to exist. Pretending otherwise is a category error, not neutrality.

Now, I want to make a point that might sound like false humility, but I mean it straightforwardly.

I don’t know what the right solution is.

I know what I’m doing: I show up. I’m a body on the other side of the argument. I bring hot apple cider to people in the cold. I’ll wear an inflatable frog costume and stand there if that’s useful. I’m willing to be visibly, inconveniently present. That’s not a grand strategy. It’s a human one. But I don’t think “more bodies” is the full answer, and I’m not convinced my current actions are the best use of marginal effort.

Here’s the thing, though: I also don’t believe the people in this community are stuck. I think if a bunch of the so-called galaxy brains in EA spent a serious hour on this, they’d generate better ideas than I’m capable of generating. Not because my brain is smooth, but because they’re trained to reason about complex risk, coordination problems, and leverage.

So my ask is not “everyone become an activist.” My ask is: can we treat this as a real problem that deserves real thinking?

One helpful frame I’ve found recently is mutual aid. I’m thinking specifically of Mutual Aid: Building Solidarity During This Crisis (and the Next) by Dean Spade. The parts that feel relevant here are less about ideology and more about practical categories of action: community support, resource sharing, coordination, disaster relief, horizontal networks, and building systems that function when institutions fail or become hostile.

If you want something concrete that fits EA strengths, I think resiliency building is the obvious bridge: designing resilience systems that are scalable, repeatable, and resource-efficient. This community is unusually good at systems thinking. It’s unusually good at coordination design. It’s unusually good at asking “what’s the highest-leverage intervention?” That seems applicable to building mutual aid capacity and civic resilience in ways that reduce the surface area for intimidation and normalize pro-social coordination before crises intensify.

And this circles back to Ord’s cloud metaphor. If the fog makes thresholds hard to perceive in real time, then relying on vibes is a bad plan. Which is why I want to end with a precise prompt.

Define your line in the sand.

Not “when it gets bad.” Not “when democracy is threatened.” I mean: what specific, observable shift would cause you to reallocate attention, time, or resources toward this problem? What would have to happen in your country, your city, your institution, your workplace, your community? What would be the trigger that makes you say: This is now among my most pressing priorities? I think it’s most important for you to know it for yourself, but please tell us in the comments if you are feeling brave. 

Then, if you’re willing, do the second step: consider what expertise you have and what kind of contribution you think is plausibly helpful. Policy analysis? Communications? Community-building? Legal work? Measurement? Coordination platforms? Funding? Security for vulnerable groups? Something else? I was considering asking people to post their answers in the comments, but honestly, we need someone to build a better database than the comments on this post. I am part of a dozen disparate Signal chats loosely connected by this topic, but the organization is pretty dismal compared to what could be done.

My claim is not “EA must become an anti-fascist movement.” My claim is narrower and, I think, more EA-compatible:

  • If authoritarian drift has cloud-like thresholds, waiting for certainty is a mistake.

  • If lock-in is the danger, early intervention is disproportionately valuable.

  • If stigma and collective illusion suppress discussion, we should notice that dynamic and correct for it.

  • If we don’t know the best action, that’s exactly when you want a community of unusually capable reasoners to spend time on the question.

So: where’s your line? And what would you actually do when it’s crossed?

 

PS: This was my first post, squeaking in just as the Draft Amnesty Week comes to an end. This was intentional. Please be kind. <3 

PPS: Someone informed me while writing this that there was an unofficial satellite event called "Democracy Unconference" that happened during EAG SF 2026, though I was unaware of it. It was hard to find, partly because it was kept unofficial due to its political nature and some of the issues raised earlier in my post.

 

Some interesting/connected links:

https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism

https://forum.effectivealtruism.org/posts/eJNH2CikC4scTsqYs/prediction-markets-and-many-experts-think-authoritarian

https://forum.effectivealtruism.org/posts/kmx3rKh2K4ANwMqpW/destabilization-of-the-united-states-the-top-x-factor-ea?view=postCommentsNew&postId=kmx3rKh2K4ANwMqpW

https://www.metaculus.com/questions/36389/us-no-longer-a-democracy-by-2030/

https://manifold.markets/Siebe/if-trump-is-elected-will-the-us-sti

Comments

Have you looked into Power for Democracies? https://powerfordemocracies.org - EA does in fact evaluate anti-fascist interventions.

Thanks for posting this, it made me think.

Here are my thoughts:
• Authoritarianism is a real risk. I think this has been clear for a while, but I've had multiple upwards updates.
• I agree that it's possible to analyse the issue of fascism in a non-partisan way. Unfortunately, most 'anti-fascist' work focuses on only one side of the political spectrum. I think this is a mistake: 'anti-fascist activists' are often just as fascist as anyone on the right, and it's quite plausible that if the right loses the next election, then instead of aiming to restore frayed norms and institutions, folks on the left will decide that the only option is to fight fire with fire.
• The mutual aid suggestion comes off as really strange to me. The argument for mutual aid as a way of building the EA community feels much stronger than the argument for engaging in mutual aid as a way to fight fascism, especially if you believe fascism is an urgent threat here and now rather than a possibility that we need to prepare for in case it happens at some undefined point in the future.
• "Mass deportation" really feels like a distinct question from fascism - it's not really fascism if the government is just enforcing the law and there are proper procedural safeguards; on the other hand, even small-scale deportations can be linked to fascism if they're being leveraged cynically to chill speech.

Such a thought-provoking post. Each of us has to confront this at some soul level. Thank you Alex for putting this idea down.

Thanks for writing. 

I'm having a hard time understanding the mutual aid example, but maybe that stems from my relative lack of knowledge in that area. Wikipedia tells me that "[m]utual aid groups are distinct in their drive to flatten the hierarchy, searching for collective consensus decision-making across participating people rather than placing leadership within a closed executive team." But I expect that one of the effects of such strong decentralized/diffused governance and structure is that it would be very hard for a small group of people to have great leverage. Stated differently, I sense some tension between a focus on "scalable" and "repeatable" operations and being controlled by / responsive to the local community. I'm not suggesting that there is no value there, but I would associate scalable, repeatable operations more with top-down governance.

Against that, I think we have to weigh the risk that the appearance and/or reality of increasing politicization would make it harder for other EA cause areas to achieve their objectives.
