
TL;DR: Define the line that, if crossed, would make you consider this one of the most pressing issues (if not the most pressing), or at least pressing enough to warrant some of your time.

I want to start with a clarification that I learned while writing this post. In the United States, charities with 501(c)(3) tax-exempt status are permitted to discuss policy and engage in advocacy, but are prohibited from participating in partisan political campaigns. I have also read the EA Forum post Politics on the EA Forums and I believe this post is consistent with those norms. I am not advocating for or against any party, candidate, or electoral campaign. The question I want to raise is broader: whether creeping authoritarianism, anti-fascism, and authoritarian lock-in should be discussed more explicitly in EA spaces as subjects of analysis and concern. Although my own experience is local to me in Canada, the question is clearly relevant to the current situation in the United States and globally.

I’m asking this sincerely: why isn’t anti-fascism a bigger topic at EAG events or on this forum? I was thinking about it while planning my trip to EAG San Francisco 2026. Should I be travelling to the US? If I do, should I organize a workshop on mutual aid? Why aren't the people who live there more engaged with this topic?  I’m not an academic. I run a landscape design-build firm. I’m not an AI safety researcher. I’m not a policy person. I’m someone who’s been pulled, pretty abruptly, into local anti-fascist work in Toronto and now can’t unsee the pattern.

At EAG, Toby Ord gave an opening talk that included a metaphor I can’t stop thinking about. This is my paraphrase from my notes, not a quote: he talked about “AGI” as a term that’s useful when you’re far away, like looking up at clouds and snowy peaks on a mountain. From below, you can point and say: “That’s where I’m headed.” But as you climb, your visibility gets worse. You enter a fog gradually. It tightens. Eventually, you might emerge above the clouds and see clearly again. The point that stuck for me is: at no moment can you put your finger on it and say, with confidence, “this is the exact step where I entered the cloud.” The boundary is not crisp as you approach.

I think creeping authoritarianism works the same way. “Authoritarianism” and “fascism” feel like obvious labels for obvious states of the world when you’re looking at history books. Up close, what you experience is a slow drift in what counts as normal: rhetoric becomes a little more dehumanizing, intimidation becomes a little more tolerated, policing becomes a little more comfortable clearing space for the wrong people, institutions get a little more captured, and the set of “respectable” policy options slides. Each step is individually arguable. Collectively, it’s a path.

If that mapping is even partly right, it creates a nasty decision problem. Waiting until it’s “obvious” is exactly what the fog punishes. By the time you can say it cleanly, you’re already deep in it. Which is why I keep coming back to the question: why does this barely show up as a topic of discussion at EAGs, never mind as something that gets real intellectual attention?

To steelman the obvious objections: I get why EA doesn’t default to this.

  • “Isn’t this too short-term?” EA is built around scope sensitivity, long-term consequences, and global priorities. A lot of political turbulence is noise.

  • “Won’t this drag EA into a partisan culture war?” If you’re trying to keep a community functional, you should be allergic to anything that reliably produces heat instead of light: heat being intensity, polarization, and emotion; light being actual clarity or useful insight.

  • “EA tools don’t apply.” It’s hard to run randomized controlled trials on “prevent fascism.” It’s hard to quantify tractability. It’s hard to build clean cost-effectiveness models of complex social dynamics.

  • “We have limited attention and money.” Even if this matters, maybe it’s not where EA’s comparative advantage lies.

All fair. But here’s the counterweight that keeps me stuck on this: the risk I’m pointing at is not ordinary political disagreement. It is that authoritarian norms may ratchet upward and become locked in before communities like ours decide they are urgent enough to warrant serious attention. The earlier you intervene in norm formation, the easier it is. The later you intervene, the more you’re not “persuading” so much as trying to unwind institutional and cultural cement. That difference matters for leverage.

Also, I’m not actually convinced this is “short-term” in the relevant sense. Some risks unfold quickly and then persist for a long time. AI is discussed at EAG partly because it could matter in 12 years, or 12 months. On the current trajectory, the United States could look meaningfully different in 12 months, too, in ways that compound and constrain everything else we care about. I’m not saying “stop working on AI.” I’m saying: if you’re already worried about authoritarian lock-in as an x-risk-ish shape, it’s strange to treat present-day political drift as categorically off-limits.

There’s another thing I want to name, because I felt it in my own body at EAG.

In one workshop (ironically, about engineering the Overton window), I felt slightly self-conscious about telling people at the table that I’m part of the Toronto anti-fascist movement. That discomfort is weird. It’s not like I was about to confess I run an underground raccoon-fighting ring. “Anti-fascist” should be a boring label. Yet it carries stigma, even among people who are otherwise very logical and very serious about preventing harm.

This is where I think the idea of collective illusion might be relevant. A simple definition: a collective illusion is when many people privately reject a norm or assumption but go along with it because they think everyone else accepts it. People stay silent, so everyone updates incorrectly, and the silence reinforces itself. I mentioned this topic at one table, and someone booked a one-on-one with me to basically say: "thank you for saying that. It’s my first EAG, and I was wondering why no one else was talking about it." That interaction made me suspect I’m not alone. It made me suspect there are more people who are uneasy, but we’re all doing the “I guess we don’t talk about that here” thing.

Another possible explanation is simple topic fatigue. For many attendees, authoritarian drift, democratic backsliding, and far-right mobilization have been in the news almost daily for over a year. When something is constantly present, it can start to feel less like an urgent coordination problem and more like ambient background noise, even if the underlying risk is still increasing. In that sense, topic fatigue may itself be part of the cloud: not a reason the risk is smaller, but one reason it becomes harder to see clearly.

And to be clear, I’m not saying EAs don’t notice authoritarian creep. I think everyone can agree that the drift is visible and uncomfortable. My claim is narrower: when does it become a normal topic in EA spaces, discussed with the same seriousness as other pressing risks? Not as partisan signalling, not as moral theatre, but as an objective problem that deserves analysis.

At this point, I should ground this in why it feels urgent to me, personally, in Canada.

Toronto is globally branded as a diverse city. The motto is “Diversity Our Strength.” And yet we are hosting rallies from groups like “Canada First” that read, to me, as MAGA-Canada anti-immigration politics with thinly veiled white supremacist vibes. I’m not going to litigate every claim about funding or coordination across borders here. The point is the local pattern: organizing that normalizes dehumanizing language, “mass deportation”-style rhetoric, and imported “ICE” aesthetics as if it’s just another spicy policy preference.

The first action I went to was countering that kind of rally. And what activated me wasn’t even the rally itself. It was watching police use excessive force to clear a path through peaceful counter-protesters so the group could march. For me, my “line in the sand” was crossed when I watched an officer hit an elderly woman in the face with the handle of his mountain bike while forcing through the crowd. That was the moment where something snapped from “this is concerning” to “this is not okay, and I am now involved.”

This is where the paradox of tolerance comes into play. The basic idea (often attributed to Karl Popper) is that a tolerant society can’t afford to be indefinitely tolerant of movements that aim to destroy tolerance, because if it does, tolerance gets extinguished. You can debate the boundaries of that principle, and you should. But the concept names a real governance problem: “free speech” and “the right to gather” are not the only values in play when the content is intimidation, dehumanization, or organizing for discriminatory power. A diverse democracy has to defend the conditions that allow diversity and democracy to exist. Pretending otherwise is a category error, not neutrality.

Now, I want to make a point that might sound like false humility, but I mean it straightforwardly.

I don’t know what the right solution is.

I know what I’m doing: I show up. I’m a body on the other side of the argument. I bring hot apple cider to people in the cold. I’ll wear an inflatable frog costume and stand there if that’s useful. I’m willing to be visibly, inconveniently present. That’s not a grand strategy. It’s a human one. But I don’t think “more bodies” is the full answer, and I’m not convinced my current actions are the best use of marginal effort.

Here’s the thing, though: I also don’t believe the people in this community are stuck. I think if a bunch of the so-called galaxy brains in EA spent a serious hour on this, they’d generate better ideas than I’m capable of generating. Not because my brain is smooth, but because they’re trained to reason about complex risk, coordination problems, and leverage.

So my ask is not “everyone become an activist.” My ask is: “Can we treat this as a real problem that deserves real thinking?”

One helpful frame I’ve found recently is mutual aid. I’m thinking specifically of Mutual Aid: Building Solidarity During This Crisis (and the Next) by Dean Spade. The parts that feel relevant here are less about ideology and more about practical categories of action: community support, resource sharing, coordination, disaster relief, horizontal networks, and building systems that function when institutions fail or become hostile.

If you want something concrete that fits EA strengths, I think resiliency building is the obvious bridge: designing resilience systems that are scalable, repeatable, and resource-efficient. This community is unusually good at systems thinking. It’s unusually good at coordination design. It’s unusually good at asking “what’s the highest-leverage intervention?” That seems applicable to building mutual aid capacity and civic resilience in ways that reduce the surface area for intimidation and normalize pro-social coordination before crises intensify.

And this circles back to Ord’s cloud metaphor. If the fog makes thresholds hard to perceive in real time, then relying on vibes is a bad plan. Which is why I want to end with a precise prompt.

Define your line in the sand.

Not “when it gets bad.” Not “when democracy is threatened.” I mean: what specific, observable shift would cause you to reallocate attention, time, or resources toward this problem? What would have to happen in your country, your city, your institution, your workplace, your community? What would be the trigger that makes you say: This is now among my most pressing priorities? I think it’s most important for you to know it for yourself, but please tell us in the comments if you are feeling brave. 

Then, if you’re willing, do the second step: consider what expertise you have and what kind of contribution you think is plausibly helpful. Policy analysis? Communications? Community-building? Legal work? Measurement? Coordination platforms? Funding? Security for vulnerable groups? Something else? I considered asking people to post their answers in the comments, but honestly, we need someone to build a better database than the comments on this post. I am part of a dozen disparate Signal chats loosely connected by this topic, but the organization is pretty dismal compared to what could be done.

My claim is not “EA must become an anti-fascist movement.” My claim is narrower and, I think, more EA-compatible:

  • If authoritarian drift has cloud-like thresholds, waiting for certainty is a mistake.

  • If lock-in is the danger, early intervention is disproportionately valuable.

  • If stigma and collective illusion suppress discussion, we should notice that dynamic and correct for it.

  • If we don’t know the best action, that’s exactly when you want a community of unusually capable reasoners to spend time on the question.

So: where’s your line? And what would you actually do when it’s crossed?

 

PS: This was my first post, squeaking in just as the Draft Amnesty Week comes to an end. This was intentional. Please be kind. <3 

PPS: Someone informed me while writing this that there was an unofficial satellite event called "Democracy Unconference" during EAG SF 2026, though I was unaware of it. It was hard to find, partly because it was unofficial due to its political nature and some of the issues raised earlier in my post.

 

Some interesting/connected links:

https://forum.effectivealtruism.org/posts/EDBQPT65XJsgszwmL/long-term-risks-from-ideological-fanaticism

https://forum.effectivealtruism.org/posts/eJNH2CikC4scTsqYs/prediction-markets-and-many-experts-think-authoritarian

https://forum.effectivealtruism.org/posts/kmx3rKh2K4ANwMqpW/destabilization-of-the-united-states-the-top-x-factor-ea?view=postCommentsNew&postId=kmx3rKh2K4ANwMqpW

https://www.metaculus.com/questions/36389/us-no-longer-a-democracy-by-2030/

https://manifold.markets/Siebe/if-trump-is-elected-will-the-us-sti

Comments (23)
Ward

CEA had a policy of not allowing any events / talks on democracy at EAG SF. Their stated rationale was the 501c3 stuff, which as you note should in fact allow for such events. Instead, I think they're afraid of drawing the ire of the administration.

Some colleagues and I organized a non-CEA-affiliated democracy side event to EAG SF, which was quite well attended. Sorry if we didn't reach out to you! We were DMing people by hand based on who seemed relevant in the attendee spreadsheet. (We decided to do this 2 weeks in advance; we'll be better-prepared if we do it again :)

My current stance is we should accept that CEA wants to be highly risk averse here, and have a parallel set of orgs that do more political risk-taking. We should also understand that there's a lot of risk management going on behind the scenes, and not trust public EA comms to represent how worried people actually are about US democracy.

Assuming that this unspoken risk assessment and mitigation is occurring, should it be made more transparent to the EA community? In particular, the stated reason for disallowing certain types of discourse (which might otherwise fit well within EA norms) being false seems like a rather serious and significant rupture of the stated norms and goals of the community. 

Yeah, I'm pretty upset with CEA for being both cowardly and nontransparent here. It's tricky because of course part of what they're hoping for is just to fly under the radar. But I'd respect them more for being honest about being afraid, and it would be more informative for the community if they did so.

FWIW, I spoke to someone at CEA who wasn't directly involved in policy-setting but thought it was more likely that their policies came from scrambling rather than strategy. I'm confused by this, since I think their line has changed over time. My guess is there's some blurry intermediate between "thinking through a policy carefully" and "doing reflexive risk avoidance" that's where they're sitting.

That seems like a very flimsy argument to support their pretty questionable policy.

I don't see any mention here or in the comments about neglectedness, which seems like the most obvious reason for why EA isn't a good fit here. There are enormous, well-funded, long-established ecosystems dedicated to exactly this sort of thing - civil liberties organisations, legal defence funds, democratic governance NGOs, journalism, academic institutions, unions, anti-fascist networks etc.

I think there's some argument that the EA mindset could be applied to finding tractable interventions here but ultimately I just think there are more pressing problems that need our attention.

Low neglectedness can be outweighed by high importance or tractability. The hard part is being confident about tractability and room for more funding. I think one can make space for importance-focused efforts despite this uncertainty, especially with the consideration that rival actors are incentivized to increase it.

EA insights could be a valuable complement to existing ecosystems. Precisely because large political organizations have established roles to maintain, they may have operational or epistemic limitations. It's easy to draw analogies with large health charities that have received EA critique for marginal impact.

I have a bunch of thoughts, but I'll give just one: in order for "anti-fascism" to make sense as a guiding principle, we would need some kind of agreement about what fascism is and what alternative we're proposing. Without a solid definition of what we're aiming for or why, we risk becoming ineffective and alienating a ton of people. Without putting too fine a point on it, organizations that call themselves antifa / anti-fascist generally attract a lot of far-leftists, communists, and anarchists, which scares off most people. There also tends to be a lot of scope creep (e.g. saying that all cops are fascist bastards, or labelling center-right politicians like Ronald Reagan as fascists).

That's why I think it's generally better to guide yourself based on what you support rather than what you oppose. E.g., if you're worried about rule of law, you should directly advocate for rule of law. If you're worried about populist movements causing worse governance by taking power away from knowledgeable experts, then you should directly advocate for more meritocracy in government staffing decisions.

I realize that those are both wonky things to focus on, but that's kind of the point. EA's comparative advantage is that we're a small group of intelligent, committed people. We can accomplish a lot of things in the boring world of procedures, outside the limelight. When it comes to anti-fascist street actions / mutual aid, even if those tactics work (which I'm skeptical they do), EAs simply don't have the numbers for it.

As others have pointed out, Power for Democracies is an org with roots in the EA community that is working on this. Also, I would argue that 80k's current second top issue, extreme power concentration, has a lot of overlap with what you're talking about here. Furthermore, much of the longtermist-inspired work that focuses not just on surviving but on flourishing addresses this issue, but mostly in a very theoretical sense.

But to answer your question directly, and putting to one side CEA's concerns about 501c3 stuff, I think it's not currently more of a concern because 1) maybe it's not the most neglected thing 2) maybe it doesn't feel very tractable and 3) perhaps most importantly, EA isn't doing much cause prioritisation work these days. 

However, I would note that it feels like there's a bit of a vibe shift on this issue, and more and more EAs are prioritising work in this area. 

Edit: I've got more time, so I want to add more detail.

On Power for Democracies

Power for Democracies identifies giving opportunities to support democracy. Co-founded by a senior advisor at Effektiv Spenden, it's basically trying to be the GiveWell or GivingGreen of democracy. 

Their recently concluded first project, Effectively Countering Authoritarian Playbooks, ran two parallel workstreams: prioritising countries and prioritising tactics.

For country selection, they built a framework around four dimensions — Importance, Threat, Tractability, and Opportunity — combining quantitative data from sources like V-Dem and Civicus with qualitative country profiles and expert consensus processes. 

They were looking for places where democracy is genuinely under pressure but where civil society intervention is still tractable. 

From a global pool, they selected seven priority countries: Hungary, Turkey, Italy, Indonesia, Poland, Argentina, and the US.

On the tactics side, they scanned roughly 35 common civil society approaches — strategic litigation, voter mobilisation, investigative journalism, anti-corruption lobbying, and so on — scoring each for quality of evidence and theoretical grounding. 

They then matched threats to tactics for each country, wrote deep-dive reports informed by expert interviews, and evaluated specific organisations using structured rubrics, independent researcher scoring, and funder reference checks.

They have three initial recommendations:

  • Freedom2Vote — voter mobilisation targeting underrepresented voters in the US
  • Media and Law Studies Association — legal defence for journalists facing prosecution in Turkey (~150 cases/year)
  • CELS — human rights and democracy advocacy through legal action in Argentina, operating since 1979

You can read an in-depth write-up of their research process here. You can donate to their recommended orgs here.

On the antifa movement more generally

If you want to know more about antifa, I recommend this lecture given by the author of the recent book 'Antifa: Portrait einer linksradikalen Bewegung' (unfortunately, it hasn't been translated). It's apparently one of the more rigorous academic histories of the movement. 

At 1:14:22 I asked him about the effectiveness of the movement and its different tactics, and the evidence for claims of effectiveness. 

His answer had two components. First, his general thesis on the effectiveness of the movement is that, through its actions, it has sensitised German society and the German state to the growing fascist movement. Second, he highlighted the success of recent efforts within the German antifa movement to gather intel and build archives concerning fascist individuals and groups. This has been used to prevent right-wing attacks or to very rapidly inform the authorities when apparent 'lone wolf' attackers have actually had fascist backgrounds. Apparently, they often do a better job of this than the German security services, and now receive funding from the Berlin city government. If you're an EA and are looking to do something about fascism beyond what Power for Democracies recommends, this feels like a good fit! 

Unfortunately, he didn't provide much detail on the evidence for the claims of effectiveness.

Thanks for posting this, it made me think.

Here are my thoughts:
• Authoritarianism is a real risk. I think this has been clear for a while, but I've updated upwards multiple times.
• I agree that it's possible to analyse the issue of fascism in a non-partisan way.  Unfortunately, most 'anti-fascist' work focuses on only one side of the political spectrum. I think this is a mistake: 'anti-fascist activists' are often just as fascist as anyone on the right, and it's quite plausible that if the right loses the next election, then instead of aiming to restore frayed norms and institutions, folks on the left decide that the only option is to fight fire with fire. This is a threat in and of itself, but it would also increase the ability of the right to lean more in this direction if they win power again.
• The mutual aid suggestion comes off as really strange to me. The argument for mutual aid as a way of building the EA community feels much stronger than the argument for engaging in mutual aid as a way to fight fascism. This is especially true if you believe fascism is an urgent threat here and now rather than a possibility that we need to prepare for in case it happens at some distant, undefined point in the future.
• "Mass deportation" really feels like a distinct question from fascism - it's not really fascism if the government is just enforcing standard immigration laws and there are proper procedural safeguards; on the other hand, even small-scale deportations can be legitimately linked to fascism if they're being leveraged cynically to chill speech. The raw numbers aren't the active ingredient or determining factor.

I'm curious to understand better where people disagree with this comment.

I believe that the assertion that "anti-fascists" are "often just as fascist" as the right and will engage in the same behaviour if given power is factually untrue. While there are loud groups of authoritarian communists (tankies) on the left which could be arguably described as fascist, these are a fringe group that are unlikely to get anywhere near the levers of power. Anti-fascists are a wide coalition consisting of a wide array of political views. 

I do not think that if the right loses the next election, the left would be equally fascist. The current administration flooded Minneapolis with poorly trained thugs who made it unsafe to go outside as a non-white person. I do not believe that a President AOC or whoever will take actions of equivalent damage.

Thanks, that's useful. I mostly agree with you, and mistakenly read the second bullet point as saying "work that opposes fascism should come from all sides of the political spectrum", which is something I agree with. I think the OP somewhat assumed that opposing fascism will look like 'work with your local anti-fascist network', but I expect much of it could look more like 'militarising Europe' (something the political left would typically oppose).

A lot of these claims are subtly different from the ones I made (not claiming that you were necessarily asserting that I agreed with them).

Engage in the same behaviour if given power is factually untrue

I wouldn't endorse this statement either. Left and right fascism express themselves differently. So I definitely wouldn't predict the 'same behaviour'.

Anti-fascists are a wide coalition consisting of a wide array of political views

There is a wide coalition against fascism, but they don't call themselves antifa. It's a much narrower group that adopts that label.

I do not think that if the right loses the next election, that the left would be equally fascist

I don't expect that either. But they may still 'lock-in' some of the backsliding which would become the new standard from which behaviour is measured, enabling continued escalation from there.

The claim I made was "'anti-fascist activists' are often just as fascist as anyone on the right" and I believe that's true. The impact of an election depends on the choices of a much broader set of people.

The current administration flooded Minneapolis with poorly trained thugs who made it unsafe to go outside as a non-white person. I do not believe that a President AOC or whoever will take actions of equivalent damage.

The damage that an action causes in the long-term has relatively little correlation with the damage that an action causes in the short-term. I'm not claiming 'equivalently damaging short-term effects'.

Excellent post! 

I appreciated the steelmanning, the clear action item, and the discussion of how EA's strengths could overlap with the need in this area. A lot of posts asking "why doesn't EA do X?" conclude that EA must be defective in some way for not making X a primary cause area; this post shows a refreshing understanding of community dynamics and makes a modest ask (making something a "normal problem" — good phrasing).

I was glad to hear about the Democracy Unconference, and I hope that people/groups with fewer legal concerns than CEA will continue to work on strategy in this space. It's tough for a movement built around elite persuasion and donations to work with nigh-unpersuadable elites in an environment where the wrong donation could disqualify you from a civil service job. But EA is capable of being flexible; we've found many different ways to create change over the last 15 years. Just on priors, I think there's room for creative action by the smart, motivated people in our tent.

Have you looked into Power for Democracies? https://powerfordemocracies.org - EA does in fact do anti-fascist interventions evaluation

Let's back up and ask a more basic question: what, exactly, do you mean by "fascism"? What is this thing that you are "anti"? As used in modern discourse, it seems to function more as a slur than as a word with actual semantic content, and the people I have encountered in the past who self-identify as "anti-fascist" have not come off as serious people to me. So don't use that word as if we all have a shared understanding of what actual thing it refers to. I don't, and unless you define it, I question whether you do either.

I saw that this comment was downvoted. I think this is a mistake: many people will have similar questions. Indeed, I saw multiple indications in the post that the author likely defines 'fascism' differently than many people in EA do.

Stronger: I think it's reasonable to wonder whether the author's definition is somewhat 'fuzzy', even though River's phrasing was a bit too direct for my taste.

I really appreciated this post — it made me stop and think about something I hadn’t spent much time on before. The question of what would need to be true for me to act made me pause because  I don’t have a clear answer. Other than voting (which obviously doesn’t change the situation elsewhere), I’m not sure what tangible, effective actions are available, or how to recognize early warning signs of something with such outsized opportunity for negative outcomes.

In my work in communications, I often think about how much impact conversations themselves can have — especially when they happen across divides. People are more likely to reconsider their views when they feel genuinely listened to, not argued with. So for me, part of “taking action” might mean practicing and teaching that kind of listening. Creating more understanding rather than more polarization.
It sounds small next to the scale of the problem, but open, honest dialogue feels like an early form of prevention — something that can keep space open for cooperation and empathy. When I (or we) talk about it at scale, that’s at least something I/we can do within the scope of my/our abilities. And that’s what I like about this forum: it gives us a place to discuss what might feel too charged or intimidating to bring up elsewhere.

I don’t have a grand strategy to add, but I did want to share this small perspective: that maybe the act of conversation itself is a worthwhile contribution.

The First Amendment to the US Constitution takes the correct approach to free speech (putting it in scare quotes doesn't make it less important), and others like Tim Urban of Wait But Why have explained why the Karl Popper argument as deployed these days is a misrepresentation of what he actually believed.

Thanks for writing. 

I'm having a hard time understanding the mutual aid example, but maybe that stems from my relative lack of knowledge in that area. Wikipedia tells me that "[m]utual aid groups are distinct in their drive to flatten the hierarchy, searching for collective consensus decision-making across participating people rather than placing leadership within a closed executive team." But I expect that one of the effects of such strong decentralized/diffused governance and structure is that it would be very hard for a small group of people to have great leverage. Stated differently, I sense some tension between a focus on "scalable" and "repeatable" operations and being controlled by / responsive to the local community. I'm not suggesting that there is no value there, but I would associate scalable, repeatable operations more with top-down governance.

Against that, I think we have to weigh that the appearance and/or reality of increasing politicization would make it harder for other EA cause areas to achieve their objectives.

ooh i really like this discussion. one thing i'd add - i've always thought resilient systems-building and good governance work that can protect against many kinds of risks has been super neglected by EA (and also by the world). i'm fairly sure at this point that it's a result of (in addition to what's already been named):

  • EA's hard focus on marginal impact - systems-building work requires many things to shift before you see impact, so it's hard to justify one marginal additional person or dollar working on it
    • corollary to this is that EA is extremely focused on shifting behaviors and opinions of elites, rather than building mass movements or reaching many people or changing cultural norms or creating better structures. obviously this isn't true of all EA work but relative to e.g. anti-fascist and mutual aid movements it's a huge difference in theory of change
  • pure vibes, which i think is obvious in some of the other comments too

Today, the single most impactful thing anyone could do - to reduce existential risk, for animals, for the climate, to help humans in need, whatever - would be to stop Donald Trump's administration. If you don't trust me, pick any one topic and look at his actual actions - from blocking AI governance to repealing the most basic climate-policies to destroying the (albeit flawed) global system, to promoting arms build-ups to supporting thugs like Netanyahu and Putin. 

It's not about Trump personally, but about the policies he's enabling, and about the transition he's enabling in what was once a counterweight to the "bad guys." 

The world has adjusted over the decades to the idea of Russia being Russia and China being China and even to a few European countries having radical leaders. But the US has always been a powerful counterweight. If the US goes down this path, who will save us? 

I hear the argument that opposing Trump is not exactly neglected. The day he dies will be a global celebration on a par with the end of WW2 - but unfortunately, like that celebration, it will be marred by the realisation of the massive damage that has been caused. 

So an EA has to ask themselves not whether opposing Trump's (or Orban's, or Putin's or ...) fascism is the biggest problem, but rather, whether it is the most impactful thing they as an individual could do. 

Honestly, if you are in the privileged position to impact this in some way, I believe that it probably is. Anyone in the AI bubble who can maybe influence those cynical AI companies who give him money, focus on stopping that, the most impactful thing you can do for the climate or for AI Governance is to ensure adults take power in November. 

But if you're, like me, far from the US, not in a country where fascism is a major force, and without any particularly strong means to impact the outcome, it's probably not the right thing to be working on, not because it's not important, but because it's not neglected. So we can keep working on other good stuff, and somehow reduce the harm being done to the world over these 4 infamous years. 

 

Such a thought-provoking post. Each of us has to confront this at some soul level. Thank you Alex for putting this idea down.
