Liam Robins

133 karma

Comments (2)

I have a bunch of thoughts, but I'll give just one: in order for "anti-fascism" to make sense as a guiding principle, we would need some kind of agreement about what fascism is and what alternative we're proposing. Without a solid definition of what we're aiming for or why, we risk becoming ineffective and alienating a ton of people. Without putting too fine a point on it, organizations that call themselves antifa / anti-fascist generally attract a lot of far-leftists, communists, and anarchists, which scares off most people. There also tends to be a lot of scope creep (e.g. saying that all cops are fascist bastards, or labelling center-right politicians like Ronald Reagan as fascists).

That's why I think it's generally better to guide yourself by what you support rather than what you oppose. E.g., if you're worried about rule of law, you should directly advocate for rule of law. If you're worried about populist movements causing worse governance by taking power away from knowledgeable experts, you should directly advocate for more meritocracy in government staffing decisions.

I realize that those are both wonky things to focus on, but that's kind of the point. EA's comparative advantage is that we're a small group of intelligent, committed people. We can accomplish a lot of things in the boring world of procedures, outside the limelight. When it comes to anti-fascist street actions / mutual aid, even if those tactics work (which I'm skeptical they do), EAs simply don't have the numbers for it.

Consider adopting the term o-risk.

William MacAskill has recently been writing a bunch about how, if you're a longtermist, it's not enough merely to avoid catastrophic outcomes. Even if we get a decent long-term future, it may still fall far short of the best future we could have achieved. This outcome, a merely okay future when we could have had a great one, would still be quite tragic.

Which got me thinking: EAs already have terms like x-risk (existential risks, or things that could cause human extinction) and s-risk (suffering risks, or things that could cause widespread future suffering of conscious beings). But as far as I'm aware, there isn't any term for the specific kind of risk MacAskill is worried about here. So I propose adding the term o-risk: the risk that the long-term future is merely okay, but much worse than it optimally could have been.