I don't really know.
But that's a good point: Chesterton's fence is a pretty good heuristic.
Probably some people were being a bit pushy advertising their services?
The framing of your question suggests EA's role is to prescribe actions
Was I presuming this? I didn't think I was. I was just talking about how it is hard to simultaneously meet the needs of folks with very different worldviews.
In particular, I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis. Likewise, I think that many people who wish to do good in the world reject the TAI hypothesis for bad reasons that they would regret on reflection. Where do people go to correct these errors and build better models of which actions are best to take all things considered?
You can definitely run a session on this. The challenge is where do you go from there? If you take the conversation to more concrete issues you risk losing half your audience. I'm not claiming this is impossible, just that it's tricky.
I think one of the biggest gaps in AI safety is having a clear understanding of why the majority of educated people reject the TAI hypothesis
I'm curious what your explanation would be. Mine is that the media landscape is filled with hype; there are all these philosophical arguments you can make that are hard to evaluate; even if you know that some portion of predicted crises will come true, most people aren't confident they could predict which ones; and even if you could, it would take a massive amount of time, people's lives are pretty busy, and what would they do with that knowledge anyway?
The last EAG I attended had rules restricting handing out materials.
Having just finished watching this Dwarkesh video which explained how big a deal pamphlets were when they were first invented, I'd actually go the other way and encourage it instead.
Here's my reasoning: Talks have been de-emphasised in favour of one-on-ones at EAGs. There's a lot to like about one-on-ones, but one disadvantage is that we've removed a key avenue for ideas to gain critical mass and enter the water supply. Pamphlets could fill this gap. After all, if you've seen a good pamphlet, it'd be quite natural to pull it out during a conversation.
Additionally, when you have dozens of one-on-ones, things often blur together. You can be disciplined and keep notes, but that's hard, and I often find my phone is low on battery. If people handed out pamphlets containing their proposals or takes, it'd be easier to review them afterwards; conversations would be much more likely to have lasting effects. Two further benefits: it might be more efficient to exchange pamphlets at the start of a one-on-one, and producing a pamphlet would push people to figure out how to communicate their ideas clearly.
Have you thought about the possibility that EA may have resonated in a particular social context that no longer exists?
But a community that took twenty years to develop its particular structure of norms and mutual knowledge cannot be regrown in twenty years, because the conditions that shaped it no longer exist. The people are older, the context has changed, and the specific convergence of circumstances that brought those particular individuals together in that particular configuration at that particular time is gone. Communities are path-dependent in the strongest possible sense: their current state is a function of their entire history, and you can't rerun the history.
The main challenge I see at the moment is that half the potential audience sees AI as clearly the biggest thing going on, while the other half sees it as clearly overhyped. It's quite hard to construct a program or run events that will really hit it out of the park for both sides at once.
I would be keen to hear if you think you have any solutions to this bifurcation.
A lot of these claims are subtly different from the ones I made (not claiming that you were necessarily asserting that I agreed with them).
Engage in the same behaviour if given power is factually untrue
I wouldn't endorse this statement either. Left and right fascism express themselves differently. So I definitely wouldn't predict the 'same behaviour'.
Anti-fascists are a wide coalition consisting of a wide array of political views
There is a wide coalition against fascism, but they don't call themselves antifa. It's a much narrower group that adopts that label.
I do not think that if the right loses the next election, that the left would be equally fascist
I don't expect that either. But they may still 'lock in' some of the backsliding, which would become the new baseline from which behaviour is measured, enabling continued escalation from there.
The claim I made was "'anti-fascist activists' are often just as fascist as anyone on the right" and I believe that's true. The impact of an election depends on the choices of a much broader set of people.
The current administration flooded Minneapolis with poorly trained thugs who made it unsafe to go outside as a non-white person. I do not believe that a President AOC or whoever will take actions of equivalent damage.
The damage that an action causes in the long-term has relatively little correlation with the damage that an action causes in the short-term. I'm not claiming 'equivalently damaging short-term effects'.
I saw that this comment was downvoted before. I think that's a mistake: many people will have similar questions. Indeed, I saw multiple indications in the post that the author likely defines 'fascism' differently than many people in EA.
Stronger: I think it's reasonable to wonder whether the author's definition is somewhat 'fuzzy', even though River's phrasing was a bit too direct for my taste.
Thanks for posting this, it made me think.
Here are my thoughts:
• Authoritarianism is a real risk. I think this has been clear for a while, but I've updated upwards multiple times.
• I agree that it's possible to analyse the issue of fascism in a non-partisan way. Unfortunately, most 'anti-fascist' work focuses on only one side of the political spectrum. I think this is a mistake: 'anti-fascist activists' are often just as fascist as anyone on the right, and it's quite plausible that if the right loses the next election, then instead of aiming to restore frayed norms and institutions, folks on the left will decide that the only option is to fight fire with fire. This is a threat in and of itself, but it would also increase the ability of the right to lean further in this direction if they win power again.
• The mutual aid suggestion comes off as really strange to me. The argument for mutual aid as a way of building the EA community feels much stronger than the argument for engaging in mutual aid as a way to fight fascism. This is especially true if you believe fascism is an urgent threat here and now rather than a possibility we need to prepare for in case it happens at some distant, undefined point in the future.
• "Mass deportation" really feels like a distinct question from fascism. It's not really fascism if the government is just enforcing standard immigration laws and there are proper procedural safeguards; on the other hand, even small-scale deportations can be legitimately linked to fascism if they're being leveraged cynically to chill speech. The raw numbers aren't the active ingredient or determining factor.
I expect people to update somewhat. My split was more about where people end up falling after their initial exposure to arguments on both sides.
In the past, AI didn't feel so pressing to the AI crowd, so they had more space to explore, and discussions of animals and global poverty didn't feel like dead weight.