Currently doing local AI safety movement building in Australia and NZ.
As soon as you start charging a membership fee, people will become suspicious that you're trying to sign them up because you want their cash, rather than out of pure dedication to charity. It'll also cost you members, because they'll face a choice between paying up and not feeling fully part of the community. This effect will be worse in some countries than others.
If you don't get enough members signing up, the organisation becomes vulnerable to capture by someone who signs up a bunch of straw members who never show up to meetings except to vote once. You can reduce this risk by adding rules around meeting attendance, but that makes things more formal and may cost you more members, especially if people are used to groups being much more casual.
Counterpoint: This seems to work all right for university clubs.
EA used to lean more into moral arguments and criticism, but most folks, even those who were part of the movement back in the day, seem to have moved away from this.
It's hard to say exactly why, but being confrontational is unpleasant and it's not clear that it was actually more effective. OGTutzauer makes a good point that a movement trying to raise donations has more incentive to leverage guilt, whilst a movement trying to shift people's careers has more incentive to focus on being appealing to join.
It might also be partly due to the influence of rationalist cultural norms, whilst Moral Ambition seems to have been influenced by both EA and progressivism. (In my experience, the animal welfare folks, who tend to lean more into progressivism, are the most likely to embrace confrontation.)
Sometimes the dollar signs can blind someone and cause them not to consider obvious alternatives. They may sincerely feel that they made the decision for reasons other than the money, but the money nonetheless caused the cognitive distortion that ultimately led to that decision.
I'm not claiming that this happened here. I don't have any way of really knowing. But it's certainly suspicious. And I don't think anything is gained by pretending that it's not.
Interesting post.
I think it did a good job of explaining why the metacrisis might be relevant from an EA standpoint. I made a similar (but different!) argument - that Less Wrong should be paying attention to the sensemaking space - back in 2021[1], and it may still be helpful for readers who want to get a better sense of the scene[2].
Unfortunately, I'm with Amina. Short AI timelines are looking increasingly likely and culture change tends to take a long time, so the argument for prioritising this isn't looking as promising as it previously did[3]. It's a shame that these conversations didn't start happening a decade or two earlier than they did. They could have been great for preparing the (intellectual) soil and could have provided motivation for working on generally useful infrastructure back when it still made sense to be doing that.
Another worry I have about the metacrisis framing is that, by default, it seems to imply that we should think of all these threats as being on a par, when that increasingly doesn't seem to be the case.
I felt this response leaned a bit populist. I think it's pretty clear that conversation in the sensemaking space is much less precise than in EA/rationality on average. The flip side of the coin is that the sensemaking space is open to ideas that would be less likely to resonate in EA/rationality. Whether this trade-off is worth it comes down to factors like how valuable these ideas tend to be, the relative cost of incorrectly adopting confused beliefs vs. incorrectly rejecting fruitful ones, and the purpose of the movement.
FWIW, I was a lot more positive on the sensemaking space back in the day; now I'm a lot more uncertain. I think there are a lot of fruitful ideas there, but I'm not convinced that the scene has the tools it needs to identify which ideas are or aren't fruitful.
Though certainly not as well as you are doing here!
Or at least as it was back in 2021; I haven't really followed it in a while.
Your counter-arguments make reasonable points, but they aren't strong enough (in my opinion) to outweigh the arguments you've put them up against.