This is obvious in one way, but I think it gets forgotten in a lot of the details of these arguments: people do not actually care very much about whether Manifest invited Hanania; they care about the broader trend.
And what I mean by that, specifically, is that the group arguing that people like Hanania should not be invited to events like Manifest is scared of things like:
- They care about whether minorities are being excluded and made unwelcome in EA spaces.
- They care about an identity they view as very important being connected to racists.
- More broadly, they are ultimately scared of the world returning to the sort of racism that led to the Holocaust and to segregation, and they are scared that if they do not act now to stop this, they will be part of maintaining the current system of discrimination and racial injustice.
- They feel like they don't belong in a place where people like Hanania are accepted.
I apologize if I did not characterize these fears correctly; I am part of the other group, and my model of what motivates the people I disagree with is almost always going to be worse than my model of what motivates me. I am scared of things like:
- Making it policy that people like Hanania should never be invited to speak pushes society in a direction that leads to things like Maoist struggle sessions, McCarthyism (I think we are currently at the level of badness that McCarthyism represented), and, at the actual extreme, the thought police from 1984.
- The norms cancel culture embraces functionally allow powerful groups to silence those they dislike, and this remains the case no matter what the details of the arguments for particular positions are.
- Assuming a priori that we know a certain person's policy arguments or causal models are false leads us to have stupider opinions on average.
- I don't belong in a place where adults are not allowed to read whichever arguments they are interested in about controversial topics and then form their own opinions, even if those opinions disagree with social orthodoxy.
The biggest point I want to make is that none of these things are arguments against each other.
Cancel culture norms might be creating a tool for power and, at the same time, making minorities more welcome.
Those same norms might push society toward a McCarthyite or Maoist place where people are punished for thinking about the wrong questions and having the wrong friends, and at the same time they might prevent backsliding on racial justice and lead to improvements in equality between racial groups.
Perhaps McCarthy actually made the US meaningfully safer from communist takeover. Most of the arguments that McCarthy was terrible that I recall from university seemed to simply take as given that there was no real risk of a communist takeover; but even if the odds of that were low, making those odds even lower may have been worth doing things that had costs elsewhere (unless, of course, you think that a communist revolution would have been a good thing).
If we are facing a situation where the policy favored by side A imposes costs that side B is very conscious of, and vice versa, then instead of arguing with each other we could try to build ideas that address each other's core concerns; we might well come up with proposals that let each side get more of what it wants at a smaller cost to what the other side wants.
The second point I'd like to make is that arguing passionately, with better and better thought experiments that try to trigger the intuitions underlying your position, while completely ignoring the things that actually led the people you are arguing with to the positions they hold, is unlikely to be productive.
Engage with their actual fears if you want to convince them, even though it is very hard to think yourself into a mindset that takes [ridiculous thing your conversational opponent is worried about] seriously.
Part 1
"I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to "truth-seeking," when the debate is actually: what kind of people and worldviews do we give status to and what effects does that have on related communities."
This is precisely the sort of attitude that I see as fundamentally opposed to my own view: that truth-seeking actually happens, and that we should be giving status to the people and worldviews that are better at getting us closer to the truth, according to our best judgement.
It is also, I think, a very clear example of what I was talking about in my original post, where someone arguing for one side ignores the fears and actual arguments of the other side when expressing their position. You put 'truth seeking' in quotation marks because it has nothing to do with what you claim to care about. You care about status shifts among communities, and then, by the way you wrote this sentence, you are trying to say that I don't actually care about 'truth seeking' -- not arguing that I don't, because that would be obviously ridiculous, but insinuating that I actually want to make racists higher status and more acceptable.
Obviously this does nothing to convince me, whatever impact it may have on the general audience. Based on the four agree votes and three disagree votes I see right now, that impact seems to be getting people to keep thinking whatever they already thought about the issue.
Part 2
In trying to think through how I'd reply to your underlying fear, I found that I am not actually sure what the bad thing is that you think will happen if an open Nazi is platformed by an EA-adjacent organization or venue.
To give context to my confusion, I was imagining a thought experiment in which the main platforms for sharing information about AI safety topics at a professional level are supported by an AI org. Further, in this thought experiment there is a brilliant AI safety researcher who also happens to be an open Nazi -- in fact, he went into alignment research because he thought that untrammelled AI capabilities work was being driven by Jewish scientists, and he wanted to stop them from killing everyone. If this man comes up with an important alignment advance, one that will meaningfully reduce the odds of human extinction, it seems to me transparently obvious that his alignment research should be platformed by EA-adjacent organizations.
I'm confident that you will have something to say about why this is a bad thought experiment, or about why you disagree with it, but I'm not quite sure what you would say while also taking the idea seriously.
Important researchers who make useful advances in one area turning out to also believe stupid and terrible things in other fields is something that has happened far too often for you to say that the possibility should be ignored.
Perhaps the policy I'm advocating -- simply looking at the value of the paper in its field and ignoring everything else -- would draw attacks from outside observers on the organization doing this, imposing costs too high to justify publishing the man with horrible beliefs, since we can't be certain ahead of time that his advance actually is important.
But I'd say that in this case the outside observers are acting to damage the future of mankind, and should be viewed as enemies, not as reasonable people.
Of course, their own policy probably also makes sense in act utilitarian terms.
So maybe you are just saying that a blanket policy of this sort, applied without ever looking at the specifics of the case, is the best act utilitarian policy, and that this should not be understood as claiming there are no cases where the heuristic fails catastrophically.
But I feel as though the discussion I just engaged in is far too bloodless to capture what you actually think is bad about publishing a scientist who made an advance that will make the world better if it is published, and who is also an open Nazi.
Anyway, the general possibility that open Nazis might be right about something very important and relevant to us is sufficient to explain why I would not endorse a blanket ban of the sort you are describing.
(On the dog walk, I realized what I'd forgotten: the obvious answer is that doing this would raise the status of Nazis, which would actually be bad.)