Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, we'll eventually end up in a situation where our decisions are driven by what's popular rather than what's effective.
On the other side, you have the people who are worried that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right one here. Apologies, I should have spent more time figuring out which framing would have been most conducive to moving the discussion forward. I'd like to suggest that someone else post a similar question with a framing they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our ability to act". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
Opinions that are stupid are going to be clearly stupid.
So the thing is, racism is bad. Really bad. It caused Hitler. It caused slavery. It caused imperialism. Or at least it was closely connected.
The Holocaust and the civil rights movement convinced us all that it is really, really bad.
Now the other thing is that because racism is bad, our society collectively decided to taboo the arguments that racists make and use, and to call them horrible.
The next point I want to make is this: as far as I know, the science about race and intelligence is entirely about trying to figure out causation from purely observational studies when you have only medium-sized effects.
We know from human history and animal models that both genetic variation and cultural forces are powerful enough to create the observed differences.
So we try to figure out which one it is using these observational studies on a medium-sized effect (i.e. way smaller than smoking and lung cancer, or stomach sleeping and SIDS). Both causal forces are, in principle, capable of producing the observed outcomes.
You can't do it. Our powers of causal inference are insufficient. It doesn't work.
What you are left with is your prior about evolution, about culture, and about all sorts of other things. But there is no proof in either direction.
So this is the epistemic situation.
But because racism is bad, society, and to a lesser extent the scientific community, has decided to say that attributing any major causal power to biology in this particular case is disproven pseudoscience.
Some people are good at noticing when the authorities around them and their social community and the people on their side are making bad arguments. These people are valuable. They notice important things. They point out when the emperor has no clothes. And they literally built the EA movement.
However, this ability to notice when someone is making a bad argument doesn't turn off just because the argument is being made for a good reason.
This is why people who are good at thinking precisely will notice that society claims, with way, way more confidence than the presented evidence justifies, that there is no genetic basis for racial differences in behavior. And because racism is a super important topic in our society, most people who think a lot will think hard about it at some point in their life.
In other words, it is very hard to have a large community of people who are willing to seriously consider that they personally are wrong about something important, and that they can improve, without having a bunch of people who also at some point in their lives at least considered very hard whether particular racist beliefs are actually true.
This is also not an issue with lizard people or flat earthers, since in the latter case the evidence for the socially endorsed view really is that good, and in the former case (so far as I have heard; I have in no way personally looked into the question of lizard people running the world, and I don't think anyone I strongly trust has either, so I should be cautious about being confident in its stupidity) the evidence for the conspiracy theory really is that bad.
This is why you'll find lots of people in your social circles who can be accused of having racist thoughts, and not very many who can be accused of having flat earth thoughts.
Also, if a flat earther wants to hang out at an EA meeting, I think they should be welcomed.