Recent events seem to have revealed a central divide within Effective Altruism.
On one side, you have the people[1] who want EA to prioritise epistemics, on the basis that if we let this slip, our decisions will eventually be driven by what's popular rather than what's effective.
On the other side, you have the people who worry that if we are unwilling to trade off[2] epistemics at all, we'll simply sideline ourselves and lose the ability to have any significant impact.
- How should we navigate this divide?
- Do you disagree with this framing? For example, do you think that the core divide is something else?
- How should cause area play into this divide? For example, it appears to me that those who prioritise AI Safety tend to fall into the first camp more often, while those who prioritise global poverty tend to fall into the second camp. Is this a natural consequence of these prioritisation decisions, or is it a mistake?
Update: A lot of people disliked the framing, which suggests that I haven't found the right framing here. Apologies, I should have spent more time figuring out which framing would have been most conducive to moving the discussion forward. I'd suggest that someone else post a similar question with a framing that they think is better (although it might be a good idea to wait a few days or even a week).
In terms of my current thoughts on framing, I wish I had more explicitly worded this as "saving us from losing our ability to navigate" vs. "saving us from losing our influence". After reading the comments, I'm tempted to add a third possible highest priority: "preventing us from directly causing harm".
I think that some sort of general guide on “How to think about the issue of optics when so much of your philosophy/worldview is based on ignoring optics for the sake of epistemics/transparency (including embedded is-ought fallacies about how social systems ought to work), and your actions have externalities that affect the community” might be nice, if only so people don’t have to constantly re-explain/rehash this.
But generally, this is one of those things where it becomes apparent in hindsight that it would have been better to hash out these issues before the fire.
It’s too bad that Scout Mindset not only fails to address this issue effectively, but also seems to push people further towards the is-ought fallacy of “optics shouldn’t matter that much” or “you can’t have good epistemics without full transparency/explicitness” (in my view: https://forum.effectivealtruism.org/posts/HDAXztEbjJsyHLKP7/outline-of-galef-s-scout-mindset?commentId=7aQka7YXrhp6GjBCw).
And relatedly, I think that such concerns about long-term epistemic damage are overblown. I appreciate that allowing epistemics to be constantly trampled in the name of optics would be bad, but I don’t think that’s a fair characterisation of what is happening. And I suspect that in the short term optics dominate due to how they are so driven by...