I've seen a lot of discussion in the EA community recently about the divide between people who think EA should focus on high-level philosophical arguments and those who think EA should work on making our core insights more appealing to the public at large.
Over the last year the topic has become increasingly salient; the big shifts from my perspective were Scott Alexander's Open EA Global post, the FTX crash, and the Wytham Abbey purchase. I quite frequently see those in the first camp, the people who don't want to prioritize social capital, use the argument that epistemics in EA have declined.
For those who haven't studied philosophy, epistemics broadly refers to the study of knowledge itself: how we gain it, how we sort good reasoning from bad, and so on. As someone who is admittedly on the side of growing EA's social capital, when I see the argument that the community's epistemics have declined, I notice that it tends to assume a number of things, namely:
- It is a simple matter to judge who has high quality epistemics
- Those with high quality epistemics usually agree on similar things
- It's a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics
In the spirit of changing EA Forum discussion norms, I'll go ahead and say directly that my immediate reaction to this argument is something like: "You and the people who disagree with me are less intelligent than I am, and the people who agree with me are smarter than you as well." In other words, it feels like whoever makes this argument is indirectly saying my epistemics are inferior to theirs.
This is especially true when someone brings up the "declining epistemics" argument to defend EA orgs from criticism, like in this comment. For instance, the author writes:
"The discussion often almost completely misses the direct, object-level, even if just at back-of-the-envelope estimate way."
I'd argue that by bemoaning the intellectual state of EA, one risks focusing entirely on the object level, when in a real utilitarian calculus the things outside the object level can matter much more than the object level itself. The Wytham Abbey purchase is a great example.
This whole split may also point to the divergence between rationalists and newer effective altruists.
My reaction is admittedly not extremely rational or well thought out, and it doesn't have high-quality epistemics backing it. But it's important to point out our emotional reactions to the arguments we make, especially if we ever intend to convince the public of Effective Altruism's usefulness.
I don't have any great solutions to this debate, but I'd like to see less talk of epistemic decline on the EA Forum, or at least have people state the claim more plainly rather than dressing it up in fancy language. If you think that less intelligent or thoughtful people are coming into the EA movement, I'd argue you should say so directly, to help foster discussion of the actual topic.
Ultimately I agree that epistemics are important to discuss, and that the overall epistemics of discussion in EA-related spaces have declined. However, I think the way this topic is being discussed and leveraged in arguments is toxic to fostering trust in our community, and it assumes that high-quality epistemics are a good in themselves.
Hey Wil,
as someone who is likely in the "declining epistemics would be bad" camp, I will try to write this reply while mindfully attempting to be better at epistemics than I usually am.
Let's start with some points where you hit on something true:
I agree that talk about bad epistemics can come across as being unwelcoming to newcomers and considering them stupid. Coupled with the elitist vibe many people get from EA, this is not great.
I also agree that many people will read the position you describe as implying "I am smarter than you", and people making that argument should be mindful of this, and think about how to avoid giving this impression.
On one of the implied assumptions you cite:
I think it is indeed a danger that "quality epistemics" is sometimes used as a shortcut to defend things mindlessly. In an EA context, I have often disagreed with arguments that defer strongly to experts in EA orgs; these arguments seem to neglect that such experts might have systematic biases precisely because they work in those orgs. Personally, I probably sometimes use "bad epistemics" as a cached thought internally when encountering a position for which I have mostly seen arguments I found unconvincing in the past.
Now for the parts I disagree with:
I scrolled through some of the disagreeing comments on Making Effective Altruism Enormous and tried to examine whether any of them carry the implicit assumptions you state.
On the assumption that those with high quality epistemics usually agree on similar things: I don't think the strong version of this statement ("usually") holds true for most people in the epistemics camp. However, some people, including me, would probably agree that e.g. disagreeing with "it is morally better to prioritize expected impact over warm feelings" is usually not good epistemics; i.e. there are a few core tenets on which "those with high quality epistemics" usually agree.
On the assumption that it is "a given" that catering to a smaller group of people with higher quality epistemics will have more impact: while many people probably think it is likely, I do not think the majority consider it "a given". I could not find a comment in the above discussion that argues or assumes it is obvious.
What I personally believe:
My vague position is that one of the core advantages of the EA community is that it cares about true arguments, and consequently about earnest and open-minded reasoning. Insofar as I would complain about bad epistemics, it is definitely not that people are dumb. Rather, I think there is a danger that in some discussions people engage a bit more in what seems like motivated reasoning than the EA average, and seem less interested in understanding other people's positions and changing their minds. These are differences of degree; I do not mean to imply that there is one camp that reasons perfectly and impartially and another that does not.
Without fleshing out my opinion too much (the goal of this comment is not to defend my position), I usually point to the thought experiment "What would have happened if Eliezer Yudkowsky had written his AI safety posts on the machine learning subreddit?" to illustrate how important having an open-minded and curious community can be.
For example, in your post you posit three implicit assumptions, and later link to a single comment as justification. To be fair, that comment does read as a little dismissive, but I don't think it actually carries the three assumptions you outline, and it should not be used to represent a whole "camp", especially since this debate was very heated on both sides. It is not really apparent that you have tried to charitably interpret the position you disagree with. And while it's good that you clearly state that something is an emotional reaction, I think it would also be good if that reaction were accompanied by a better attempt to understand the other side.
You make some great points here. I'll admit my arguments weren't as charitable as they should've been, and were motivated more by heat than light.
I hope to find time to explore this in more detail and with more charity!
Your point about genuine truth seeking is certainly something I love about EA, and don’t want to see go away. It’s definitely a risk if we can’t figure out how to screen for that sort of thing.
Do you have any recommendations for screening based on epistemics?