I've seen a lot of discussion in the EA community recently about the divide between people who think EA should focus on high-level philosophical arguments and ideas, and those who think EA should work on making our core insights more appealing to the public at large.
Over the last year the topic has become increasingly salient; the big shifts from my perspective were Scott Alexander's Open EA Global post, the FTX crash, and the Wytham Abbey purchase. I quite frequently see those in the first camp, the people who don't want to prioritize social capital, use the argument that epistemics in EA have declined.
For those who haven't studied philosophy: epistemics broadly refers to knowledge itself, or the study of how we gain knowledge, sort good beliefs from bad ones, and so on. As someone who is admittedly on the side of growing EA's social capital, I find that the argument that the community's epistemics have declined tends to assume a number of things, namely:
- It is a simple matter to judge who has high-quality epistemics
- Those with high-quality epistemics usually agree on similar things
- It's a given that the path of catering to a smaller group of people with higher-quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower-quality epistemics
In the spirit of changing EA Forum discussion norms, I'll go ahead and say directly that my immediate reaction to this argument is something like: "You and the people who disagree with me are less intelligent than I am, and the people who agree with me are smarter than you as well." In other words, it feels like whoever makes this argument is indirectly saying my epistemics are inferior to theirs.
This is especially true when someone brings up the "declining epistemics" argument to defend EA orgs from criticism, like in this comment. For instance, the author writes:
"The discussion often almost completely misses the direct, object-level, even if just at back-of-the-envelope estimate way."
I'd argue that by bemoaning the intellectual state of EA, one risks focusing entirely on the object level when, in a real utilitarian calculus, things outside the object level can matter much more than the object level itself. The Wytham Abbey purchase is a great example.
This whole split may also point to the divergence between rationalists and newer effective altruists.
My reaction is admittedly not very rational or well thought out, and doesn't have high-quality epistemics backing it. But it's important to point out our emotional reactions to the arguments we make, especially if we ever intend to convince the public of Effective Altruism's usefulness.
I don't have any great solutions to this debate, but I'd like to see less talk of epistemic decline on the EA Forum, or at least have people state it more plainly rather than dressing up their ideas in fancy language. If you think that less intelligent or thoughtful people are coming into the EA movement, I'd argue you should say so directly, to help foster discussion of the actual topic.
Ultimately I agree that epistemics are important to discuss, and that the overall epistemics of discussion in EA-related spaces have declined. However, I think the way this topic is being discussed and leveraged in arguments is toxic to fostering trust in our community, and assumes that high-quality epistemics are a good in themselves.
In the spirit of the communication style you advocate for... my immediate emotional reaction to this is "Eternal September has arrived".
I dislike my comment being summarized as "brings up the 'declining epistemics' argument to defend EA orgs from criticism". In the blunt style you want: this is something between distortion and manipulation.
On my side, I wanted to express my view on the Wytham debate, and I wrote a comment expressing my views on it.
I also dislike the way my comment is straw-manned by selective quotation.
In the bullet point immediately following "The discussion often almost completely misses the direct, object-level, even if just at back-of-the-envelope estimate way," I do explicitly acknowledge the possibly large effects of higher-order factors.
What I object to is the combination of
1. ignoring the object level, or discussing it in a very lazy way, and
2. focusing on second-order effects, not in a systematic way but mostly based on salience and emotional pull (e.g., how will this look on Twitter).
Yes, it is a simple matter to judge where this leads in the limit. We have plenty of examples of what discourse looks like when it is completely taken over by these considerations, e.g., political campaigns. Words have little meaning connected to physical reality; they are mostly tools in the fight for the emotional states and minds of other people.
Also: while "those with high-quality epistemics usually agree on similar things" is a distortion that makes the argument personal, about people, in reality, yes, good reasoning often converges on similar conclusions.
Also: "It's a given that the path of catering to a smaller group of people with higher-quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower-quality epistemics"
No, it's not a given. So far, effective altruism has been about using evidence and reason to figure out how to benefit others as much as possible, and acting based on that. Based on the thinking so far, it was decidedly not trying to be a mass movement, making "our core insights more appealing to the public at large".
In my view, no one has yet figured out what an "appealing to the masses, no need to think much" version of effective altruism should look like in order to actually be good.
(edit: Also, I quite dislike the frame-manipulation move of shifting from "epistemic decline of the community" to "less intelligent or thoughtful people joining". You can imagine a randomized experiment where you take two groups of equally intelligent and thoughtful people and have them join communities with different epistemic cultures (e.g., physics and multi-level marketing). You will get very different results. While you seem to interpret a lot of things as being about people (are they smart? have they studied philosophy?), I think it's often much more about norms.)
Aumann's agreement theorem is pretty vacuous because the common prior assumption never holds in important situations; e.g., everyone has different priors on AI risk.
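For readers who want the precise claim being dismissed here, a rough sketch of the theorem's standard statement (the symbols $P$, $A$, $\mathcal{I}_i$, and $q_i$ are notation introduced just for this sketch): two agents share a common prior $P$, each conditions on their own private information $\mathcal{I}_i$, and both form posteriors for an event $A$.

$$
q_i = P(A \mid \mathcal{I}_i),\ \ i \in \{1,2\}; \qquad \text{if } q_1 \text{ and } q_2 \text{ are common knowledge, then } q_1 = q_2 .
$$

The conclusion hinges entirely on the single shared $P$; if the agents start from different priors, as with AI risk, the theorem's hypotheses fail and it says nothing.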