[Epistemic status: Was originally going to be a comment, but I decided to make a post because the topic seemed to be of general interest.]
I'm not sure how I feel about the idea that people should familiarize themselves with the literature on a topic before blogging about it.
My current hunch is that it's good for people to write blog posts about things even if they aren't familiar with the literature (though it might be even better if they wrote their post, then became familiar with the literature, then edited their post accordingly and published it). Here are some perspectives I've seen on each side:
-
In machine learning, "bagging" approaches, where we combine the output of lots of classifiers each trained on a different bootstrap sample of the training data, often outperform a single complex model with access to all the data. This isn't necessarily an argument for the wisdom of the crowd per se, but it seems like an argument for the wisdom of a reasonably informed crowd of people coming at something from a variety of perspectives.
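Here's a minimal sketch of that comparison, assuming scikit-learn is available; the synthetic dataset and hyperparameters are my own illustrative choices, not anything canonical, and the size of the effect varies by problem:

```python
# A sketch comparing a single decision tree with a bagged ensemble.
# Assumes scikit-learn; the synthetic dataset is a hypothetical stand-in.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification problem (illustrative only).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# One deep tree trained on all the data...
single_tree = DecisionTreeClassifier(random_state=0)

# ...versus 100 trees, each fit on a bootstrap sample, predictions averaged.
bagged_trees = BaggingClassifier(
    DecisionTreeClassifier(), n_estimators=100, random_state=0
)

print("single tree:", cross_val_score(single_tree, X, y, cv=5).mean())
print("bagged trees:", cross_val_score(bagged_trees, X, y, cv=5).mean())
```

On toy problems like this the bagged ensemble usually scores higher in cross-validation, largely because averaging over bootstrap-trained trees reduces the variance of any single deep tree.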
-
When I read about highly creative people (Turing award winner types--this might be more true in computer science than other fields), a recurring theme is the importance of reinventing things from scratch without reading the thoughts other people have already had about a topic, applying ideas from apparently unrelated fields, and more generally bringing a novel perspective to bear.
-
Even absent the benefits of originality, I think reasoning things out for oneself can be a great way to internalize them. ("What I cannot create, I do not understand" - Richard Feynman.) The argument for publishing is less clear in this case, but you could think of a forum discussion as serving a similar purpose as a recitation or discussion section in a university course, where one gets to examine a topic through conversation. You and the people who comment on your post get some social reward in the process of thinking about a topic.
However, I'm also worried that if a blog post about a topic is more accessible than a journal paper, it might end up being read by more people and factored into our collective decision-making more than it deserves on a purely epistemic basis. Of course, people are already spending much more time reading Facebook posts than journal papers--probably including people in the EA movement--so it's not clear how to think about the harms here in general. (For example, if reading the EA Forum usually funges against Facebook for people, and EA Forum posts are generally higher quality than Facebook posts, that seems like an argument for more EA Forum posts.)
In any case, you can mitigate these risks by marking your blog post with an epistemic status before publishing. It certainly seems acceptable to me to link to a relevant paper in the comments of a blog post about a topic that has been covered in the literature (e.g. "Author X [disagrees] with you"). But I think it can be even better to additionally summarize the argument that Author X makes in a way that feels personally compelling to you and take the opportunity to re-examine the argument with fresh eyes--see my previous complaints about information cascades in the context of Effective Altruism.
I agree with Huemer, and liked his post.
Regarding your question, I think one definitely should familiarize oneself with the literature before writing an EA post. While there could be benefits of not having read about other people's thoughts, I think they are on average substantially outweighed by the benefits of having read about them. Most uninformed posts are not creative, but simply miss key considerations, and therefore fail to contribute.
Posts written by people who haven't familiarized themselves with the literature will probably be quite high variance: lots of low-value posts, with the chance of the odd novel insight. I fear that time-pressed readers wouldn't want to take part in such a forum, because the ratio of reward to effort would be too poor. I certainly wouldn't find it enjoyable.
In fact, I'd like the average EA Forum post to more thoroughly discuss the existing literature than is the case at present.
(Upvoted)
Maybe it's possible to develop more specific guidelines here. For example, your comment implies that you think it's essential to know all the key considerations. OK... but I don't see why ignorance of known key considerations would prevent someone from pointing out a new key consideration. And if we discourage them from making their post, that could be very harmful, because as you say, it's important to know all the key considerations.
In other words, maybe it's worth differentiating the act of generating intellectual raw material, and the act of drawing conclusions.
I see your intuition, but in practice I think most uninformed posts are poor. (In principle, that could be studied empirically.) That's also my experience from academia: lack of knowledge is usually more of a problem than an advantage.
I agree that sometimes outsiders make breakthroughs in scientific fields, but would guess that even those outsiders are usually relatively well acquainted with the field they're entering. They've normally familiarized themselves with it. So I'm not sure whether it's right to view those cases as evidence that forum participants shouldn't familiarize themselves with the literature. I'd guess they point more to the value of intellectual diversity and a multiplicity of perspectives.
One possible synthesis comes from Turing award winner Richard Hamming's book The Art of Doing Science and Engineering. He's got chapters at the end on Creativity and Experts. The chapters are somewhat rambly and I've quoted passages below. My attempt to summarize Hamming's position: Having a deep intellectual toolkit is valuable, but experts are often overconfident and resistant to new ideas.
Chapter 25: Creativity
Chapter 26: Experts
Hamming shares a number of stories from the history of science to support his claims. He also says he has more stories which he didn't include in the chapter, and that he looked for stories which went against his position too.
A couple takeaways:
Survivorship bias regarding stories of successful contrarians - most apparent crackpots actually are crackpots.
Paradigm shifts - if an apparent crackpot is not actually a crackpot, their idea has the potential to be extremely important. So shutting down all the apparent crackpots could have quite a high cost even if most are full of nonsense. As Jerome Friedman put it regarding the invention of bagging (coincidentally mentioned in the main post):
However, even if one accepts the premise that apparent crackpots deliver surprisingly high expected value, it's still not obvious how many we want on the Forum!
"Breakthroughs" feel like the wrong thing to hope for from posts written by non-experts. A lot of the LW posts that the community now seems to consider most valuable weren't "breakthroughs". They were more like explaining a thing, such that each individual fact in the explanation was already known, but the synthesis of them into a single coherent explanation that made sense either hadn't previously been done, or had been done only within the context of an academic field buried in inferential distance. Put another way, it seems like it's possible to write good popularizations of a topic without being intimately familiar with the existing literature, if it's the right kind of topic. Though I imagine this wouldn't be much comfort to someone who is pessimistic about the epistemic value of popularizations in general.
The Huemer post kind of just felt like an argument for radical skepticism outside of one's own domain of narrow expertise, with everything that implies.
Ah, but should you familiarize yourself with the literature on familiarizing yourself with the literature before writing an EA Forum post?
Clever :)
However, I'm not sure that post follows its own advice, as it appears to be essentially a collection of anecdotes. And it's possible to marshal anecdotes on both sides, e.g. here is Claude Shannon's take:
[Edit: I just read that Shannon and Hamming, another person I cited in this thread, apparently shared an office at Bell Labs, so their opinions may not be 100% independent pieces of evidence. They also researched similar topics.]
It seems clear to me that epistemic-status disclaimers don't work for the purpose of mitigating the negative externalities of people saying wrong things, especially wrong things in domains where people naturally tend towards overconfidence (I have in mind anything that has political implications, broadly construed). This follows straightforwardly from the phenomenon of source amnesia, and anecdotally, there doesn't seem to be much correlation between how much, say, Scott Alexander (whom I'm using here because his blog is widely read) hedges in the disclaimer of any given post and how widely that post winds up being cited later on.
Interesting thought, upvoted!
Is there particular evidence for source amnesia you have in mind? The abstract for the first Wikipedia citation says:
So I guess the question is whether the epistemic status disclaimer falls into the category of source info that people will remember ("an experimenter told me X") or source info that people often forget ("Experimenter A told me X"). (Or whether it even makes sense to analyze epistemic status in the paradigm of source info at all--for example, including an epistemic status could cause readers to think "OK, these are just ideas to play with, not solid facts" when they read the post, and have the memory encoded that way, even if they aren't able to explicitly recall a post's epistemic status. And this might hold true regardless of how widely a post is shared. Like, for all we know, certain posts get shared more because people like playing with new ideas more than they like reading established facts, but they're pretty good at knowing that playing with new ideas is what they're doing.)
I think if you fully buy into the source amnesia idea, that could be considered an argument for posting anything to the EA Forum which is above average quality relative to a typical EA information diet for that topic area--if you really believe this source amnesia thing, people end up taking Facebook posts just as seriously as papers they read on Google Scholar.
Epistemic status: During my psychology undergrad, I did a decent amount of reading on relevant topics, in particular under the broad label of the "continued influence effect" (CIE) of misinformation. My Honours thesis (adapted into this paper) also partially related to these topics. But I'm a bit rusty (my Honours was in 2017).
From this paper's abstract:
This seems to me to suggest some value in including "epistemic status" messages up front, but that this doesn't make it totally "safe" to make posts before having familiarised oneself with the literature and checked one's claims.
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
Here are a couple of other seemingly relevant quotes from papers I read back then:
I've been considering brushing up on this literature to write a post for the forum on how to balance the risks of spreading misinformation/flawed ideas against norms among EAs and rationalists around things like just honestly contributing your views/data points to the general pool and trusting that people will update on them only to the appropriate degree. Reactions to this comment will inform whether I decide that investing time in that would be worthwhile.
Yeah, I should have known I'd get called out for not citing any sources. I'm honestly not sure I'd particularly believe most studies on this no matter what side they came out on; too many ways they could fail to generalize. I am pretty sure I've seen LW and SSC posts get cited as more authoritative than their epistemic-status disclaimers suggested, and that's most of why I believe this; generalizability isn't a concern here since we're talking about basically the same context. Ironically, though, I can't remember which posts. I'll keep looking for examples.
Another thought is that even if the original post had a weak epistemic status, if it becomes popular, receives widespread scrutiny, and survives it, it could be reasonable to believe its "de facto" epistemic status is higher than what's posted at the top. But yes, I guess in that case there's the risk that none of the people who scrutinized it had familiarity with relevant literature that contradicted the post.
Maybe the solution is to hire someone to do lit reviews to carefully examine posts with epistemic status disclaimers that nonetheless became popular and seem decision relevant.
If someone wants to toss off a quick idea with low confidence, it doesn't seem too important to dig deep into literature; anyone who wants to take the idea more seriously can do related research themselves and comment with their results. (Of course, better literature than no literature, but not having that background doesn't seem so bad.)
On the other hand, if someone wants to sink many hours into writing a post about ideas in which they are confident, it seems like a very good idea to be familiar with extant literature.
In particular, if you are trying to argue against expert consensus, or take a firm stance on a controversial issue, you should read very closely about the ideas you want to criticize, and perhaps even seek out an expert who disagrees with you to see how they think. Some of the lowest-value posts I see (all over the internet, not just on the Forum) are those which present a viewpoint along the lines of "experts are generally wrong, I've discovered/uncovered the truth!" but don't seriously engage with why experts believe what they believe.
Readers interested in this post may also like this post on epistemic modesty.
More thoughts re: the wisdom of the crowd: I suppose it works best when each crowd member is in some sense an "unbiased estimator" of the quantity to be estimated. For example, suppose we ask a crowd to estimate the weight of a large object, but only a few "experts" in the crowd know that the object is hollow inside. In this case, the estimate of a randomly chosen expert could beat the average estimate of the rest of the crowd. I'm not sure how to translate this into a more general-purpose recommendation though.
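To make the toy example concrete, here's a quick simulation; all of the numbers, the 95/5 split, and the noise level are hypothetical, chosen only to illustrate the bias:

```python
# Hypothetical crowd-vs-expert simulation: 95 non-experts share a
# systematic bias (they assume the object is solid); 5 experts know
# it's hollow. All numbers are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

true_weight = 60.0    # actual weight of the hollow object (hypothetical)
solid_weight = 100.0  # what it would weigh if solid (hypothetical)

non_experts = rng.normal(solid_weight, 10.0, size=95)  # biased estimates
experts = rng.normal(true_weight, 10.0, size=5)        # unbiased estimates

crowd_average = np.concatenate([non_experts, experts]).mean()
random_expert = rng.choice(experts)

print(f"crowd average estimate: {crowd_average:.1f}")  # lands near 100
print(f"random expert estimate: {random_expert:.1f}")  # centered on 60
```

The crowd's average lands near 100 because 95 of the 100 estimates share the same systematic error; a randomly chosen expert has more variance than the averaged crowd but no bias, so here the expert wins.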
My guess is that reading a bunch of EA posts is not the thing you really care about if, say, your goal is for people to engage fruitfully on EA topics with people already in the EA movement.
By way of comparison, over on LW I have the impression (that is, I think I have seen this pattern but don't want to go to the trouble of digging up example links) that there are folks trying to engage on the site who claim to have read large chunks of the Sequences but also produce low-quality content, and then there are also people who haven't read a lot of the literature who manage to write things that engage well with the site or do well engaging in rationalist discussions in person.
Reading background literature seems like one way that sometimes works to make a person into the kind of person who can engage fruitfully with a community, but I don't think it always works, and it's not the thing itself--which is why I think you see such differing views when you look for related thinking on the topic.
I didn't necessarily take "engage with the literature" to refer to reading previous EA posts. That would be helpful in many cases, but doesn't seem realistic until the Forum has a working search engine. However, I would like to see more people who write posts on topics like political science, computer science, international aid, or philosophy do a quick Google scholar search before posting their ideas.
site:forum.effectivealtruism.org
on Google has been working OK for me.