In 2017, I did my Honours research project on whether, and how much, fact-checking politicians’ statements influenced people’s attitudes towards those politicians, and their intentions to vote for them. (At my Australian university, “Honours” meant a research-focused, optional, selective 4th year of an undergrad degree.) With some help, I later adapted my thesis into a peer-reviewed paper: Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. This was all within the domains of political psychology and cognitive science.
During that year, and in a unit I completed earlier, I learned a lot about:
- how misinformation forms
- how it can be "sticky"
- how it can continue to influence beliefs, attitudes, and behaviours even after being corrected/retracted, and even if people do remember the corrections/retractions
- ways of counteracting, or attempting to counteract, these issues
- E.g., fact-checking, or warning people that they may be about to receive misinformation
- various related topics in the broad buckets of political psychology and how people process information, such as impacts of “falsely balanced” reporting
The research that’s been done in these areas has provided many insights that I think might be useful for various EA-aligned efforts. For some examples of such insights and how they might be relevant, see my comment on this post. These insights also seemed relevant in a small way in this comment thread, and in relation to the case for building more and better epistemic institutions in the effective altruism community.
I’ve considered writing something up about this (beyond those brief comments), but my knowledge of these topics is too rusty for that to be something I could smash out quickly and to a high standard. So I’d like to instead just publicly say I’m happy to answer questions related to those topics.
I think it’d be ideal for questions to be asked publicly, so others might benefit, but I’m also open to discussing this stuff via messages or video calls. The questions could be about anything from a super specific worry you have about your super specific project, to general thoughts on how the EA community should communicate (or whatever).
Disclaimers:
- In 2017, I probably wasn't adequately concerned about the replication crisis, and many of the papers I was reading predate psychology's attention being drawn to it. So we should assume some of my "knowledge" is based on findings that wouldn't replicate.
- I was never a “proper expert” in those topics, and I haven’t focused on them since 2017. (I ended up with First Class Honours, meaning that I could do a fully funded PhD, but decided against it at that time.) So it might be that most of what I can provide is pointing out key terms, papers, and authors relevant to what you’re interested in.
- If your question is really important, you may want to just skip to contacting an active researcher in this area or checking the literature yourself. You could perhaps use the links in my comment on this post as a starting point.
- If you think you have more or more recent expertise in these or related topics, please do make that known, and perhaps just commandeer this AMA outright!
(Due to my current task list, I might respond to things mostly from 14 May onwards. But you can obviously comment & ask things before then anyway.)
To get the ball rolling, and give examples of some insights from these areas of research and how they might be relevant to EA, here’s an adapted version of a shortform comment I wrote a while ago:
Potential downsides of EA's epistemic norms (which overall seem great to me)
This is a quick attempt to summarise some insights from psychological findings on the continued influence effect (CIE) of misinformation, and related areas, which might suggest downsides to some of EA's epistemic norms. Examples of the norms I'm talking about include just honestly contributing your views/data points to the general pool and trusting people will update on them only to the appropriate degree, or clearly acknowledging counterarguments even when you believe your position is strong.
From memory, this paper reviews research on CIE, and I perceived it to be high-quality and a good intro to the topic.
From this paper's abstract:
This seems to me to suggest some value in including "epistemic status" messages up front, but also that doing so doesn't make it totally "safe" to publish posts before having familiarised oneself with the literature and checked one's claims. (This may suggest potential downsides to both this comment and this whole AMA, so please consider yourself both warned and warned that the warning might not be sufficient!)
Similar things also make me a bit concerned about the “better wrong than vague” norm/slogan that crops up sometimes, and also make me hesitant to optimise too much for brevity at the expense of nuance. I see value in the “better wrong than vague” idea, and in being brief at the cost of some nuance, but it seems a good idea to make tradeoffs like this with these psychological findings in mind as one factor.
Here are a couple of other seemingly relevant quotes from papers I read back then (and haven't vetted since):
Two more examples of how these sorts of findings can be applied to matters of interest to EAs: