In 2017, I did my Honours research project on whether, and how much, fact-checking politicians’ statements influenced people’s attitudes towards those politicians, and their intentions to vote for them. (At my Australian university, “Honours” meant a research-focused, optional, selective 4th year of an undergrad degree.) With some help, I later adapted my thesis into a peer-reviewed paper: Does truth matter to voters? The effects of correcting political misinformation in an Australian sample. This was all within the domains of political psychology and cognitive science.
During that year, and in a unit I completed earlier, I learned a lot about:
- how misinformation forms
- how it can be sticky
- how it can continue to influence beliefs, attitudes, and behaviours even after being corrected/retracted, and even if people do remember the corrections/retractions
- ways of counteracting, or attempting to counteract, these issues
  - E.g., fact-checking, or warning people that they may be about to receive misinformation
- various related topics in the broad buckets of political psychology and how people process information, such as the impacts of “falsely balanced” reporting
The research that’s been done in these areas has provided many insights that I think might be useful for various EA-aligned efforts. For some examples of such insights and how they might be relevant, see my comment on this post. These insights also seemed relevant in a small way in this comment thread, and in relation to the case for building more and better epistemic institutions in the effective altruism community.
I’ve considered writing something up about this (beyond those brief comments), but my knowledge of these topics is too rusty for that to be something I could smash out quickly and to a high standard. So I’d like to instead just publicly say I’m happy to answer questions related to those topics.
I think it’d be ideal for questions to be asked publicly, so others might benefit, but I’m also open to discussing this stuff via messages or video calls. The questions could be about anything from a super specific worry you have about your super specific project, to general thoughts on how the EA community should communicate (or whatever).
Disclaimers:
- In 2017, I probably wasn’t adequately concerned about the replication crisis, and many of the papers I was reading were from before psychology’s attention was drawn to it. So we should assume some of my “knowledge” is based on papers that wouldn’t replicate.
- I was never a “proper expert” in those topics, and I haven’t focused on them since 2017. (I ended up with First Class Honours, meaning that I could do a fully funded PhD, but decided against it at that time.) So it might be that most of what I can provide is pointing out key terms, papers, and authors relevant to what you’re interested in.
- If your question is really important, you may want to just skip to contacting an active researcher in this area or checking the literature yourself. You could perhaps use the links in my comment on this post as a starting point.
- If you think you have more, or more recent, expertise in these or related topics, please do make that known, and perhaps just commandeer this AMA outright!
(Due to my current task list, I might respond to things mostly from 14 May onwards. But you can obviously comment & ask things before then anyway.)
Agreed. But I don’t think we could do that without changing the environment a little bit. My point is that rationality isn’t just about avoiding false beliefs (maximal skepticism), but about forming beliefs adequately, and it’s far more costly to do that in some environments. Think about the different degrees of caution one needs when reading something in a peer-reviewed meta-analysis, in a Wikipedia entry, in a newspaper, in a WhatsApp message...
The core issue isn’t really “statements that are false”, or the people who are actually fooled by them. The problem is that, if I’m convinced I’m surrounded by lies and nonsense, I’ll keep following the same path I was on before (because I have a high credence that my beliefs are OK); it will just fuel my confirmation bias. Thus, the real problem with fake news is an externality. I haven’t found any paper testing this hypothesis, though. If it is right, then most of the articles I’ve seen arguing that “fake news didn’t affect political outcomes” might be wrong.
You can fool someone without telling any lies at all. To steal an example I once saw on LW (I’m still trying to find the source): imagine a random sequence of 0s and 1s; now, an Agent feeds a Principal true information about the sequence, like “there is a 1 in position n”. To make the Principal believe the sequence is mostly made of 1s, all the Agent has to do is select which information to report, like “there are 1s in positions n, m, and o”.
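To make the selection effect concrete, here’s a minimal Python sketch (my own hypothetical illustration, not the original LW example): every statement the Agent makes is true, yet a Principal who treats the reports as a representative sample ends up badly miscalibrated.

```python
import random

random.seed(0)
sequence = [random.randint(0, 1) for _ in range(1000)]  # roughly 50% ones

def honest_sample(seq, n):
    """Report n positions chosen at random (no selection)."""
    positions = random.sample(range(len(seq)), n)
    return [(i, seq[i]) for i in positions]

def selective_sample(seq, n):
    """Report only positions containing a 1 (every report is still true)."""
    ones = [i for i, digit in enumerate(seq) if digit == 1]
    return [(i, seq[i]) for i in random.sample(ones, n)]

def naive_estimate(reports):
    """A Principal who treats the reports as a representative sample."""
    return sum(digit for _, digit in reports) / len(reports)

print(naive_estimate(honest_sample(sequence, 50)))     # close to 0.5
print(naive_estimate(selective_sample(sequence, 50)))  # exactly 1.0
```

The Principal’s only defence is to model how the Agent selects what to report, not just whether each individual report is true.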
But why would someone hire such an Agent? Well, maybe the Principal is convinced most other accessible agents are liars; it’s even worse if the Agent already knows some of the Principal’s biases, and easier still if Principals with similar biases are clustered in groups with similar interests and jobs - like social activists, churches, military staff, and financial investors. Even denouncing this scenario does not necessarily improve things; I think that, at least in some countries, political outcomes were affected by common knowledge of statements like “military personnel support this; financial investors would never accept that”. If you can convince voters they’ll face an economic crisis or political instability by voting for candidate A, they will avoid doing so.
My personal anecdote on how this process may work for a smart and scientifically educated person: I remember having a conversation with a childhood friend, who surprised me by being a climate change denier. I tried my “rationality skills” in arguing with him; to summarize, he replied that greenhouses work by convection, which wouldn’t extrapolate to the atmosphere. I was astonished that I had overlooked this point so far (well, maybe it was mentioned en passant in a science class), and that he didn’t take two minutes to google it (and find out that, yes, “greenhouse” is an analogy; the actual mechanism is that CO2 absorbs outgoing infrared radiation and re-emits part of it back towards Earth); but maybe I wouldn’t have done so myself if I didn’t already know that CO2 is pivotal in keeping Earth warm. However, after days of this, there was no happy ending: our discussion basically concluded with me pointing out that a) he couldn’t provide any scientific paper backing his overall thesis (even though I would have been happy to pay him if he could); and b) he kept offering objections to “anthropogenic global warming” without even caring to put a consistent credence on them - like first pointing to alternative causes of the warming, and then denying the warming itself. He didn’t really believe (i.e., assign a high posterior credence to) the claim that there was no warming, or that it was a random anomaly, because those claims would be ungrounded, and so an easy target in a discussion. Since then, we have barely spoken.
P.S.: I wonder if fact-checking agencies could evolve into some sort of “rating agencies”; I mean, they shouldn’t only screen for false statements, but actually provide information about who is accurate - thereby mitigating what I’ve been calling the “lemons problem in news”. But who rates the raters? Besides the risk of capture, I don’t know how to make people actually trust the agencies in the first place.