
Longer title for this question: To what extent does misinformation/disinformation (or the rise of deepfakes) pose a problem? And to what extent is it tractable?

  1. Are there good analyses of the scope of this problem? If not, does anyone want to do a shallow exploration?
  2. Are there promising interventions (e.g. certificates of some kind) that could be effective (in the important sense)?
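On the certificates idea in (2): one family of proposed interventions is content provenance, where media is cryptographically signed at capture or publication time so that any later tampering is detectable (the C2PA standard works roughly along these lines). Below is a minimal sketch, with a hypothetical publisher key, using an HMAC where a real system would use public-key signatures and signed metadata:

```python
import hashlib
import hmac

# Hypothetical sketch: a camera or publisher holds a signing key and signs
# each media file at capture/publish time; verifiers can then check that
# the bytes are unmodified. Real provenance schemes (e.g. C2PA) use
# public-key signatures; HMAC here only keeps the sketch dependency-free.

SECRET_KEY = b"publisher-signing-key"  # placeholder, not a real key

def sign_media(media_bytes: bytes) -> str:
    """Return a hex certificate binding the key holder to these exact bytes."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, certificate: str) -> bool:
    """Check that the media matches the certificate (constant-time compare)."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, certificate)

original = b"...video bytes..."
cert = sign_media(original)
print(verify_media(original, cert))          # True: untampered
print(verify_media(original + b"x", cert))   # False: any edit breaks the cert
```

In a real provenance system the certificate and signing-chain metadata travel with the file, and verification uses the publisher's public key, so anyone can check authenticity without holding a secret.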

Context and possibly relevant links: 

I’m posting this because I’m genuinely curious, and feel like I lack a lot of context on this. I haven't done any relevant research myself.



4 Answers

This isn't a particularly deep or informed take, but my perspective on it is that the "misinformation problem" is similar to what Scott called the cowpox of doubt:

What annoys me about the people who harp on moon-hoaxing and homeopathy – without any interest in the rest of medicine or space history – is that it seems like an attempt to Other irrationality.

It’s saying “Look, over here! It’s irrational people, believing things that we can instantly dismiss as dumb. Things we feel no temptation, not one bit, to believe. It must be that they are defective and we are rational.”

But to me, the rationality movement is about Self-ing irrationality.

It is about realizing that you, yes you, might be wrong about the things that you’re most certain of, and nothing can save you except maybe extreme epistemic paranoia.

10 years ago, it was popular to hate on moon-hoaxing and homeopathy, now it's popular to hate on "misinformation". Fixating on obviously-wrong beliefs is probably counterproductive to forming correct beliefs on important and hard questions.

You mean people hate on others who fall for misinformation? I haven't noticed that so far. My impression of the misinformation discourse is ~ "Yeah, this shit is scary, today it might still be mostly easy to avoid, but we'll soon drown in an ocean of AI-generated misinformation!"

Which also doesn't seem right. I expect this to be in large part a technical problem that will mostly get solved, because it is, and probably will remain, such a prominent issue in the coming years, affecting many of the most profitable tech firms.

Excerpt from Deepfakes: A Grounded Threat Assessment - Center for Security and Emerging Technology (I haven't read the whole paper):

This paper examines the technical literature on deepfakes to assess the threat they pose. It draws two conclusions. First, the malicious use of crudely generated deepfakes will become easier with time as the technology commodifies. Yet the current state of deepfake detection suggests that these fakes can be kept largely at bay. 

Second, tailored deepfakes produced by technically sophisticated actors will represent the greater threat over time. Even moderately resourced campaigns can access the requisite ingredients for generating a custom deepfake. However, factors such as the need to avoid attribution, the time needed to train an ML model, and the availability of data will constrain how sophisticated actors use tailored deepfakes in practice.

Based on this assessment, the paper makes four recommendations:

  • Build a Deepfake “Zoo”: Identifying deepfakes relies on rapid access to examples of synthetic media that can be used to improve detection algorithms. Platforms, researchers, and companies should invest in the creation of a deepfake “zoo” that aggregates and makes freely available datasets of synthetic media as they appear online.
  • Encourage Better Capabilities Tracking: The technical literature around ML provides critical insight into how disinformation actors will likely use deepfakes in their operations, and the limitations they might face in doing so. However, inconsistent documentation practices among researchers hinder this analysis. Research communities, funding organizations, and academic publishers should work toward developing common standards for reporting progress in generative models.
  • Commodify Detection: Broadly distributing detection technology can inhibit the effectiveness of deepfakes. Government agencies and philanthropic organizations should distribute grants to help translate research findings in deepfake detection into user-friendly apps for analyzing media. Regular training sessions for journalists and professions likely to be targeted by these types of techniques may also limit the extent to which members of the public are duped.
  • Proliferate Radioactive Data: Recent research has shown that datasets can be made “radioactive.” ML systems trained on this kind of data generate synthetic media that can be easily identified. Stakeholders should actively encourage the “radioactive” marking of public datasets likely to train deep generative models. This would significantly lower the costs of detection for deepfakes generated by commodified tools. It would also force more sophisticated disinformation actors to source their own datasets to avoid detection.
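The "radioactive data" recommendation can be illustrated with a toy experiment. The sketch below is my own simplification, not the actual method from the underlying research: it marks one class of a synthetic dataset with a faint fixed direction, trains a simple perceptron, and then checks whether the learned weights are measurably aligned with the marker.

```python
import random

# Toy illustration of "radioactive data": a dataset owner adds a fixed
# marker direction to one class's features before release. A model trained
# on the marked data absorbs that direction into its weights, so the owner
# can later test a suspect model for alignment with the marker. The
# dimensions, perceptron learner, and marker choice are all illustrative
# assumptions, not the published technique.

random.seed(0)
DIM, N, EPOCHS, EPS = 16, 1000, 20, 1.0
MARKER = [0.0] * DIM
MARKER[1] = 1.0  # unit marker direction, independent of the true signal

def make_data(marked: bool):
    data = []
    for _ in range(N):
        x = [random.uniform(-1, 1) for _ in range(DIM)]
        y = 1 if x[0] > 0 else -1          # true signal lives on coordinate 0
        if marked and y == 1:              # watermark only the positive class
            x = [xi + EPS * mi for xi, mi in zip(x, MARKER)]
        data.append((x, y))
    return data

def train_perceptron(data):
    w = [0.0] * DIM
    for _ in range(EPOCHS):
        for x, y in data:
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

def marker_alignment(w):
    """Cosine similarity between the learned weights and the marker."""
    norm = sum(wi * wi for wi in w) ** 0.5
    return sum(wi * mi for wi, mi in zip(w, MARKER)) / norm

clean_score = marker_alignment(train_perceptron(make_data(marked=False)))
marked_score = marker_alignment(train_perceptron(make_data(marked=True)))
print(f"alignment, clean model:  {clean_score:+.3f}")
print(f"alignment, marked model: {marked_score:+.3f}")
```

The point of the toy is that the marker survives training: the owner of the marked dataset can test a suspect model for this alignment without any access to its training pipeline, which is what would lower detection costs for commodified deepfake tools.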

Is it tractable?

  1. One might argue that the amount of misinformation in the world is decreasing, not increasing. Maybe we're much more aware of it, which would be a good thing.
  2. LessWrong and the EA Forum are making progress on this, no? This is one of my top ideas for how tech can help our causes.
  3. Wikipedia also helps a lot, I think. There might be other such ideas (because of inadequate equilibria), so if we find them, it might be a worthy use of EA founders and funds: a relatively easy way to provide a ton of value to society in a way that is hard (or maybe impossible) to monetize.

Regarding deepfakes, Scott Alexander wrote about them in his review of Human Compatible: https://slatestarcodex.com/2020/01/30/book-review-human-compatible/

This part stuck with me:

Also, it’s hard to see why forging videos should be so much worse than forging images through Photoshop, forging documents through whatever document-forgers do, or forging text through lying. Brookings explains that deepfakes might cause nuclear war because someone might forge a video of the President ordering a nuclear strike and then commanders might believe it. But it’s unclear why this is so much more plausible than someone writing a memo saying “Please launch a nuclear strike, sincerely, the President” and commanders believing that. Other papers have highlighted the danger of creating a fake sex tape with a politician in order to discredit them, but you can already convincingly Photoshop an explicit photo of your least favorite politician, and everyone will just laugh at you.

Comments

One speculative, semi-vague, and perhaps hedgehoggy point that I've often come back to when thinking about this:

I think it's quite possible that many people hold beliefs/assumptions about democracies which cause them to grossly (albeit perhaps not ultimately) underestimate the threat of mis- and dis-information in democracies. In conversations and research presentations I've listened to, I've frequently heard people frame audiences believing misinformation/disinformation as those audiences making some mistake or irrational choice. This certainly makes sense when it comes to conspiracy theories that tell you to do personally harmful things, like not getting any vaccines or foolishly investing all of your money in some bubble. However, I feel that people in these conversations/presentations occasionally confuse epistemic rationality (i.e., wanting to have accurate beliefs) with instrumental rationality (i.e., wanting to do--including believe--whatever maximizes one's own interests): sometimes having inaccurate beliefs is more personally beneficial than having accurate beliefs, especially for social or psychological reasons.

This stands out most strongly when it comes to democracies and voting: unlike your personal medical and financial choices, your voting behavior has effectively no "ostensible personal impact" (i.e., on who gets elected and subsequently what policies are put into place which affect you). Given this, lines of reasoning such as "voters are incentivized to have accurate beliefs because if they believe crazy things they're more likely to support policies that harm themselves" are flawed.

In reality, rather than framing the question by simply asking "why do voters have these irrational beliefs / why are they making these mistakes", I think it's important to also ask "Why would we even expect these voters to have accurate beliefs in the first place?"

Ultimately, I have more nuanced views on the potential health and future of democracy, but I think that disinformation/misinformation strikes at one of the core weak points of democracy: [setting aside the non-democratic features of democracies (e.g., non- or semi-democratic institutions within government)] democracies manage to function largely because voters are 1) delusional about the impact of their voting choices, and/or 2) motivated by psychological and social reasons--including some norms like "I shouldn't believe crazy things"--to make somewhat reasonable decisions. Mis- and dis-information, however, seem to undermine these norms.

There is a lot of thought in this post and a lot of dense context provided in the links.


Overall, "misinformation" seems like an extremely broad area. I find it difficult to situate and absorb the information presented in the links.

The OP has put a lot of content into deepfakes. This seems important, but it's unclear whether this is the subject she is most interested in, or how it relates to "misinformation" overall.

I wish I had more knowledge about what misinformation is and how we should think about it, or its opposite, "Truth". For example, in the ongoing invasion of Ukraine, Ukrainian-aligned content has dominated Western social media. This content isn't entirely truthful, yet it probably serves the principles of justice and freedom in a way that most people like.


Maybe a way to get more replies and engagement would be for the OP to provide a few paragraphs on what they are most interested in (maybe it is deep fakes, or maybe something else) or provide their views and concerns. 

Also potentially relevant: a skeptical talk on "media literacy" I enjoyed skimming: https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2
