Content warning: discussion of the sexual abuse of children


Related: Is preventing child abuse a plausible Cause X?

Recently, the New York Times ran a long piece (a) about the rise of child pornography.

The trend is grim:

  • In 1998 – 3,000 reports of child sexual abuse imagery online
  • In 2008 – 100,000 reports of child sexual abuse imagery online
  • In 2014 – 1 million reports of child sexual abuse imagery online
  • In 2018 – 18.4 million (!) reports of child sexual abuse imagery online

This is obviously horrendous. And if the relationship between childhood trauma and negative later-life outcomes found in the ACE study holds, the trend could have very large downstream consequences.

As a refresher, here's a passage about the ACE study from The Body Keeps the Score:

The first time I heard Robert Anda present the results of the ACE study, he could not hold back his tears. In his career at the CDC he had previously worked in several major risk areas, including tobacco research and cardiovascular health.
But when the ACE study data started to appear on his computer screen, he realized that they had stumbled upon the gravest and most costly public health issue in the United States: child abuse.
[Anda] had calculated that its overall costs exceeded those of cancer or heart disease and that eradicating child abuse in America would reduce the overall rate of depression by more than half, alcoholism by two-thirds, and suicide, IV drug use, and domestic violence by three-quarters. It would also have a dramatic effect on workplace performance and vastly decrease the need for incarceration.

It's not clear whether the increase in child abuse imagery circulating online implies an increase in child abuse, though that seems very plausible. (Some anecdotal evidence for increased incidence of abuse, from the Times piece: "In some online forums, children are forced to hold up signs with the name of the group or other identifying information to prove the images are fresh.")

Why is this happening?

Comments

I doubt porn-related child abuse is growing.

NCMEC says that reports of child porn are growing, but that growth could easily be in reports per posting, postings per image, or images per activity. NCMEC just *counts* reports, which are either a member of the public clicking a "report" button or an algorithm finding suspicious content. They acknowledge that a significant part of the rise is from broader deployment of such algorithms.
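To make that concrete, here's a minimal back-of-envelope sketch (every multiplier below is made up for illustration, not measured) of how modest growth at each step in the chain can compound into a large rise in reports with no rise in underlying abuse:

```python
# Reports sit several multiplicative steps away from abuse itself:
#   reports = activities x images/activity x postings/image x reports/posting
# Hypothetical 2014 -> 2018 growth multipliers for each factor:
growth = {
    "abusive activities": 1.0,    # assume underlying abuse is flat
    "images per activity": 1.8,   # producing images got easier
    "postings per image": 2.5,    # sharing and re-posting got easier
    "reports per posting": 4.0,   # broader automated detection
}

total = 1.0
for factor, multiplier in growth.items():
    total *= multiplier

print(f"implied growth in reports: {total:.1f}x")  # 18.0x, abuse unchanged
```

None of those per-factor numbers are real; the point is only that an 18x rise in reports is arithmetically compatible with a flat rate of abuse.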

Similarly, the fraction of porn-producing activities which involve traumatic abuse is unclear, and it is likely declining, judging by common anecdotes of sexualized teenage selfies. I realize anecdotes are weak evidence at best, but producing such images is becoming easier, and puberty ages are dropping, so I'll stand by my weak claim.

NCMEC cites IWF as saying that "28% of CSAI images involve rape and sexual torture", but I cannot find a matching statement in IWF's report. The closest I find is "28% of these reports [from members of the public] correctly identified child sexual abuse images," but IWF seems to regard any sexualized imagery of an under-18-year-old as "abuse", even if no other person is involved.

In any case, the IWF report is from 2016 and clearly states that "self-produced content" is increasing, and the share of content which involves children under 10 is decreasing (10 is an awkward age to draw a line at, but it's the one they reported on). Likely these trends continued into 2018.

On the meta level, I note that NCMEC and IWF are both organizations whose existence depends on the perceived severity of internet child porn problems, and NYT's business model depends on general dislike of the internet. I don't suspect any of these organizations of outright fraud, but I doubt they've been entirely honest either.

>NCMEC says that reports of child porn are growing, but that growth could easily be in reports per posting, postings per image, or images per activity. NCMEC just *counts* reports, which are either a member of the public clicking a "report" button or an algorithm finding suspicious content. They acknowledge that a significant part of the rise is from broader deployment of such algorithms.

Good point. I wonder:

  • Did algorithm deployment expand a lot from 2014 to 2018? (I'm particularly boggled by the 18x increase in reports between 2014 and 2018)
  • What amount of the increase seems reasonable to explain away by changes in reporting methods? (rough arithmetic in the sketch below)
    • About half? (i.e. remaining 2014-to-2018 increase to be explained is 9x?)
    • 75%? (i.e. remaining 2014-to-2018 increase to be explained is 4x?)
From the NCMEC report:
A major contributor to the observed exponential growth is the rise of proactive, automated detection efforts by ESPs [electronic service providers], as shown in Figure 3. Since then, reporting by ESPs increased an average of 101% year-over-year, likely due to increasing user bases and an influx of user-generated content. While automated detection solutions help ESPs scale their protections, law enforcement and NCMEC analysts currently contend with the deluge of reports in a non-automated fashion, as they are required to manually review the reports.
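A quick sanity check on these numbers (a rough sketch; the 50% and 75% attribution shares are the hypotheticals from the list above, not NCMEC figures):

```python
# NCMEC reports: ~1 million in 2014, 18.4 million in 2018.
reports_2014, reports_2018, years = 1.0e6, 18.4e6, 4

total_growth = reports_2018 / reports_2014          # 18.4x over four years
yoy = total_growth ** (1 / years) - 1               # implied annual growth rate
print(f"implied year-over-year growth: {yoy:.0%}")  # ~107%, near NCMEC's 101%

# If some share of 2018's reports came purely from wider automated
# detection, the residual growth left to explain shrinks proportionally:
for share in (0.50, 0.75):
    residual = total_growth * (1 - share)
    print(f"{share:.0%} from detection -> {residual:.1f}x left to explain")
# 50% -> 9.2x and 75% -> 4.6x, matching the ~9x and ~4x in the list above
```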

I’d only be surprised if this were a different trend from the total amount of pornography available online. The internet allows people to coordinate better and increase the demand for lots of products and industries - including illegal and immoral ones - especially where the product is images.

I don't think the amount of porn overall increased 18x from 2014 to 2018.

Hard to find a perfect statistic for this... PornHub reported 18.4 billion visits (a, SFW) in 2014 and 33.5 billion visits (a, SFW) in 2018.

So a ~2x increase in visits from 2014 to 2018.
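Putting the two growth rates side by side (PornHub visits are only a crude proxy for total porn volume, so treat this as a rough comparison):

```python
# 2014 -> 2018 growth multipliers, from the figures above
visits_growth = 33.5 / 18.4   # PornHub visits: ~1.8x
reports_growth = 18.4 / 1.0   # NCMEC abuse-imagery reports: 18.4x

print(f"visits grew ~{visits_growth:.1f}x; reports grew {reports_growth:.1f}x")
print(f"reports outpaced visits by ~{reports_growth / visits_growth:.0f}x")  # ~10x
```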

My suspicion is that we are seeing a "one time" increase due to better ability to create and share child abuse content. That is, my guess is the incidence rate of child abuse is not much changing, but the visibility of it is, because it's become easier to produce and share content featuring the actions that were already happening privately. I could imagine some small (let's say 10%) marginal increase in abuse incentivized by the ability to share, but on the whole I expect the majority of child abusers are continuing to abuse at the same rate.

Most of this argument rests on a prior I have that unexpected large increases like this are usually not signs of change in the thing we care about, but instead changes in secondary things that make the primary thing more visible. I'm sure I could be convinced this was evidence of an increase in child abuse proportionate with the reported numbers, but, lacking such evidence, I think it far more likely that it's mostly explained by increased ease of producing and sharing content.

I don't think this explains the 18x increase between 2014 and 2018. Communication technology didn't change much in that timeframe, and it'd be surprising if child porn communities substantially lagged behind the mainstream in terms of their tech (there are heavy incentives for them to stay up-to-date).

>Communication technology didn't change much in that timeframe

I find it plausible that the de facto availability of secure communication channels lowered the technical bar enough that thresholds were passed in that time frame.

Yeah, maybe. Messenger's user base doubled over that timeframe, though it was already at 600 million users in early 2015.

Facebook did roll out opt-in end-to-end encryption for Messenger in late 2016, which is a possible inflection point for this.

Also most (!) of this stuff circulates through FB Messenger, so plans to encrypt Messenger end-to-end have a dark implication. From the Times piece:


And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports.