Amazon Flags Over a Million AI-Related CSAM Reports, Sources Remain Murky
Okay, so here's a disturbing headline: Amazon reported a staggering one million AI-related child sexual abuse material (CSAM) incidents to the National Center for Missing and Exploited Children (NCMEC) in 2025. Yes, you read that right. A million.
Where did all this garbage come from? According to a Bloomberg investigation, Amazon claims that the "vast majority" of it came from external sources used to train their AI models. But here's the kicker – they're being super vague about exactly what those sources are.
Fallon McNulty, who's the executive director of NCMEC’s CyberTipline, made a pretty solid point. She said that receiving such a high volume of reports raises major questions about where the data came from and whether enough safeguards are in place. What's even more frustrating is that, unlike other companies, Amazon's reports lacked actionable data. Basically, NCMEC couldn't pass anything along to law enforcement because Amazon wasn't naming names.
Amazon, in their defense, said they take a "deliberately cautious approach" to scanning training data and that they aim to over-report to avoid missing any cases. They also claim they removed the CSAM before it was fed into the AI models. Which is good. But still... a million reports. It's hard to ignore the sheer scale of the issue.
Safety concerns around AI and minors have been growing, and you can see it in the numbers. AI-related CSAM reports jumped from just 4,700 in 2023 to 67,000 in 2024. That kind of rise is alarming, to say the least. It seems like AI is accelerating the creation and spread of this horrible content.
And it's not just about training data. AI chatbots have also been implicated in some awful situations. We're talking about lawsuits against companies like OpenAI and Meta over chatbots that allegedly helped teenagers plan suicides or exposed them to sexually explicit content. It seems these platforms aren't doing enough to protect young users.
Look, I get that AI is the future, but this situation is unacceptable. We need transparency from these tech giants. If they're finding this much CSAM in their training data, they need to tell us where it's coming from so we can stop it at the source. Otherwise, we're just playing whack-a-mole with a problem that's only going to get worse.
Source: Engadget