AI-generated child sexual abuse material rises 14% in 2025: Safety watchdog
Internet Watch Foundation identifies over 8,000 AI-made CSAM files, with majority of videos classified in most severe category under UK law
ISTANBUL
The amount of AI-generated child sexual abuse material detected online rose 14% last year, with most videos classified in the most severe category, the Internet Watch Foundation reported Tuesday.
The safety watchdog, a British non-profit dedicated to removing child sexual abuse material (CSAM) from the internet, said it identified 8,029 AI-generated images and videos depicting realistic CSAM in 2025, noting a more than 260-fold surge in videos.
More than 3,400 of the AI-generated files were "full-motion" videos described as hyper-realistic, with multiple individuals able to appear and interact within the footage.
Some 65% of the 3,443 videos were classified as Category A, the most severe level under UK law. By comparison, 43% of non-AI videos fell into the same category, suggesting AI is being used to produce increasingly extreme material.
"We now face a technological landscape that can generate infinite violations with unprecedented ease," Kerry Smith, the IWF's CEO, said in the report.
"Advances in technology must not come at the expense of children's safety and well-being," Smith added, noting that while AI holds positive potential, its misuse can cause serious harm to children and that the material poses significant risks.
