British Technology Firms and Child Protection Officials to Examine AI's Ability to Generate Abuse Content
Tech firms and child protection organizations will receive authority to assess whether artificial intelligence systems can generate child exploitation images under new UK legislation.
Significant Increase in AI-Generated Illegal Material
The declaration coincided with revelations from a safety watchdog showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, the authorities will permit approved AI developers and child protection groups to inspect AI models – the foundational systems for chatbots and visual AI tools – and verify they have sufficient safeguards to prevent them from producing images of child exploitation.
"This is ultimately about preventing exploitation before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect risks in AI systems early."
Addressing Legal Obstacles
The amendments address the fact that it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of an evaluation process. Until now, officials had to wait until AI-generated CSAM was published online before acting against it.
This legislation is designed to prevent that problem by making it possible to halt the production of those images at their origin.
Legal Structure
The changes are being introduced by the government as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or sharing AI systems designed to create child sexual abuse material.
Real-World Consequences
This week, the official toured the London base of Childline and heard a mock-up of a call to advisors involving an account of AI-based abuse. The call depicted an adolescent requesting help after being blackmailed with an explicit deepfake of himself, constructed using AI.
"When I learn about children facing blackmail online, it causes extreme frustration in me and justified anger amongst parents," he stated.
Alarming Data
A leading online safety organization reported that instances of AI-generated exploitation content – such as online pages that may contain numerous images – had more than doubled so far this year.
Instances of the most severe content – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The law change could "represent a vital step to guarantee AI tools are secure before they are released," stated the head of the online safety organization.
"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Content which further exploits survivors' suffering, and renders children, especially girls, less safe on and offline."
Counseling Session Information
The children's helpline also published details of counselling interactions where AI has been mentioned. AI-related risks discussed in the sessions include:
- Using AI to evaluate body size and looks
- Chatbots discouraging children from consulting safe guardians about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, Childline conducted 367 counselling interactions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.
Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.