British Technology Firms and Child Safety Agencies to Examine AI's Capability to Generate Exploitation Images
Under recently introduced British legislation, tech firms and child safety organizations will be granted authority to evaluate whether artificial intelligence tools can produce child exploitation images.
Significant Rise in AI-Generated Illegal Content
The announcement came as a child protection watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI models – the underlying systems behind conversational AI and image-generation tools – and verify that they have sufficient safeguards to stop them from producing images of child sexual abuse.
"Fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, noting: "Experts, under strict conditions, can now detect the danger in AI systems early."
Tackling Regulatory Challenges
The amendments were introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before they could act.
This legislation is designed to avert that problem by making it possible to stop the production of such images at source.
Legal Framework
The government is introducing the changes as amendments to criminal justice legislation, which also implements a ban on possessing, creating or distributing AI models designed to generate exploitative content.
Real-World Consequences
Recently, the minister toured the London headquarters of Childline and listened to a mock-up of a conversation with counsellors involving an account of AI-based exploitation. The interaction depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people facing blackmail online, it is a cause of extreme frustration in me and justified anger amongst parents," he stated.
Concerning Statistics
A leading online safety organization reported that instances of AI-generated abuse material – recorded as webpages, each of which may contain numerous images – had risen sharply so far this year.
Instances of the most severe category of material – the most serious form of exploitation – increased from 2,621 files to 3,086.
- Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a crucial step to ensure AI products are safe before they are released," stated the head of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be targeted all over again with just a simple actions, giving criminals the ability to make potentially limitless quantities of advanced, photorealistic exploitative content," she added. "Material which further exploits victims' trauma, and makes children, particularly girls, less safe both online and offline."
Counseling Session Data
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the conversations include:
- Employing AI to rate weight, physique and looks
- Chatbots dissuading young people from talking to trusted adults about harm
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated topics were discussed – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to psychological wellbeing, including using AI chatbots for support and AI therapy apps.