Tech firms and child safety agencies will be granted authority to assess whether AI systems can generate child exploitation material under new UK laws.
The announcement came as a child protection monitoring body published findings showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, approved AI developers and child protection organisations will be allowed to scrutinise AI models – the underlying technology behind chatbots and image-generation tools – to ensure they have sufficient safeguards to prevent them from producing images of child sexual abuse.
"Fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now detect the risk in AI systems promptly."
The amendments were needed because it is illegal to create and possess CSAM, meaning AI developers and other parties could not generate such images as part of any testing regime. Previously, officials had to wait until AI-generated CSAM appeared online before they could address it.
The legislation is designed to prevent that by helping to stop the creation of such images at source.
The government is introducing the changes as amendments to criminal justice legislation, which will also ban possessing, creating or distributing AI models designed to generate child sexual abuse material.
The minister recently visited the London headquarters of a children's helpline, where he listened to a mock-up of a call to counsellors describing AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexually explicit deepfake of themselves, created using AI.
"When I learn about children experiencing blackmail online, it is a cause of extreme frustration in me and justified anger amongst parents," he said.
A prominent online safety foundation said that reports of AI-generated abuse material – each of which can refer to a webpage containing numerous images or videos – had more than doubled so far this year.
Instances of category A material – the most severe category of abuse imagery – rose from 2,621 images or videos to 3,086.
The legislative amendment could "be a vital step to ensure AI tools are safe before they are launched," said the chief executive of the internet monitoring organisation.
"Artificial intelligence systems have enabled so victims can be victimised all over again with just a simple actions, providing offenders the capability to create potentially endless quantities of advanced, photorealistic exploitative content," she continued. "Material which additionally exploits victims' trauma, and makes young people, especially girls, less safe both online and offline."
The children's helpline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.