Tech firms and child protection agencies will be granted permission to evaluate whether AI tools can produce child exploitation images under recently introduced British legislation.
The declaration coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Under the amendments, the authorities will permit designated AI developers and child safety groups to examine AI models – the foundational systems behind chatbots and image generators – and ensure they have sufficient safeguards to prevent them from creating images of child sexual abuse.
The law is "fundamentally about preventing exploitation before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now detect the risk in AI models promptly."
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties could not generate such images as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is designed to avert that issue by helping to halt the creation of those images at their origin.
The changes are being introduced by the government as modifications to the crime and policing bill, which is also implementing a ban on possessing, creating or sharing AI systems developed to generate exploitative content.
This week, the official toured the London base of Childline and heard a simulated call to advisers involving an account of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children experiencing extortion online, it is a source of extreme frustration for me and of rightful concern among parents," he said.
A leading internet monitoring organisation stated that instances of AI-generated exploitation content – such as web pages, each of which may contain numerous files – had more than doubled so far this year.
Instances of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
The legislative amendment could "represent a crucial step to ensure AI tools are secure before they are released," commented the head of the online safety organization.
"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving criminals the capability to create possibly limitless quantities of sophisticated, lifelike exploitative content," she continued. "Content which additionally commodifies victims' trauma, and renders young people, particularly girls, less safe both online and offline."
The children's helpline also published details of support sessions in which AI was mentioned.
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and associated topics were discussed, four times as many as in the equivalent period last year.
Fifty percent of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.