Technology companies and child protection organizations will be given the authority to test whether artificial intelligence tools can generate child sexual abuse material under newly introduced UK legislation.
The announcement came as the Internet Watch Foundation, a safety watchdog, published findings showing that reports of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will allow designated AI developers and child safety groups to inspect AI models – the underlying systems behind chatbots and image generators – and check that they have adequate safeguards to prevent them from creating images of child exploitation.
"Fundamentally about preventing exploitation before it happens," declared Kanishka Narayan, noting: "Experts, under strict conditions, can now detect the danger in AI systems promptly."
The changes are needed because it is illegal to produce and possess CSAM, which has meant that AI developers and other parties could not create such content as part of a testing regime. Until now, officials have had to wait until AI-generated CSAM was published online before dealing with it.
The new law aims to head off that problem by enabling authorized testers to halt the production of such images at source.
The amendments are being introduced to the Crime and Policing Bill, which also brings in a ban on possessing, producing or distributing AI systems designed to generate CSAM.
Narayan recently visited the London headquarters of Childline and listened to a mock call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about children experiencing blackmail online, it is a cause of extreme anger in me and rightful concern amongst families," he stated.
The Internet Watch Foundation said that instances of AI-generated abuse content – reports that can include webpages containing numerous files – had more than doubled so far this year.
The number of category A items – the most severe form of abuse material – rose from 2,621 images or videos to 3,086.
The legislative amendment could "represent a vital step to ensure AI products are secure before they are launched", said the head of the foundation.
"Artificial intelligence systems have enabled so victims can be targeted all over again with just a simple actions, giving offenders the ability to create potentially endless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which additionally commodifies victims' suffering, and makes children, particularly girls, more vulnerable on and off line."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.