British Tech Companies and Child Protection Officials to Test AI's Capability to Create Abuse Content
Tech firms and child protection organizations will be granted permission to assess whether artificial intelligence systems can generate child abuse material under recently introduced UK laws.
Significant Rise in AI-Generated Harmful Material
The declaration coincided with revelations from a protection watchdog showing that reports of AI-generated CSAM have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will allow approved AI developers and child safety groups to inspect AI systems – the underlying systems for conversational AI and image generators – and verify they have adequate safeguards to stop them from creating depictions of child exploitation.
"This is fundamentally about stopping abuse before it happens," stated Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI models early."
Tackling Regulatory Obstacles
The changes have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and others could not generate such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was published online before taking action.
This law is designed to prevent that issue by enabling authorities to halt the production of such material at source.
Legal Framework
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI models designed to create exploitative content.
Real-World Consequences
This week, the minister toured the London base of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based abuse. The interaction depicted an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I hear about children experiencing blackmail online, it is a source of extreme frustration for me and rightful anger amongst parents," he said.
Concerning Data
A prominent online safety foundation stated that cases of AI-generated abuse content – such as online pages that may include numerous images – had significantly increased so far this year.
Instances of category A material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a crucial step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving criminals the ability to produce potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further commodifies victims' suffering, and renders young people, especially girls, more vulnerable both online and offline."
Counseling Session Information
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in these sessions include:
- Using AI to evaluate body size, physique and looks
- Chatbots dissuading young people from talking to safe adults about abuse
- Facing harassment online with AI-generated material
- Digital extortion using AI-faked images
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and associated terms were mentioned, significantly more than in the equivalent period last year.
Half of the AI references in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.