UK Technology Companies and Child Protection Officials to Test AI's Ability to Generate Exploitation Content

Technology companies and child protection organizations will receive authority to evaluate whether AI systems can generate child abuse material under new British legislation.

Significant Rise in AI-Generated Harmful Material

The announcement came alongside figures from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, the government will permit designated AI companies and child protection groups to examine AI models – the underlying systems for chatbots and visual AI tools – and verify they have sufficient safeguards to stop them from producing images of child exploitation.

"This is fundamentally about preventing exploitation before it happens," stated the minister for AI and online safety, noting: "Under rigorous conditions, specialists can now detect the danger in AI systems early."

Addressing Regulatory Challenges

The amendments address a legal obstacle: because creating and possessing CSAM is illegal, AI developers and others could not generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM had been uploaded online before they could act.

The new law aims to close that gap by allowing the production of such material to be stopped at source.

Legislative Framework

The changes are being introduced as amendments to criminal justice legislation, which also bans possessing, producing or sharing AI systems designed to create child sexual abuse material.

Practical Impact

This week, the minister visited Childline's London headquarters and listened to a mock-up call to advisers featuring a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself created using AI.

"When I hear about children facing blackmail online, it fills me with intense anger, and it rightly angers families," he stated.

Alarming Data

A leading internet monitoring foundation stated that reports of AI-generated abuse material – each of which may refer to a web page containing numerous files – had more than doubled so far this year.

Instances of category A content – the most serious form of exploitation – increased from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are launched," commented the head of the online safety organization.

"AI tools have made it possible for victims to be targeted all over again with just a few clicks, giving offenders the ability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' trauma and makes young people, especially girls, more vulnerable both online and offline."

Support Session Data

Childline has also released data from counselling sessions in which AI was mentioned. The AI-related risks raised in those conversations include:

  • Using AI to rate body size, shape and appearance
  • AI assistants discouraging children from talking to safe adults about harm
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.

Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Alexander Montes