UK Tech Companies and Child Protection Agencies to Examine AI's Capability to Create Abuse Images

Tech firms and child safety organizations will be authorized to assess whether AI systems can produce child exploitation material under recently introduced British laws.

Significant Rise in AI-Generated Illegal Content

The declaration came alongside findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the amendments, the government will allow designated AI companies and child safety organizations to examine AI systems – the foundational systems for conversational AI and visual AI tools – and verify they have sufficient protective measures to prevent them from producing images of child sexual abuse.

The measure is "ultimately about preventing exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now identify the danger in AI models promptly."

Tackling Regulatory Challenges

The changes have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation regime. Until now, officials could only act after AI-generated CSAM had been published online.

This law aims to close that gap by helping to stop the production of such material at source.

Legislative Framework

The changes are being introduced by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on possessing, producing or distributing AI models developed to create exploitative content.

Real-World Consequences

Recently, the minister visited the London base of a children's helpline and listened to a mock-up call to counsellors featuring a report of AI-based abuse. The interaction portrayed a teenager requesting help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about young people experiencing blackmail online, it is a cause of extreme anger in me and rightful concern amongst parents," he said.

Concerning Statistics

A prominent online safety foundation stated that reports of AI-generated abuse material – including web pages that may each contain numerous images – had significantly increased so far this year.

Instances of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are launched," commented the head of the online safety organization.

"Artificial intelligence systems have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving offenders the capability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits survivors' trauma, and renders children, especially girls, more vulnerable both online and offline."

Counseling Session Information

The children's helpline also published details of counselling sessions in which AI was referenced. AI-related risks discussed in the sessions include:

  • Using AI to rate body size, physique and looks
  • AI assistants dissuading young people from consulting safe adults about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and associated topics were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 sessions concerned psychological wellbeing, including the use of AI assistants for support and AI therapy applications.

Scott Booth

A fintech expert with over a decade in blockchain technology and digital asset management.