British Tech Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Content

Technology companies and child protection agencies will be granted authority to evaluate whether artificial intelligence systems can generate child abuse material under recently introduced British legislation.

Substantial Increase in AI-Generated Illegal Content

The announcement coincided with revelations from a safety watchdog that reports of AI-generated CSAM rose sharply in the past year, from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will allow designated AI developers and child safety organizations to inspect AI models – the foundational technology behind conversational AI and image generators – and ensure they have adequate safeguards to prevent them from creating images of child sexual abuse.

Kanishka Narayan said the measures were "ultimately about preventing abuse before it happens", adding: "Specialists, under rigorous protocols, can now detect the risk in AI systems early."

Tackling Legal Obstacles

The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties could not generate such images as part of a testing process. Previously, officials had to wait until AI-generated CSAM was published online before acting on it. The legislation aims to close that gap by allowing authorised testers to stop the creation of such material at its source.

Legal Structure

The amendments are being added to criminal justice legislation, which also introduces a prohibition on possessing, creating or distributing AI systems designed to create child sexual abuse material.

Practical Consequences

This week, the minister visited the London base of Childline and heard a simulated call to advisers involving an account of AI-based abuse.
The call depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I hear about young people facing extortion online, it is a cause of intense anger in me and rightful anger amongst families," he stated.

Alarming Data

A prominent online safety organization reported that instances of AI-generated abuse material – such as webpages that may contain multiple files – had more than doubled so far this year. Cases of category A content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

- Female children were predominantly targeted, accounting for 94% of prohibited AI images in 2025.
- Depictions of infants to toddlers rose from five in 2024 to 92 in 2025.

Sector Reaction

The law change could "constitute a vital step to guarantee AI products are safe before they are launched," stated the head of the online safety foundation.

"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, giving criminals the capability to create potentially endless quantities of sophisticated, photorealistic exploitative content," she added. "Content which further exploits survivors' trauma, and makes children, especially girls, more vulnerable both online and offline."

Counseling Session Information

Childline also published details of support interactions in which AI was mentioned. AI-related harms mentioned in the sessions include:

- Using AI to assess body size and appearance
- Chatbots dissuading children from talking to safe adults about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-faked pictures

Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.
Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and of AI therapy apps.