
At least 25 people have been arrested in a worldwide operation targeting artificial intelligence-generated child abuse images, according to Europol, the European Union’s law enforcement agency.
The suspects belonged to a criminal network involved in distributing completely AI-generated images of minors. According to Europol, this operation, named “Cumberland,” is among the first to target such AI-created child sexual abuse material (CSAM). The agency noted that the absence of specific national legislation against these crimes created “exceptionally challenging” conditions for investigators.
The arrests occurred simultaneously on Wednesday, February 26, in an operation led by Danish law enforcement with involvement from authorities in at least 18 other countries. Europol stated the operation remains active, with additional arrests anticipated in the coming weeks.
Beyond the initial arrests, investigators have identified 272 suspects, conducted 33 house searches, and seized 173 electronic devices. The primary suspect, a Danish national arrested in November 2024, allegedly “ran an online platform where he distributed the AI-generated material he produced.” Users worldwide could access the platform and view abusive content after making a “symbolic online payment” to receive a password.
Europol emphasized that online child sexual exploitation remains one of the top priorities for EU law enforcement agencies, which face “an ever-growing volume of illegal content.” The agency stressed that even when content is entirely artificial without depicting real victims, as in Operation Cumberland, “AI-generated CSAM still contributes to the objectification and sexualisation of children.”
Catherine De Bolle, Europol’s executive director, highlighted the accessibility of this technology: “These artificially generated images are so easily created that they can be produced by individuals with criminal intent, even without substantial technical knowledge.” She indicated that law enforcement would need to develop “new investigative methods and tools” to address these emerging challenges.
The Internet Watch Foundation (IWF) warns that AI-generated sexual abuse images of children are increasing and becoming more common on the open web. Research conducted by the charity last year found 3,512 AI-generated child sexual abuse and exploitation images on a single dark web site over a one-month period. Images in the most severe category (Category A) had increased by 10% compared with the previous year.
Experts caution that AI-generated child sexual abuse material can appear extremely realistic, making it difficult to distinguish from authentic images.