We addressed the darker side of artificial intelligence in this article, which explains, on the basis of data collected by the Meter association, how the technology can be used to abuse and deceive minors. We return to the topic because, according to Meter, a clear threat is now posed by the UnlucidAI platform, which can virtually undress images of minors and create child pornography content with artificial intelligence. Nor is this confined to the dark web: the content is reportedly also disseminated on the ordinary web, through encrypted groups run by paedophiles. These environments host communities that exchange illegal content while circumventing traditional controls.
What sets UnlucidAI apart from similar applications Meter has reported in the past, such as Bikini Off, is that it is being used to manipulate images of minors and even infants.
Although UnlucidAI states in its terms of use that it prohibits the generation of images depicting child abuse, it does not technically prevent the creation of such content. Users are thus able to produce and share synthetic child pornography, taking advantage of the absence of effective blocks and real controls on image generation.
UnlucidAI is neither a start-up nor a traditionally structured company, and it offers no corporate transparency. It is, rather, a web platform providing artificial intelligence tools for generating, editing and animating images and videos, presenting itself as ‘AI uncensored’ and allowing users to create content without filters or significant technical limitations. Its public materials and official website address a generic audience of ‘dreamers’ and digital creatives, with no mention of a management team, founders, clear corporate information, or who actually runs the platform.
In a statement, Meter condemned the platform, described by child pornographers as “a tool for creating uncensored content”, which uses generative artificial intelligence to manipulate photos of children, digitally reconstructing their bodies in a false but realistic way and exposing them.
Meter has also recently presented the first dossier analysing how artificial intelligence is exploited to generate child sexual abuse material (CSAM), alter images and facilitate the online grooming and manipulation of minors.
The use of AI for child pornography in Italy has grown significantly since December 2024, with almost three thousand cases confirmed in just six months. The production and exchange of new AI-generated or AI-manipulated content is facilitated by the fact that these technologies are accessible even to users without particular technical skills. The material is distributed on encrypted channels and platforms such as Signal and Telegram, or in areas of the ordinary web run by illegal communities, and therefore no longer only on the dark web. These environments are extremely difficult to police, not least because they also exploit chatbots, used both to distribute images and to interact with minors for grooming purposes.
According to the dossier, 92.2% of teenagers have had contact with chatbots, and over 50% would be unable to recognise a deepfake video.
Meter calls for immediate action by Italian and European institutions, as well as direct intervention by hosting platforms, social networks and search engines to identify and block any attempt to spread this criminal tool.
“It’s not just about fighting a digital service, it’s a cultural, ethical and legal battle. Those who develop or distribute these technologies are complicit in very serious crimes,” says Fortunato Di Noto, founder of Meter. “We are facing a crime without contact.”