
Abusers Exploit AI “Deepfakes” for Child Sextortion Blackmail


29 June 2024

Regulating Explicit AI-Generated Content in the Digital Age: A Crucial Battleground

The emergence of artificial intelligence (AI) has brought about revolutionary changes across various industries. However, the same technology has also introduced unprecedented challenges, as evidenced by the disturbing use of AI in generating “deepfakes” that facilitate sextortion and other forms of abuse.

A harrowing trend has come to light: child abusers are weaponizing AI video-generation tools to create deepfakes of their victims, then using those fakes to coerce them into producing real self-abuse material, a vicious cycle that can persist for years. Producing such simulated abuse imagery is already illegal in the UK, and both Labour and the Conservatives support going further by outlawing all explicit AI-generated images of real people. Yet the international community has no consensus on regulating AI technologies, and the fundamental problem remains: generating this content is a button click away, a capability baked into the very foundations of AI image generation.

In December, Stanford University researchers made a grim discovery within Laion-5B, one of the largest training sets used by AI image generators. Among its five billion images they identified hundreds, possibly thousands, containing child sexual abuse material (CSAM). Reviewing a collection that size by hand would take longer than a lifetime, so the researchers relied on automated scanning, matching suspect images against law-enforcement records and forwarding potential matches to the authorities.
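
In outline, that kind of scan computes a fingerprint for each image and checks it against a list of hashes of known abusive material maintained by law-enforcement and child-safety bodies. The Python sketch below is a minimal illustration of the idea, not the Stanford pipeline: the known_hashes set is hypothetical, and a plain SHA-256 digest stands in for the perceptual hashes (such as PhotoDNA or PDQ) that real systems use because they tolerate resizing and re-encoding.

    import hashlib
    from pathlib import Path

    # Hypothetical hash list of known abusive images, of the kind supplied
    # to platforms by law-enforcement and child-safety organizations.
    known_hashes = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(image_bytes: bytes) -> str:
        # Real pipelines use perceptual hashes that survive resizing and
        # re-encoding; SHA-256 merely keeps this sketch self-contained.
        return hashlib.sha256(image_bytes).hexdigest()

    def scan_directory(image_dir: str) -> list[Path]:
        # Flag files whose fingerprint matches a known hash. A real scanner
        # would report matches to the authorities, never retain or share them.
        return [
            path
            for path in Path(image_dir).iterdir()
            if path.is_file() and fingerprint(path.read_bytes()) in known_hashes
        ]

A list-based match like this can only recognize material that has already been catalogued; spotting novel content requires trained classifiers and, ultimately, human review.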

Upon discovery, Laion's creators swiftly withdrew the dataset from download, stressing that they had never distributed such images directly: the dataset consists of URLs pointing to images hosted elsewhere on the internet. By the time of the Stanford study, roughly one third of those links were already dead, leaving the true count of CSAM images uncertain. The damage from their earlier presence, however, cannot be undone; whatever the models learned from that training data remains encoded in their neural networks.
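
Link rot also constrains anyone auditing a URL-based dataset: before an image can be checked, it has to still exist. A rough liveness check over a sample of links might look like the sketch below; the urls list is whatever slice an auditor has extracted, and nothing here reflects Laion's actual schema.

    import requests

    def count_live_links(urls: list[str], timeout: float = 5.0) -> tuple[int, int]:
        # A HEAD request tests whether an image is still hosted without
        # downloading its body.
        live, dead = 0, 0
        for url in urls:
            try:
                resp = requests.head(url, timeout=timeout, allow_redirects=True)
                if resp.status_code == 200:
                    live += 1
                else:
                    dead += 1
            except requests.RequestException:
                dead += 1
        return live, dead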

The challenge extends beyond Laion. Open datasets like it, assembled by volunteers and made freely available, have been widely used to train AI tools, the 2022 image generator Stable Diffusion being a notable example. By contrast, OpenAI takes a more guarded approach, disclosing little about the image sources for Dall-E 3 and claiming to have filtered out sexually explicit content, a claim users must take at face value.

The difficulty of guaranteeing an untainted dataset is one reason OpenAI keeps its technology proprietary: Dall-E 3 cannot be downloaded and run on personal hardware but only through the company's systems. Companies like OpenAI and Google also layer on further safeguards, filtering both user requests and generated images to prevent their platforms from being misused.
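
In outline, such a safeguard is a two-stage gate: screen the prompt before anything is generated, then screen the output image before it is returned. The sketch below shows only the shape of that pipeline; the keyword denylist and the image_safety_score stand-in are hypothetical, not any vendor's actual filter, which would rely on trained classifiers.

    from typing import Callable, Optional

    # Hypothetical denylist; production filters use trained classifiers,
    # not keyword matching.
    BLOCKED_TERMS = {"nude", "explicit"}

    def prompt_allowed(prompt: str) -> bool:
        # First gate: refuse the request before any image is generated.
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def image_safety_score(image: bytes) -> float:
        # Second gate: stand-in for a hosted safety classifier scoring
        # generated output from 0.0 (safe) to 1.0 (unsafe).
        return 0.0

    def generate_safely(prompt: str, generate: Callable[[str], bytes]) -> Optional[bytes]:
        if not prompt_allowed(prompt):
            return None  # refused up front
        image = generate(prompt)
        if image_safety_score(image) > 0.5:
            return None  # unsafe output suppressed
        return image

Both gates can be enforced only because the model runs on the company's servers; a model downloaded to personal hardware can have any such wrapper stripped away.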

AI safety specialists argue that control must go beyond simply keeping explicit material out of training data: a model never exposed to such content may fail to recognize or correctly report it when encountered in the real world. Kirsty Innes, director of tech policy at Labour Together, makes the case for keeping AI development open, since openness may itself hold key solutions for mitigating future risks.

In the near term, proposed bans target purpose-built “nudification” tools, and the policy recommendations remain relatively narrow. Still, the battle against explicit AI images raises the broader question common to other hard AI dilemmas: how do we impose limits on systems whose inner workings we do not fully understand?

To stay informed and join the discussion on AI technologies, subscribe to ai-headlines.co. We bring you the latest AI news and coverage of AI image and text generation tools, their potential, and the pressing need for ethical oversight and responsible AI conduct in the digital ecosystem.