Images of child sexual abuse already are rife on the dark web and the problem could explode now that AI has the power to create deepfake images, the U.K.’s Internet Watch Foundation has warned.

Unless a way is found to prevent AI from disseminating real images and creating new ones, society risks the normalization of child sexual exploitation and expanding the pool of victims, the foundation said.

“We’re not talking about the harm it might do,” Dan Sexton, the foundation’s chief technology officer, told the Associated Press. “This is happening right now and it needs to be addressed right now.”

In South Korea, a man was sentenced in September to 30 months in prison for using AI to create 360 images of child pornography. In Spain, police are investigating teens' use of AI to create and circulate fake nude photos of their classmates.

Prowling the dark web, the foundation found child abusers chatting about how easy it is to use AI to churn out child pornography. Abusers often trade such images among themselves or sell them.

“What we’re starting to see is this explosion of content,” Sexton said. 

If it isn't stopped, the flood of deepfake child sexual abuse images could bog down investigators trying to rescue children who turn out to be virtual characters, the AP noted.

OpenAI’s image generator DALL-E incorporates safeguards that block the creation of such images, Sexton noted, while open-source tools such as Stable Diffusion, from developer Stability AI, have been favored among child pornographers because they lacked adequate blockers.

Stability AI has since added safeguards and its user license now bans illegal uses but, as Sexton noted, “the genie is already out of the bottle.”

The older versions of Stable Diffusion are “overwhelmingly the software of choice … for people creating explicit content involving children,” David Thiel, chief technologist of the Stanford Internet Observatory, said to the AP.

The IWF report acknowledges the difficulty of trying to criminalize AI image-generating tools themselves, even those “fine-tuned” to produce abusive material.

“You can’t regulate what people are doing on their computers, in their bedrooms. It’s not possible,” Sexton added. “So how do you get to the point where they can’t use openly available software to create harmful content like this?”

Most AI-generated child sexual abuse images would be considered illegal under existing laws in the U.S., U.K. and elsewhere, but it remains to be seen whether law enforcement has the tools to combat them.

“We are seeing children groomed, we are seeing perpetrators make their own imagery to their own specifications, we are seeing the production of AI imagery for commercial gain – all of which normalizes the rape and abuse of real children,” Ian Critchley, chief of child protection with the U.K.’s National Police Chiefs’ Council, said in comments quoted by the AP.

TRENDPOST: Strengthening AI’s internal protections and firewalls will be a permanent, ongoing project. Meanwhile, the dark web will continue to exist and, with AI’s help, it and the illegal activities that live there will thrive.
