
Dark web users cite Grok as tool for making ‘criminal imagery’ of kids, UK watchdog says


A British organization dedicated to stopping child sexual abuse online said Wednesday that its researchers observed dark web users sharing “criminal imagery” that the users said was created by Elon Musk’s artificial intelligence tool Grok.

The images, which the group said included topless pictures of underage girls, appear to be more extreme than those described in recent reports that Grok had created images of children in revealing clothing and sexualized scenarios.

The Internet Watch Foundation, which for years has warned about AI-generated images of child sexual abuse, said in a statement that the images had spread onto a dark web forum where users talked about Grok’s capabilities. It said the images were unlawful and that it was unacceptable for Musk’s company xAI to release such software.

“Following reports that the AI chatbot Grok has generated sexual imagery of children, we can confirm our analysts have discovered criminal imagery of children aged between 11 and 13 which appears to have been created using the tool,” Ngaire Alexander, head of hotline at the Internet Watch Foundation, said in the statement.

Because child abuse material is unlawful to make or possess, people who are interested in trading or selling it often use software designed to mask their identities or communications in setups that are sometimes called the dark web.

Like the U.S.-based National Center for Missing & Exploited Children, the Internet Watch Foundation is one of a handful of organizations in the world that partners with law enforcement to work to take down child abuse material in dark and open web spaces.

Groups like the Internet Watch Foundation can, under strict protocols, assess suspected child sexual abuse material and refer it to law enforcement and platforms for removal.

xAI did not immediately respond to a request for comment on Wednesday.

The statement comes as xAI faces a torrent of criticism from government regulators around the world in connection with images produced by its Grok software over the past several days. That followed a Reuters report on Friday that Grok had created a flood of deepfake images sexualizing children and nonconsenting adults on X, Musk’s social media app.

In December, Grok released an update that seemingly facilitated what has since become a trend on X: asking the chatbot to remove clothing from other users’ photos.

Typically, major creators of generative AI systems have attempted to add guardrails to prevent users from sexualizing photos of identifiable people, but users have found ways to make such material using workarounds, smaller platforms and some open-source models.

Elon Musk and xAI have stood apart among major AI players by openly embracing sex on their AI platforms, including sexually explicit chat modes for their chatbots.

Child sexual abuse material (CSAM) has been one of the most serious concerns among creators of generative AI in recent years, as mainstream AI companies struggle both to weed out CSAM from the image-training data for their models and to impose guardrails adequate to prevent their systems from creating new CSAM.

On Saturday, Musk wrote, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” in response to another user’s post defending Grok from criticism over the controversy. Grok’s terms of use specifically forbid the sexualization or exploitation of children.

Ofcom, the British regulator, said in a statement on Monday that it was aware of concerns raised in the media and by victims about a feature on X that produces undressed images of people and sexualized images of children. “We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK. Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation,” Ofcom said.

The U.S. Justice Department said in a statement Wednesday, in response to questions about Grok producing sexualized imagery of people, that the issue was a priority, though it did not mention Grok by name.

“The Department of Justice takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM,” a spokesperson said. “We continue to explore ways to optimize enforcement in this space to protect children and hold accountable individuals who exploit technology to harm our most vulnerable.”

Alexander, from the Internet Watch Foundation, said abuse material from Grok was spreading.

“The imagery we have seen so far is not on X itself, but a dark web forum where users claim they have used Grok Imagine to create the imagery, which includes sexualised and topless imagery of girls,” she said in her statement.

She said the imagery traced to Grok “would be considered Category C imagery under UK law,” the third most-serious type of imagery. She added that a user on the dark web forum was then observed using “the Grok imagery as a jumping off point to create much more extreme, Category A, video using a different AI tool.” She did not name the different tool.

“The harms are rippling out,” she said. “There is no excuse for releasing products to the global public which can be used to abuse and hurt people, especially children.”

She added: “We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material. Tools like Grok now risk bringing sexual AI imagery of children into the mainstream. That is unacceptable.”