The Internet Watch Foundation (IWF) says its analysts have found “criminal imagery” of girls aged between 11 and 13 which “appears to have been created” using Grok.
The AI tool is owned by Elon Musk’s company xAI. It can be accessed both via its website and app, and through the social media platform X.
The IWF said it found “sexualised and topless imagery of girls” on a “dark web forum” where users claimed they had used Grok to create the imagery.
The BBC has approached X and xAI for remark.
The IWF’s Ngaire Alexander told the BBC tools like Grok now risked “bringing sexual AI imagery of children into the mainstream”.
She said the material would be classified as Category C under UK law – the lowest severity of criminal material.
But she said the user who uploaded it had then used a different AI tool, not made by xAI, to create a Category A image – the most serious category.
“We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM),” she said.
The charity, which aims to remove child sexual abuse material from the internet, operates a hotline where suspected CSAM can be reported, and employs analysts who assess the legality and severity of that material.
Its analysts found the material on the dark web – the images were not found on the social media platform X.
X and xAI were previously contacted by Ofcom, following reports Grok could be used to make “sexualised images of children” and undress women.
The BBC has seen several examples on the social media platform X of people asking the chatbot to alter real photos to make women appear in bikinis without their consent, as well as putting them in sexual situations.
The IWF said it had received reports of such images on X, however these had not so far been assessed as meeting the legal definition of CSAM.
In a previous statement, X said: “We take action against illegal content on X, including CSAM, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”