Woman felt ‘dehumanised’ after Musk’s Grok AI used to digitally remove her clothes | EUROtoday


A woman has told the BBC she felt “dehumanised and reduced into a sexual stereotype” after Elon Musk’s AI chatbot Grok was used to digitally remove her clothes.

The BBC has seen a number of examples on the social media platform X of people asking the chatbot to undress women to make them appear in bikinis without their consent, as well as placing them in sexual situations.

xAI, the company behind Grok, did not respond to a request for comment, other than with an automatically generated reply stating “legacy media lies”.

Samantha Smith shared a post on X about her image being altered, which was met with comments from others who had experienced the same, before other users asked Grok to generate more images of her.

“Women are not consenting to this,” she said.

“While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”

A Home Office spokesperson said it was legislating to ban nudification tools, and that under a new criminal offence, anyone who offered such technology would “face a prison sentence and substantial fines”.

The regulator Ofcom said tech firms must “assess the risk” of people in the UK viewing illegal content on their platforms, but did not confirm whether it was currently investigating X or Grok in relation to AI images.

Grok is a free AI assistant, with some paid-for premium features, which responds to X users’ prompts when they tag it in a post.

It is often used to provide a response or extra context to other posters’ remarks, but people on X are also able to edit an uploaded image through its AI image-editing feature.

It has been criticised for allowing users to generate images and videos with nudity and sexualised content, and it was previously accused of making a sexually explicit clip of Taylor Swift.

Clare McGlynn, a law professor at Durham University, said X or Grok “could prevent these forms of abuse if they wanted to”, adding that they “appear to enjoy impunity”.

“The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators,” she said.

xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.

In a statement to the BBC, Ofcom said it was illegal to “create or share non-consensual intimate images or child sexual abuse material” and confirmed this included sexual deepfakes created with AI.

It said platforms such as X were required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content on their platforms, and to take it down quickly once they become aware of it.

Additional reporting by Chris Vallance.

https://www.bbc.com/news/articles/c98p1r4e6m8o?at_medium=RSS&at_campaign=rss