A woman says she felt “dehumanised and reduced into a sexual stereotype” after an image-editing feature linked to Elon Musk’s Grok AI was used to digitally remove her clothing.
The BBC has seen multiple posts on X in which users ask Grok to alter photos of women so they appear in bikinis or sexualised scenarios without their consent.
xAI, the company behind Grok, did not provide a substantive response to questions, replying only with an automated message stating "legacy media lies".
Samantha Smith, whose photo was edited, wrote on X about her experience and was contacted by others who said similar things had happened to them. Some users then asked Grok to generate more altered images of her.
“Women are not consenting to this,” she said. “While it wasn’t me that was in states of undress, it looked like me and it felt like me and it felt as violating as if someone had actually posted a nude or a bikini picture of me.”
A Home Office spokesperson said the government was legislating to ban nudification tools. Under a proposed new criminal offence, anyone supplying such technology would “face a prison sentence and substantial fines”.
UK media regulator Ofcom said technology platforms must “assess the risk” of people in the UK viewing illegal content, but did not say whether it was currently investigating X or Grok over AI-generated images.
Grok is a free AI assistant, with some premium features, that responds to prompts from X users who tag it in posts. As well as generating text responses, users can employ its image-editing function to manipulate uploaded photos.
The tool has faced criticism for enabling the creation of sexualised and nude images, and was previously accused of generating a sexually explicit clip of the singer Taylor Swift.
Clare McGlynn, a law professor at Durham University, said X and Grok “could prevent these forms of abuse if they wanted to”, arguing they “appear to enjoy impunity”.
“The platform has been allowing the creation and distribution of these images for months without taking any action and we have yet to see any challenge by regulators,” she said.
xAI's acceptable use policy forbids "depicting likenesses of persons in a pornographic manner".
In a statement to the BBC, Ofcom said it is illegal to “create or share non-consensual intimate images or child sexual abuse material”, and confirmed this includes sexual deepfakes made with AI.
The regulator said platforms such as X are required to take “appropriate steps” to “reduce the risk” of UK users encountering illegal content and to remove such material quickly once they are made aware of it.
Additional reporting by Chris Vallance.
