Elon Musk’s Grok AI generates images of ‘minors in minimal clothing’
Lapses in safeguards led to wave of sexualized images this week as xAI says it is working to improve systems
Nick Robins-Early and agency
Fri 2 Jan 2026 18.01 GMT
https://www.theguardian.com/technology/2026/jan/02/elon-musk-grok-ai-children-photos
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said in a post on X in response to a user. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” xAI posted to the @Grok account on X, referring to child sexual abuse material.
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s generation of sexualized images appeared to lack safety guardrails, allowing minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, although it said “no system is 100% foolproof”, adding that xAI was prioritising improvements and reviewing details shared by users.
When contacted for comment by email, xAI replied with the message: “Legacy Media Lies”.
The problem of AI being used to generate child sexual abuse material is a longstanding issue in the artificial intelligence industry. A 2023 Stanford study found that a dataset used to train a number of popular AI image-generation tools contained over 1,000 CSAM images. Training AI on images of child abuse can allow models to generate new images of children being exploited, experts say.
Grok also has a history of failing to maintain its safety guardrails and posting misinformation. In May of last year, Grok began posting about the far-right conspiracy theory of “white genocide” in South Africa on posts with no relation to the concept. xAI also apologized in July after Grok began posting rape fantasies and antisemitic material, including calling itself “MechaHitler” and praising Nazi ideology. The company nevertheless secured a nearly $200m contract with the US Department of Defense a week after the incidents.
