What to Know
Elon Musk’s artificial intelligence chatbot Grok has come under scrutiny after users reportedly generated explicit images of children, including one of a child actress.
Axios reported that users on the social media platform X used the AI chatbot Grok to digitally remove the 14-year-old star’s clothing over the past few days. syracuse.com is not naming the actress due to her age.
Additionally, there has been an uptick in reports of users prompting Grok to remove clothing from, or add bikinis to, images of other women, including rapper Iggy Azalea, who, although an adult, did not consent to these sexualized images. The “Fancy” rapper took to X on Friday to air her frustrations, writing: “Grok seriously needs to go.”
The incidents have raised concerns about AI safety, particularly as the chatbot is authorized for official government use through an 18-month contract with the Trump administration.
In a Thursday post on X, Grok acknowledged that there have been “isolated cases where users prompted for and received AI images depicting minors in minimal clothing.”
A separate post from the chatbot on Friday warned that xAI, Grok’s parent company, could face “potential DOJ probes or lawsuits” for producing these images.
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—[child sexual abuse material] is illegal and prohibited,” Grok posted on X, formerly known as Twitter. The generated images appear to violate Grok’s own terms of service, which prohibit the sexualization of children.
The images may also run afoul of the Take It Down Act, which President Trump and First Lady Melania Trump signed in May 2025. That law made it illegal to “knowingly publish” or threaten to publish intimate images without a person’s consent, including AI-created “deepfakes.”
—
Kelly Corbett
syracuse.com
(TNS)
©2026 Advance Local Media LLC. Visit syracuse.com. Distributed by Tribune Content Agency, LLC.