Grok Imagine sparks global probes after generating nonconsensual sexualized AI images, including of minors
Elon Musk's Grok and Grok Imagine face international probes after generating nonconsensual sexualized images, including depictions of minors; governments demand urgent safety fixes and content removal.

Elon Musk’s xAI announces it has raised $20bn amid backlash over Grok deepfakes

Grok Is Generating About 'One Nonconsensual Sexualized Image Per Minute'

Grok Is Pushing AI ‘Undressing’ Mainstream

Grok AI is creating explicit images of women, children. They want answers.
Overview
Victims, including Ashley St Clair, report being depicted in Grok-generated nonconsensual sexualized images; Grok Imagine's image-editing and text-to-image features amplified misuse and harassment on X.
AI Forensics found that about 2% of 20,000 Grok-generated images depicted minors, including roughly 30 images of young females in revealing clothing, raising child-protection alarms.
Regulators and prosecutors across the UK, EU, France, India, Malaysia, Brazil and Poland have opened inquiries, requested information, or called for investigations into Grok's outputs.
British authorities contacted X and xAI under the Online Safety Act, which requires platforms to prevent and remove child sexual abuse material once they become aware of it.
Advocates and lawmakers urged disabling Grok's AI image functions pending the probes; Poland and India pressed Musk's platforms for stronger safeguards and floated potential new digital-safety laws.
Analysis
Center-leaning sources frame this story by emphasizing the ethical and legal implications of AI-generated content, focusing on broader societal impact rather than individual blame. They highlight the lack of adequate safeguards and the need for regulatory oversight, using neutral language to present the issue as a systemic problem requiring industry-wide solutions, for example by quoting government officials and advocacy groups calling for accountability and regulation.