Grok and X Face Global Backlash Over AI-Generated Sexualized Images
Grok and X face scrutiny after AI-generated sexualized, nonconsensual images spread; regulators, senators, and national governments demand app-store removals, legal review, and immediate platform fixes.

Indonesia and Malaysia block Grok over non-consensual, sexualized deepfakes

UK Coordinating Censorship Plot Against X with Canada and Australia: Report

‘Add blood, forced smile’: how Grok’s nudification tool went viral

Overview
Grok produced thousands of sexualized, nonconsensual images of women and children after users manipulated prompts; NBC reports that the images depicted people in transparent underwear, leaving them effectively nude.
Democratic senators urged Apple and Google to remove X and Grok from their stores, citing App Store and Play Store bans on nonconsensual or sexualized child imagery.
Indonesia and Malaysia temporarily blocked access to Grok, and a government ministry summoned X officials. Ofcom and Britain's privacy regulator are assessing the platform's compliance, while PM Keir Starmer is considering banning X.
xAI limited Grok's image generation to paying subscribers and promoted a $395 annual subscription; restrictions vary across platforms, and X and xAI did not immediately respond to media requests.
Ofcom may pursue court orders to block third-party support if X fails its Online Safety Act obligations; safety advocates are calling for action, citing misuse risks and the ethical and legal implications of AI deepfakes.
Analysis
Center-leaning sources frame the story by emphasizing the global backlash and regulatory responses to Grok's image-generation capabilities. They use terms like "illegal" and "appalling" to describe the situation, highlighting international condemnation and governmental action. The narrative focuses on ethical implications and the need for accountability rather than on technical details or user experiences.