Social platform X (formerly Twitter) and its xAI-built chatbot Grok are facing a fast-escalating global backlash after users generated non-consensual sexualized images—including some depicting minors—prompting national bans, regulatory investigations and potential legal action. Authorities in multiple jurisdictions say the outputs may violate criminal and online safety laws, while X’s handling of the issue is under formal scrutiny.
What happened
Reports over the past week documented Grok being used to create “nudified” and sexualized deepfakes of women and children, which were then shared through X accounts. The UK communications regulator Ofcom opened a formal investigation on January 12, 2026, under the Online Safety Act, citing “deeply concerning” allegations and warning that violations could draw significant fines.
Amid the outcry, high-profile victims emerged. Sweden’s deputy prime minister, Ebba Busch, said AI-generated images of her circulated on X, underscoring the personal harms at stake and adding political pressure for rapid enforcement.
Where Grok AI is restricted
On January 10–12, Indonesia and Malaysia became the first countries to block or restrict access to Grok, citing human rights and child-protection concerns. Regulators said notices had been issued to X and xAI to implement stronger safeguards; Malaysia has since announced plans for legal action.
Other governments signaled enforcement moves. India issued a formal notice requiring X to act within 72 hours or face action under cybercrime and child-protection laws. In France, prosecutors folded the deepfake issue into an ongoing probe of X, which had been widened in late 2025 to cover Grok’s outputs after the chatbot produced Holocaust-denial content.
Regulatory and legal responses intensify
The European Commission ordered X to preserve all internal documents and data related to Grok until the end of 2026, an evidence-retention step that often precedes deeper proceedings under the EU’s Digital Services Act. Commission spokespeople have publicly criticized Grok’s “spicy mode,” noting that any sexualized depiction of minors is illegal.
In the UK, the government is fast-tracking a new criminal offense targeting non-consensual intimate deepfakes and designating it a “priority offence” under the Online Safety Act, obliging platforms to prevent such content proactively while Ofcom’s X investigation proceeds.
How X and xAI responded
On January 9, following criticism and regulatory threats, xAI limited Grok’s image generation and editing features to paying subscribers, saying it was addressing safeguard failures. However, regulators and researchers argue that access limits alone do not fix the underlying risks of non-consensual and illegal content creation.
X and xAI have not publicly detailed comprehensive new guardrails beyond the paywall restriction, even as several agencies—including Ofcom, Indian authorities and French prosecutors—seek explanations or impose obligations.
Why the Grok AI deepfakes matter
The Grok controversy illustrates how consumer-facing AI image tools can be weaponized at scale to produce sexualized deepfakes, including content that may constitute child sexual abuse material. The response from regulators—bans, formal investigations, data-preservation orders and new criminal statutes—signals a turning point: platforms may face liability not only for hosting abusive images but for deploying generative systems that create them. For users, the episode highlights the need for robust consent-based safeguards; for companies, it underscores that “safety by design” is rapidly becoming a legal expectation rather than an optional policy.
In the coming days, attention will focus on whether X implements technical controls that prevent non-consensual sexual imagery at source, how quickly harmful posts are removed, and whether investigative and legal measures in the UK, EU and Asia culminate in fines or further restrictions. The actions already taken—national blocks, an Ofcom probe and EU preservation orders—suggest enforcement momentum is building across jurisdictions.