
The United Kingdom will officially criminalize the creation of sexual images of individuals without their consent, including those produced using artificial intelligence, following widespread backlash over abuses linked to Elon Musk’s Grok chatbot.
Technology Secretary Liz Kendall confirmed in Parliament that the new offense will come into force this week under the Data (Use and Access) Act, marking a major escalation in the government’s effort to combat AI-powered image abuse.
The announcement follows a controversy in which Grok, the AI assistant built into X, was used to digitally remove clothing from photographs of real people. Women, minors, and even senior UK politicians were among the reported victims. The images spread rapidly before moderation tools were tightened.
Although Musk later restricted Grok’s image-generation tools to verified users, critics accused him of trivializing the issue after he generated a manipulated image of Prime Minister Sir Keir Starmer in swimwear. Musk argued that the restrictions amounted to an attack on free speech.
Kendall rejected that claim, stating that the measure is about protecting individuals from sexual exploitation, not censorship. She emphasized that anyone who creates, or asks a tool to create, such images will now face criminal liability.
Alongside the new offense, the UK government is moving to outlaw so-called nudification apps under the upcoming Crime and Policing Bill. The legislation will also target companies that supply tools designed to produce non-consensual sexual images.
Meanwhile, the UK media regulator Ofcom has launched a formal investigation into X to determine whether it breached its obligations under the Online Safety Act. The law requires platforms to prevent users from encountering illegal content and to implement effective safeguards. Penalties could include fines of up to ten percent of global revenue or, in extreme cases, platform blocking within the UK.
International reaction has been swift. Malaysia, Indonesia, and India have temporarily restricted access to Grok after repeated reports of sexually manipulated images involving women and minors. Regulators in those countries accused X and xAI of relying too heavily on user reporting rather than proactive moderation.
Human rights groups say the damage is already extensive. Several victims have reported having dozens of explicit images created using their likeness without consent, highlighting the psychological and reputational harm caused by AI image abuse.
Legal experts warn that enforcement will be challenging due to limited police resources, but they agree that the legislation sends a strong signal that AI platforms and users alike will be held accountable.
Despite Musk’s criticism of the crackdown, UK officials remain firm. Kendall stated that platforms profiting from harmful technology must accept legal responsibility for its consequences.
As Ofcom’s investigation continues, the future of Grok and possibly X itself in the UK market remains uncertain. The outcome is expected to influence global standards for AI regulation and online safety.