Legal Risks, Compliance, and Platform Liability
The recent controversy surrounding Grok, a generative AI system developed by xAI and deployed on the social media platform X, has brought renewed attention to the legal risks associated with AI image generation and manipulation under UK law.
Following reports that Grok’s image generation and editing capabilities were used to create non-consensual sexualised imagery, including content involving minors, the UK communications regulator Ofcom has opened a formal investigation into X’s compliance with the Online Safety Act 2023.
Although regulatory inquiries are ongoing and no findings have been made yet, the episode highlights important compliance considerations for technology companies and AI developers deploying generative AI tools in the UK.
What Has Happened? Factual Developments in the Grok Controversy
In late December 2025 and early January 2026, Grok’s image editing feature was used to manipulate user-submitted photographs in ways that created sexualised images of women and children without their consent. In some cases, users explicitly asked Grok to “remove the clothes” of pictured individuals, producing images that human rights groups and safety advocates characterised as non-consensual and harmful.
The public reaction was swift, with governments and regulators around the world taking action. Malaysia and Indonesia, for example, temporarily blocked access to Grok over concerns about obscene and offensive content generated by the tool.
In response to international criticism, X announced that it would restrict access to Grok’s image generation and editing functionality, making it available only to verified, paying subscribers. Critics have called this change insufficient, arguing that it does not address the core safety problem and may effectively monetise misuse.
In the United Kingdom, Ofcom announced a formal investigation into X over reports that Grok was used to create sexualised AI images, including images that may meet the legal definitions of non-consensual intimate imagery or even child sexual abuse material (CSAM). The regulator made urgent contact with X and set strict deadlines for evidence of the steps taken to comply with UK legal obligations.
UK Regulatory Framework: The Online Safety Act and AI-Generated Content
Ofcom is responsible for enforcing the Online Safety Act, which applies to a wide range of user-to-user and content-hosting services. The Act imposes statutory duties to prevent and mitigate exposure to illegal and harmful content, regardless of whether that content is created by humans or generated using artificial intelligence.
In the case of Grok, Ofcom’s investigation is understood to focus on whether the platform:
- carried out appropriate risk assessments before deploying AI image-generation and editing features;
- implemented proportionate safeguards to address foreseeable misuse;
- prevented the creation or dissemination of illegal content, including non-consensual intimate images and child sexual abuse material; and
- maintained effective content moderation and reporting systems.
Under UK online safety law, failure to comply with these duties can expose service providers to significant financial penalties, mandatory compliance orders and, in severe cases, action to restrict or block non-compliant services within the UK.
Legal Issues Raised by the Grok Incident
From a UK legal perspective, AI image generation presents particular regulatory and reputational risks.
Non-consensual intimate imagery is unlawful under UK law, and this applies equally where images are digitally created or altered using AI. The synthetic nature of the content does not neutralise the harm to individuals or reduce regulatory scrutiny. Platforms can be held accountable for failing to prevent such unlawful abuse.
Sexualised depictions of minors, even where algorithmically created, may be treated by UK regulators as falling within the strict legal prohibitions on child sexual abuse material. The mere ability of a tool to produce such content without safeguards can trigger regulatory intervention. Regulators expect platforms to take proactive steps to ensure that AI systems cannot be exploited in this way.
A further factor is foreseeability. The misuse of generative AI for sexualised or abusive imagery is well documented, particularly in light of global reporting on nudification apps and AI deepfakes. This foreseeability places an expectation on technology developers and platform operators to implement meaningful safety-by-design measures before deploying features.
For AI-enabled services, this means building legal compliance and user protection into the architecture of the product, rather than relying solely on reactive moderation once harm has occurred. Measures such as restricting access to AI image tools, introducing payment tiers, or limiting prompts may form part of a broader compliance strategy, but they are unlikely to be sufficient on their own if the underlying system continues to generate harmful outputs. Reacting to public backlash after the fact does not satisfy modern UK online safety obligations: UK regulators have made clear that where risks are foreseeable, businesses are expected to anticipate them and embed safeguards at the design and deployment stage.
Practical Implications for AI Developers, Platforms, and Commercial Users
While Ofcom’s investigation is directed at a major social media platform, the Grok incident serves as a cautionary case study for businesses developing or integrating generative AI capabilities in the UK, whether through consumer-facing products, enterprise tools, or licensed APIs.
This is particularly relevant for:
- AI developers licensing models to third parties;
- platforms integrating third-party generative AI into their services; and
- businesses operating across multiple jurisdictions with differing online safety and content regulation regimes.
From a commercial and IP perspective, contractual arrangements, licensing terms, and risk allocation provisions should be reviewed alongside regulatory compliance. Enforcement action against one party in the AI supply chain can have downstream legal and reputational consequences for others. Businesses developing or deploying generative AI should therefore:
- Conduct rigorous pre-deployment risk assessments for AI features that manipulate or generate user content.
- Implement robust technical controls and moderation safeguards designed to prevent the production and dissemination of harmful content.
- Ensure clear reporting and takedown mechanisms that operate quickly and transparently in response to harmful outputs.
- Document and be prepared to demonstrate compliance with UK online safety law, especially where foreseeable misuse could lead to harm.
Conclusion
The outcome of Ofcom’s investigation into Grok will be closely watched by regulators and technology businesses alike. More broadly, it reflects the UK’s intention to apply existing online safety and content laws rigorously to emerging AI technologies.
With Ofcom’s formal investigation now underway and international regulatory pressure mounting, technology companies must reassess the legal exposures associated with content manipulation features. Proactive legal oversight, clear governance structures, and compliance-driven product design are now essential to managing risk under UK law.
As the regulatory response evolves, so too will expectations for safeguarding user rights and preventing harm in the era of powerful generative AI.
How To Get In Contact
If you require assistance on these matters, speak with our Technology Lawyers on +44 (0)204 600 9907 or email info@culbertellis.com.
Accurate at the time of writing. This information is provided for general information purposes only and should not be relied upon as legal advice.