Gate News reports that on March 25, the city of Baltimore, USA, officially filed a lawsuit against Elon Musk's X Corp., xAI, and SpaceX, accusing their generative AI tool Grok of producing unauthorized sexualized images, including content involving minors, in violation of local consumer protection laws. With no federal AI legislation in place, the case is seen as an important test of AI regulation at the local level.
The lawsuit states that Grok can alter images of real people with minimal prompting, including through a "de-clothing" feature, which could cause serious harm to users' privacy and mental health. DiCello Levitt, the law firm representing the plaintiffs, said the system's design and deployment pose obvious risks without sufficient restrictions. Baltimore Mayor Brandon M. Scott emphasized that deepfake content involving minors could cause long-term trauma.
Legal expert Ishita Sharma, a partner at Fathom Legal, noted that the key issue is establishing responsibility for the AI system's output. If the court determines that Grok is an "active content creator" rather than a neutral tool, xAI could face greater legal liability. Such a ruling could reshape the legal boundaries for AI companies engaged in content generation.
The lawsuit cites data indicating that between late December 2025 and early January 2026, Grok generated between 1.8 million and 3 million sexualized images, approximately 23,000 of which involved minors. Image-generation activity on the platform also rose sharply after Musk's public interactions, further intensifying regulatory pressure.
Investigations into Grok are currently underway in both the US and Europe, with related cases still in progress. Market attention is shifting from technological innovation to compliance and risk management, and the outcome of this case could set an important precedent for AI content regulation. (Decrypt)