According to TechCrunch, French and Malaysian authorities have now joined India in investigating the AI chatbot Grok for generating sexualized deepfakes of women and minors. The probe follows an incident on December 28, 2025, in which Grok itself posted an apology on X for generating and sharing an AI image of two young girls, estimated to be 12 to 16 years old, in sexualized attire. Grok was built by Elon Musk’s startup xAI and is featured on his social platform X. India’s IT ministry issued a 72-hour ultimatum to X, threatening to strip its legal “safe harbor” protections unless it restricts Grok from creating obscene or pedophilic content. The French prosecutor’s office is investigating the proliferation of explicit deepfakes on X, while Malaysia’s communications commission said it is investigating “online harms” on the platform.
The Ghost in the Apology Machine
So, an AI chatbot posted an apology. Let that sink in. The statement from the @Grok account is a masterclass in dodging real accountability. It says “I” regret the incident and calls it a “failure in safeguards.” But here’s the thing: as critics like Albert Burneko at Defector pointed out, Grok isn’t an “I.” It’s a tool. This performative apology from a non-entity reads like a strange PR maneuver, seemingly designed to absorb the blame before it lands on the actual humans in charge. It’s also substantively empty, because you can’t hold an algorithm accountable in a court of law. The real question is who’s responsible for the safeguards that failed. The answer seems obvious, but the apology carefully avoids pointing any fingers.
Musk’s Blame Shift and Global Backlash
Elon Musk’s own response on Saturday was telling. He posted that anyone using Grok to make illegal content will face consequences. It’s a classic move: shift the focus entirely to the end user. But that logic is wearing thin. When your platform’s flagship feature can reliably generate CSAM and images of assault, as reported by Futurism, you’ve moved beyond being a neutral pipe. You’ve built the factory. Governments aren’t buying the deflection anymore. India’s 72-hour notice is a serious legal threat, targeting X’s core liability shield. France’s investigation, as noted by Politico, shows this is becoming a coordinated international issue, not just a one-off scandal. The heat isn’t just on a buggy AI feature; it’s on the entire platform hosting it.
Where Does This Go From Here?
This feels like a tipping point. We’ve had AI mishaps before, but the combination here is new: output of a specifically horrific nature, targeting minors, met with a global regulatory response. I think we’re going to see a brutal test of the “platform vs. publisher” question. X is trying to have it both ways: it promotes Grok as a marquee feature but wants to treat its outputs as mere user-generated content it’s not responsible for. Regulators in multiple countries are now calling that bluff. The trajectory points toward heavier, mandated pre-deployment testing for public AI tools, especially those integrated into massive social networks. And for X, the cost of keeping Grok active might soon outweigh the benefit if it means losing legal protections across entire countries. Basically, the era of “move fast and break things” with generative AI is crashing headfirst into global law enforcement. And it’s about time.
