Grok Generated 6,700 Nudifying Images Per Hour. Musk Says He Saw 'Literally Zero.'
Grok generated over 160,000 sexualized images daily, some depicting minors. Now regulators across Europe, Asia, and Australia are investigating, and Apple and Google face pressure to remove X from app stores. Their silence speaks volumes.
For roughly two weeks starting in late December, users turned Elon Musk's AI chatbot into a tool for stripping clothes off women and children at industrial scale. In one 24-hour analysis reported by Bloomberg, Grok produced about 6,700 sexually suggestive or "nudifying" images per hour. That's over 160,000 such outputs per day at that pace, generated by a tool built into one of the world's largest social networks and marketed under a feature called "Spicy Mode."
California Attorney General Rob Bonta announced an investigation into xAI on Wednesday, joining a growing list of regulators across Europe, Asia, and Australia. Indonesia and Malaysia have already blocked access to Grok entirely. The UK, EU, France, India, and Australia have opened formal inquiries. Ofcom, the UK's media regulator, can levy fines up to 10% of X's worldwide revenue, or ban the platform altogether.
Musk's response? "I am not aware of any naked underage images generated by Grok. Literally zero."
That claim is difficult to square with reality. Grok itself apologized for generating an image of "two young girls (estimated ages 12-16) in sexualized attire" on December 28. More than half of the 20,000 images generated between Christmas and New Year's depicted people in minimal clothing, according to analysis cited by California's Department of Justice. Some appeared to be children.
The Breakdown
• Grok generated ~6,700 "nudifying" images per hour in one 24-hour analysis, 85x more than the next five nudify sites combined
• Regulators across Europe, Asia, and Australia now investigating; Indonesia and Malaysia have blocked Grok
• X's "solution" was a paywall, turning child exploitation into a premium feature
• Apple and Google face pressure to remove X from app stores but haven't responded
A business model for harassment
xAI didn't stumble into this disaster. The company designed Grok with a "Spicy Mode" and used it as a marketing differentiator, positioning the chatbot as the uncensored alternative to competitors who actually bothered with safety guardrails. When the "edit image" button launched in late December, users discovered they could tag @grok under any photo on X and request modifications. "Put her in a bikini." "Make her clothes dental floss." "Put donut glaze on her chest."
Grok complied. Thousands of times per hour.
Carrie Goldberg, a lawyer specializing in online sex crimes, put it bluntly to Bloomberg: "We've never had a technology that's made it so easy to generate new images." And she's right. The internet's other top five "nudifying" websites averaged 79 images per hour combined in the same period that Grok was cranking out 6,700. That's not a margin difference. It's a species difference. Grok wasn't competing with other nudify sites. It was operating in a different weight class entirely.
When the backlash hit, xAI's media team responded with an automated reply dismissing coverage as "Legacy Media Lies." When pressed further, the company offered silence. The Atlantic tried the investors instead: Andreessen Horowitz, Sequoia, BlackRock, Morgan Stanley. Most wouldn't comment. Some didn't bother responding at all. Morgan Stanley initially claimed it couldn't find documentation that it had invested in xAI, until reporters sent over xAI's own press release listing the bank as a key investor.
The paywall "solution"
X's response arrived on January 10: image generation restricted to paying subscribers only. Keir Starmer, the UK prime minister, had called the images "horrific," "disgusting," and "shameful." Putting child exploitation behind a paywall doesn't prevent that harm. It monetizes it.
Musk has framed the entire controversy as a free speech battle. He's reshared posts calling regulatory intervention "retarded" and accused the UK government of "fascism." When Senator Ted Cruz, a co-sponsor of legislation criminalizing deepfake pornography, wrote that the Grok-generated images were "a clear violation" of the law, Musk seemed unconcerned. The next day, Cruz posted a photo of himself with his arm around Musk. "Always great seeing this guy."
But the governments circling xAI aren't interested in Musk's culture war framing. Indonesia went furthest: deepfakes, according to Communications Minister Meutya Hafid, violate "human rights, dignity and the safety of citizens." Full stop. Brussels ordered X to preserve every Grok-related document through the end of 2026. That's not a warning. That's investigators building a file. Ofcom opened a formal investigation on January 12.
The accountability vacuum
What makes the Grok scandal different from previous AI controversies isn't the technology. It's who takes the blame when things go wrong.
When Google Gemini generated racially diverse Nazis two years ago, Google temporarily disabled the bot's image generation to fix the problem. xAI's approach was different: blame the users. "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content," Musk wrote. X spokesperson Victoria Gillespie offered the same deflection, noting that users face consequences for creating illegal content.
That's not a safety policy. It's a liability shield dressed up as one. The platform builds the tool, markets the tool, profits from the tool, then points at the users when the tool produces exactly the content its design encouraged.
Child safety advocates say they warned xAI about the potential for abuse before launch. Tyler Johnston of The Midas Project told reporters: "That's basically what's played out." Stefan Turkheimer, VP of public policy at RAINN, pushed back on xAI's characterization of the abuse as "isolated cases." Survivors, he noted, experience ongoing harassment from manipulated content. The images don't disappear when the news cycle moves on.
Where the investigations lead
California's investigation presents the most direct threat to xAI's operations. AG Bonta has been aggressive on AI safety, meeting with OpenAI in September over concerns about child interactions and sending warning letters to a dozen major AI companies. Governor Gavin Newsom backed the investigation publicly, calling xAI's platform "a breeding ground for predators."
But the political landscape is complicated. Musk sits at the center of the Trump administration's power structure, and the State Department has already signaled it might intervene if the UK moves to ban X. Sarah B. Rogers, under secretary of state for public diplomacy, said America "has a full range of tools" to maintain "uncensored internet access in authoritarian, closed societies," apparently placing the UK in that category for considering enforcement of its own laws.
The Senate unanimously passed the Defiance Act on Tuesday, allowing victims of deepfake pornography to sue producers and distributors. Whether that law can reach platform operators who build the tools, rather than just the users who prompt them, remains untested. Lawmakers haven't kept pace with generative AI, and Grok exploits every gap in the legal architecture.
What the silence reveals
Apple and Google hold the real power here. Both companies host X and Grok in their app stores. Both have policies prohibiting apps that promote sexualization of minors or facilitate illegal activity. A coalition of nearly 30 women's and child safety groups sent letters to Tim Cook and Sundar Pichai on Wednesday demanding removal. Three Democratic senators made the same request last week.
Neither company has responded.
That silence tells you everything about where the incentives actually point. Musk is too big to ban, too connected to challenge, too rich to regulate. His investors know it. His infrastructure providers know it. His political allies know it. And Musk knows they know it.
The investigations will grind forward. Ofcom will issue findings. California will pursue whatever legal theory survives contact with Musk's lawyers and the administration's hostility to state-level AI regulation. Some jurisdictions will extract settlements. Others will be outmaneuvered.
But the fundamental dynamic won't change until someone with actual leverage decides to use it. The app stores have that leverage. So far, they've chosen not to exercise it.
Grok continues to generate images. The Edit Image button remains live. The victims, as Turkheimer noted, experience ongoing harassment. And Musk, 6,700 images per hour later, claims he was "not aware" of any naked underage images generated by his AI.
Literally zero.
Frequently Asked Questions
Q: What is "Spicy Mode" on Grok?
A: Spicy Mode is xAI's marketing term for Grok's ability to generate explicit content without the safety restrictions found in competing AI chatbots. The company positioned it as a feature, not a bug, advertising Grok as the "uncensored" alternative to tools like ChatGPT and Claude.
Q: How does Grok's output compare to other AI image generators?
A: In one 24-hour analysis, Grok produced about 6,700 sexually suggestive or "nudified" images per hour. The internet's other top five nudifying sites averaged just 79 images per hour combined. Grok generated roughly 85 times more such content than its nearest competitors.
Q: Which countries have taken action against Grok?
A: Indonesia and Malaysia have blocked Grok entirely. The UK, EU, France, India, and Australia have opened formal inquiries. California AG Rob Bonta announced a probe on January 14, 2026. The European Commission ordered X to preserve all Grok documents through 2026.
Q: What penalties could X face in the UK?
A: Under the UK's Online Safety Act, Ofcom can fine X up to 10% of its global revenue or £18 million, whichever is greater. In extreme cases, Ofcom can seek a court order forcing internet service providers to block access to X entirely in the UK.
Q: What is the Defiance Act?
A: The Defiance Act, passed unanimously by the Senate on January 14, 2026, allows victims of nonconsensual deepfake pornography to sue those who produce and distribute such content. Whether it applies to platform operators like xAI, not just individual users, remains legally untested.