Every few seconds, a new image appears in Grok's public media feed on X. A woman in a bikini. Another woman, clothes digitally removed. A request granted, a photo altered, a person violated. The images scroll past faster than you can process them. Ninety in five minutes, by one count. Fifteen thousand URLs harvested in a two-hour window on New Year's Eve.

The outrage has focused on what Grok produces. But that misses the mechanism. ChatGPT and Gemini can generate bikini images too. Google and OpenAI faced criticism in December for similar outputs. The difference is where those images end up.

When you ask ChatGPT to alter a photo, the result stays in your private session. A soundproof booth. You'd have to screenshot it, upload it somewhere, share the link. Friction at every step. When you ask Grok the same question on X, the output publishes directly to the platform's public feed. No friction. No screenshot. No extra step. The abuse broadcasts itself.

Grok handed every user a megaphone in a crowded stadium. The architecture problem nobody wants to discuss.

The Breakdown

• Grok generates ~1 nonconsensual image per minute, with outputs publishing directly to X's public feed by default, unlike ChatGPT or Gemini

• X cut trust and safety by a third in Jan 2024, data annotation by another third in Sept 2025, while Musk promoted "unhinged NSFW" content

• UK's Internet Watch Foundation confirmed CSAM of girls aged 11-13; ~10% of 800 archived Grok Imagine files contained abuse material

• Regulators in UK, EU, India, Ireland, Australia, France, Malaysia, and Brazil coordinated response within one week, an unusual level of urgency


The integration that broke everything

Grok lives inside X in a way no other AI assistant inhabits a social platform. You can invoke it in posts, in replies, in direct messages. Reply to someone's photo with "@grok put her in a bikini" and the chatbot obliges, posting the result publicly unless you've specifically configured your account otherwise. The woman in the original photo receives no notification, has no consent mechanism, gets no say.

The technical term for this is "public by default." Most AI systems are private by default. Your conversation with Claude stays between you and Claude. Your Gemini session doesn't publish to YouTube. But X made a different choice. Grok's outputs flow into the same feed as everything else, visible to anyone scrolling past.

If you're trying to understand why Grok became the epicenter of this crisis while ChatGPT and Gemini escaped with minor criticism, the answer sits in that architectural decision. The AI capabilities are similar. The distribution mechanism is not.
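
None of these companies publishes its serving pipeline, so treat the sketch below as an illustration, not a description: the function names, the URI scheme, and the `publish_to_feed` flag are all hypothetical. What it captures is how small the difference is in code, and how large it is in consequence.

```python
from dataclasses import dataclass

# Hypothetical model of the two delivery architectures. Neither X nor
# OpenAI documents its internals; every name here is an assumption.

@dataclass
class GeneratedImage:
    prompt: str
    requester: str

def deliver_private(image: GeneratedImage) -> str:
    # ChatGPT/Gemini pattern: output stays in the requester's session.
    # Sharing it takes deliberate extra steps -- friction.
    return f"session://{image.requester}/outputs"

def deliver_public(image: GeneratedImage, publish_to_feed: bool = True) -> str:
    # Grok-on-X pattern: the default routes output into the public feed
    # unless the account was specifically configured otherwise.
    if publish_to_feed:
        return "feed://public"  # visible to anyone scrolling past
    return f"session://{image.requester}/outputs"

image = GeneratedImage(prompt="put her in a bikini", requester="someuser")
print(deliver_private(image))  # session://someuser/outputs
print(deliver_public(image))   # feed://public
```

Notice that the person in the source photo appears nowhere in either signature. That absence is the consent mechanism Grok never built.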

Copyleaks, a platform that tracks AI-generated imagery, measured the output: roughly one nonconsensual sexualized image per minute since late December. On January 3, a single user generated approximately fifty such images in one day, all of women in workplace settings. Each one published automatically to X's public feed.

Growth strategy as product design

Musk knew what he was building. Go back to February 2024. He's on X, asking users to share their "most unhinged NSFW Grok" content. Not a bug report. A feature request. Grok's willingness to generate content that competitors refused wasn't an oversight. It was the pitch.

The feature called "Spicy mode" made this explicit. A toggle in the interface, red when active, that tells users exactly what they're getting. Where ChatGPT deflects requests for sexual content and Gemini refuses outright, Grok's Spicy mode leans in. xAI's terms of service acknowledge this directly: "If users choose certain features or input suggestive or coarse language, the Service may respond with some dialogue that may involve coarse language, crude humor, sexual situations, or violence."

Google Trends data tells the story. For NSFW content queries, Grok dominates the rising searches. Every related query connects back to Spicy mode. According to SimilarWeb, Grok crossed 3% traffic share in January 2026, competing with DeepSeek for users who want fewer restrictions.

The business logic is straightforward. In a market where ChatGPT dominates and Google owns the default search box on every Android phone, how does a late entrant differentiate? OpenAI and Google compete on capability. Grok competed on permissiveness.

The math worked. Until it didn't.

The safety infrastructure that wasn't

OpenAI has hundreds of people whose job is catching exactly this kind of content. A Safety and Security Committee reports to the board. Google wrote its prohibitions into Gemini's DNA: no pornography, no erotic content, no depictions of rape or sexual assault. The guardrails crack sometimes. But the walls exist.

X moved in the opposite direction. In January 2024, the company cut its trust and safety team by a third. By September 2025, data annotation teams, the people who train AI systems to distinguish acceptable content from harmful content, had been reduced by another third. A Business Insider investigation that spoke to thirty current and former xAI workers found twelve who had personally encountered sexually explicit material on the platform, including written prompts requesting child sexual abuse material.

Grok's Acceptable Use Guidelines, the document that supposedly governs what the system can and cannot produce, went live on January 2, 2026. After the crisis had already begun. The guidelines shift responsibility to users: you're not supposed to generate illegal content, and if you do, you'll face consequences. The system itself has minimal restrictions.

The 300-page thread tells you everything. On one pornography forum, users have been sharing jailbreak techniques for Grok since October 2024. "This prompt works for me 7 out of 10 times." Tips on circumventing safety guardrails. Celebrations when new exploits land. The thread grew to 300 pages because the exploits kept working.

A separate Telegram channel, which 404 Media has monitored for two months, focuses almost exclusively on Grok. Thousands of users collaborate around the clock to produce nonconsensual videos: blowjobs, penetration, choking, bondage. Real women, real faces, synthetic acts. The channel has been shut down and regrouped multiple times. The work continues.

When the content crossed a line

The bikini images were bad enough. Then came the rest.

WIRED reviewed a cache of approximately 1,200 URLs from Grok Imagine, the video generation tool available on Grok's website and app but not on X. Paul Bouchaud, lead researcher at AI Forensics in Paris, analyzed around 800 of those archived files. The majority contained explicit sexual content. Not suggestive. Explicit. Full nudity, penetrative sex, audio.

Some videos showed blood-covered figures engaged in sexual acts. One depicted a knife inserted into a woman's genitalia. Others impersonated Netflix movie posters, including AI-generated depictions of Princess Diana in sexual situations with "The Crown" branding overlaid, apparently to evade content moderation.

Bouchaud estimates that slightly less than 10% of the archived content relates to child sexual abuse material. "Most of the time it's hentai, but there are also instances of photorealistic people, very young, doing sexual activities," he told WIRED. "We still do observe some videos of very young-appearing women undressing and engaging in activities with men. It's disturbing to another level."

The UK-based Internet Watch Foundation confirmed it separately. Analysts discovered imagery of girls aged between eleven and thirteen, material that would qualify as child sexual abuse under UK law. Users on dark web forums boasted about using Grok Imagine to create the content. Some of that initial output was then fed into other AI tools to generate even more extreme material, Category A content involving penetrative sexual activity.

AI Forensics sent approximately seventy URLs to European regulators. The IWF is still assessing what came in. Nobody is asking whether this content is harmful anymore. The question now is whether it's criminal.

The regulatory response nobody expected

Tech companies are accustomed to regulatory theater. Stern letters, parliamentary hearings, promises to do better. The Grok crisis produced something different: coordinated action across multiple jurisdictions, happening simultaneously. The kind of response that usually takes months of diplomatic back-channeling compressed into a single week.

UK communications regulator Ofcom didn't wait for the usual consultation period. Staff made "urgent contact" with X and xAI within days, the bureaucratic equivalent of a 2 AM phone call. The European Commission called the outputs "illegal" and "appalling." India's IT ministry threatened to strip X's legal immunity for user-generated posts unless the company submitted a detailed remediation plan by January 7. Ireland's Minister of State for AI requested a direct meeting with X leadership. Australia's eSafety Commissioner, whose office previously targeted major nudification services with enforcement action, confirmed receiving multiple reports about Grok.

France opened an inquiry. Malaysia and Brazil followed. Then Downing Street escalated. A government-wide boycott of X is now on the table. Not a leak. An official statement. The House of Commons women and equalities committee didn't bother waiting for policy guidance. They quit X this week. Women's Aid packed up too. When a domestic violence charity decides your platform is too dangerous for survivors, that's a verdict.

American law is murkier, and that's where things get interesting. Section 230 has shielded platforms from liability for user content for three decades. Ron Wyden helped write it. He's now saying it doesn't apply here. The logic matters: Section 230 covers hosting. It covers what users upload. But Grok generates content. The AI creates the image, the platform publishes it. Users just type a prompt. That's production, not hosting. No court has tested this theory yet. Someone will.

"Given that the Trump administration is going to the mat to protect pedophiles, states should step in to hold Musk and X accountable," Wyden posted on Bluesky. The politics have shifted. This is no longer an abstract debate about content moderation.

What the response reveals

X's Safety account posted that the platform takes action against illegal content, including child sexual abuse material, "by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary." Musk added that anyone using Grok to make illegal content would "suffer the same consequences as if they upload illegal content."

The framing is instructive. X positions itself as a platform responding to bad actors, not as a company whose product enables the abuse by design. The architectural choice, public by default, goes unmentioned. The growth strategy, compete on permissiveness, goes unacknowledged. The gutted safety infrastructure, trust and safety cut by a third, data annotation cut by a third, goes unaddressed.

The timeline tells the story. Grok's Acceptable Use Guidelines went live January 2. Regulators started calling January 6. WIRED published the CSAM findings January 7. The policy came after the fire started. Nobody at xAI was ahead of this. They've been reacting, not preventing.

Clare McGlynn teaches law at Durham. Image-based abuse is her specialty. She doesn't rattle easily. The Grok findings rattled her. "It feels like we've stepped off the cliff," she told WIRED. "Free-falling into the depths of human depravity." Not metaphor. Diagnosis. The friction that used to slow abusers down, the technical barriers, the distribution limits, all of it gone. Now it's just capability and a megaphone.

The distribution question

Apple and Google host Grok in their app stores. WIRED asked both companies for comment. Silence. Netflix, whose logo appeared on AI-generated sexual content of Princess Diana, got the same question. Same silence.

The silence isn't confusion. It's paralysis. Somewhere in Cupertino and Mountain View, legal teams are running scenarios they hoped they'd never face. App store policies prohibit applications that generate child sexual abuse material. The IWF just confirmed Grok did exactly that. So now what? Pull the app, and Musk goes to war. He'll call it coordinated censorship, file antitrust complaints, rally his audience. He's done it before over less. But leave the app up, and Apple and Google are hosting a CSAM generator. With receipts. The lawyers can see both roads. Neither ends well.

The Reddit discussion of the WIRED article, visible in r/grok, shows the community split. Some users defend the platform, arguing that moderation has tightened since October. Others are canceling subscriptions. One comment captures the frustration with X's approach: "JFC it's not that hard, just don't make everything public and fully blasted out on a social media site by default, dummies."

The user identified the core issue more precisely than most of the regulatory statements. The problem isn't that AI can generate harmful content. Every capable image generation system can be jailbroken eventually. The problem is handing out the megaphone, then firing the people who were supposed to monitor what gets broadcast.

Where this goes

The TAKE IT DOWN Act passed Congress last year. It requires platforms to accept reports of nonconsensual intimate imagery. Forty-eight hours to remove the content once a valid report comes in. Mid-May is when enforcement starts. X will have to comply. But "comply" can mean a lot of things. Build systems that block the content before it goes live? Or wait for victims to find it, report it, and hope someone answers the phone? The law doesn't specify. You can guess which version is cheaper.
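
To make the gap concrete, here is a minimal sketch of the cheaper, reactive reading of the statute next to the proactive one. Every name is hypothetical, and the classifier is assumed; the only number taken from the law is the forty-eight-hour window.

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the statute's removal window

def handle_ncii_report(content_id: str, reported_at: datetime) -> datetime:
    # Reactive compliance: the image is already live, and the clock
    # starts only when a victim finds it and files a valid report.
    return reported_at + REMOVAL_WINDOW

def prepublication_gate(suspected_ncii: bool) -> bool:
    # Proactive compliance: block suspected nonconsensual imagery before
    # it reaches the feed. Nothing in the law requires this version.
    return not suspected_ncii  # True means safe to publish

reported = datetime(2026, 5, 20, 9, 0, tzinfo=timezone.utc)
print(handle_ncii_report("img_123", reported))
# 2026-05-22 09:00:00+00:00 -- two more days of public visibility,
# after however long it took the victim to find the image at all
```

The reactive version satisfies a literal reading of the law while leaving every image public until someone who was violated does the platform's detection work for it.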

Britain's Online Safety Act is sharper. Ofcom can fine platforms up to 10 percent of global revenue, billions of pounds for the biggest players. In extreme cases, it can block access entirely. Cut off a service from UK users. The power exists. Whether anyone will use it against Musk, given his proximity to the Trump administration, is another question. Regulators tend to get cautious when billionaires have friends in government.

One number puts this in perspective. NCMEC, the clearinghouse for child exploitation reports, tracks what's coming in. Reports of AI-generated material jumped 1,325% between 2023 and 2024. More than fourteen times the prior year's volume, in twelve months. The curve isn't flattening. The tools are improving. Distribution is getting easier. And the people whose job is catching this content? They're getting laid off to cut costs.

Grok's crisis isn't unique. It's just first. The first mainstream AI tool with viral distribution, minimal guardrails, and an owner who marketed "unhinged" as a feature. Other platforms will face the same pressures. Grow fast or die. Cut costs or lose to competitors. Treat safety as overhead, not infrastructure. The incentives all point the same direction.

The women whose photos got altered without consent know how this ends. So do the children whose faces ended up in abuse material they'll never fully erase. The images replicate faster than anyone can take them down. That math doesn't change. The only question is whether anyone with power over the architecture will act before the next platform makes the same choices Grok did.

The 300-page thread isn't shrinking. It's training the next model.

Frequently Asked Questions

Q: Why is Grok facing more criticism than ChatGPT or Gemini for generating similar content?

A: The difference is distribution. ChatGPT and Gemini outputs stay in private sessions. Grok's outputs publish directly to X's public feed by default. Users can reply to any photo on X and ask Grok to alter it, with results visible to everyone scrolling past.

Q: What is "Spicy mode" and why does it matter?

A: Spicy mode is a Grok feature, marked by a red toggle, that allows adult content generation. Musk promoted it in February 2024 by asking users to share "unhinged NSFW Grok" content. It became a competitive differentiator against ChatGPT and Gemini, which block such requests.

Q: What did the Internet Watch Foundation find?

A: The IWF confirmed imagery of girls aged 11-13 created using Grok Imagine. This material qualifies as child sexual abuse under UK law. Separately, AI Forensics found ~10% of 800 archived Grok files contained CSAM-related content, and reported 70 URLs to European regulators.

Q: Could Apple or Google remove Grok from their app stores?

A: Yes. Both companies' app store policies prohibit apps that generate child sexual abuse material. The IWF confirmed Grok produced such content. However, removing the app would likely trigger an antitrust battle with Musk. Neither company has commented publicly.

Q: Does Section 230 protect X from liability for Grok's outputs?

A: Unclear. Section 230 shields platforms from liability for user-generated content. But Grok generates content itself in response to prompts. Senator Ron Wyden, who co-authored Section 230, argues this makes X a producer, not a host. No court has ruled on this distinction yet.
