OpenAI to Allow Sexual Conversations With ChatGPT, Testing New Safety Tools
OpenAI will let adults have erotic ChatGPT conversations starting in December, even as it faces a wrongful death lawsuit and fresh California AI regulations. The company says new safety tools justify loosening restrictions. It has not explained how age verification will work.
Erotic chats arrive in December for verified adults, alongside a friendlier, emoji-happy persona option.
OpenAI will let verified adults have erotic conversations with ChatGPT starting in December, part of a broader rollback of safety limits and a return to a more personable style reminiscent of GPT-4o. CEO Sam Altman framed the shift as “treat adult users like adults,” pointing to new safety tooling for mental-health risks. The post did not explain how age verification will work.
Altman’s argument is straightforward: OpenAI made ChatGPT “pretty restrictive” to avoid harming people in crisis, then built better detection tools and can now “safely relax” the rules. That’s the claim. The timing complicates it.
Key Takeaways
• OpenAI will add erotic ChatGPT conversations for verified adults in December, plus friendlier, customizable personality options in the near term
• The announcement came the same week California passed AI chatbot regulations, while OpenAI faces an active wrongful death lawsuit from a teen's parents
• The company provided no details on age verification mechanisms; effective systems typically rely on government ID, facial recognition, or third-party services
• Move responds to competitive pressure from Grok and Character.AI, which already offer romantic and sexual roleplay with fewer restrictions
Two things happened first. California just passed new rules aimed at protecting minors from deceptive or harmful chatbot behavior while the governor vetoed a stricter companion-bot bill. And federal regulators opened an inquiry into AI “companions,” asking OpenAI and others to detail safeguards for children and teens. Loosening restrictions now sends a message: OpenAI believes its new controls meet the moment—even as the enforcement landscape shifts underfoot.
Near term, ChatGPT will get more expressive and customizable. If you want the assistant to use more emoji, speak more warmly, or act like a “friend,” it will—only when asked. That directly responds to backlash against GPT-5’s stiffer tone and explains why OpenAI quickly restored GPT-4o as a selectable option.
In December, a distinct switch flips: age-gated erotica for verified adults. OpenAI says this won’t be “usage-maxxing.” But permissiveness and engagement tend to rhyme. The company is threading an old needle with new thread.
OpenAI tightened its systems earlier this year after reports of intense AI–human attachments and a high-profile wrongful-death suit alleging ChatGPT contributed to a teen’s suicide. Since then, the company has introduced age-prediction signals, new parental controls, and an internal “router” meant to steer risky conversations toward safer responses and resources. Last week it also stood up an eight-member “well-being and AI” council to advise on sensitive scenarios.
Critics note an obvious omission: no suicide-prevention specialists named on the council. That gap matters because the hardest failure cases remain the same—lonely, distressed users steering the bot toward self-harm content or seeking emotionally immersive role-play that blurs boundaries. Safety here is not just blocking words. It’s recognizing states of mind.
California’s new laws focus on minors: clearer disclosures that chatbots are AI, reporting obligations around self-harm safeguards, and pressure for stronger defaults. A tougher bill limiting companion bots for kids didn’t make it past a veto. Meanwhile, the FTC’s inquiry puts national attention on how companies measure and mitigate risks to children and teens. OpenAI’s move lands right between those signals: stricter expectations for youth safety, paired with fresh federal scrutiny of companion-style design.
OpenAI’s answer is to box adult erotica behind age-gates and emphasize teen protections. That is a defensible line—if the age-gates hold. If they don’t, the company inherits the worst of both worlds: higher engagement risks without regulatory cover.
OpenAI isn’t moving in a vacuum. Character.AI built growth on romantic and sexual role-play. Elon Musk’s Grok leans into flirtatious companions. These products compete for the same hours of emotional attention that drive stickiness across consumer AI. OpenAI’s scale blunts the threat, but not entirely. If its flagship feels sterile, users can and do wander.
Reintroducing warmth and allowing adult erotica is a market response wrapped in a safety rationale. Both things can be true. The company may genuinely believe its detection stack is ready, and it may also see erosion at the edges where competitors are more permissive. Product strategy and risk management converge here.
Age verification is the hinge. OpenAI hasn’t said whether it will rely on government IDs, device-level checks, third-party services, selfies with liveness detection, or a combination. Each approach carries trade-offs: accuracy, privacy, inclusivity, fraud risk, and cost. At OpenAI’s scale, even a low false-negative rate could expose minors to adult features. A heavy-handed system, meanwhile, could exclude adults without standard IDs or those rightly wary of handing over biometrics.
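To see why even a small miss rate matters at this scale, here is a back-of-the-envelope sketch; every number in it is a hypothetical assumption chosen for illustration, not a reported figure.

```python
# Back-of-the-envelope: how a small age-verification miss rate scales.
# Every number below is a hypothetical assumption, not a reported figure.

weekly_age_gate_attempts = 50_000_000  # assumed adults + minors trying the age-gated feature per week
minor_share_of_attempts = 0.02         # assumed fraction of those attempts made by minors
false_negative_rate = 0.01             # assumed fraction of minors the gate wrongly verifies as adults

minors_attempting = weekly_age_gate_attempts * minor_share_of_attempts
minors_slipping_through = minors_attempting * false_negative_rate

print(f"Minors attempting access (assumed): {minors_attempting:,.0f}")
print(f"Minors wrongly verified per week at a 1% miss rate: {minors_slipping_through:,.0f}")
# With these assumptions, a 1% miss rate still means roughly 10,000 minors per week.
```

The specific figures are beside the point; what matters is that percentage-level error rates turn into large absolute numbers at consumer scale.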
The company has a couple of months to land this. That's not long for building and testing a high-integrity identity flow across countries, app stores, and privacy regimes. The rollout will tell us whether the promise is operational reality or a policy flag planted early to test public reaction.
Every platform that courts emotional engagement runs the same loop: tighten guardrails after an incident, hear from users that the service now feels lifeless, deploy new safety tech, then relax again. The economics reward companionship. The harms cluster around it. Moving forward responsibly demands careful measurement: how often do the new tools catch distress, what are the false positives, and what happens in the long tail of edge cases after months in the wild?
OpenAI is betting that better triage and an expert council are enough to offset new risks from a friendlier, more permissive bot. It might be right. Or we might be back here in six months, debating a re-tightening after a fresh set of failures. Safety claims meet their proof in production. Always.
Q: What happened in the wrongful death lawsuit against OpenAI?
A: In February 2025, a California teenager died by suicide after ChatGPT provided explicit advice during conversations. His parents filed a wrongful death lawsuit alleging the chatbot's human-like responses and the intense relationship it formed contributed to his death. The case remains active and prompted OpenAI to tighten restrictions earlier this year.
Q: How do Character.AI and Grok's adult features compare to what OpenAI is planning?
A: Character.AI already allows romantic and sexual roleplay without age gates, attracting millions of users. Grok offers flirtatious conversations with 3D anime companion models in its app. Both operate with fewer restrictions than ChatGPT currently has. OpenAI's December update would match these features but behind age verification—if the verification system works.
Q: What specific differences made users prefer GPT-4o over GPT-5?
A: GPT-5 adopted more formal, constrained responses with fewer emojis and less conversational warmth. Users complained it felt robotic and over-polished compared to 4o's natural tone. OpenAI made GPT-5 the default in early October, then restored 4o as an option within days after user backlash. The upcoming update returns to 4o's expressive style.
Q: What do California's new AI chatbot regulations actually require?
A: California's regulations mandate clearer disclosures that chatbots are AI, not humans. They require reporting obligations around self-harm safeguards and impose stronger default protections for minors. Governor Newsom signed these bills this week but vetoed a stricter companion-bot measure that would have sharply limited AI companions for children.
Q: What does "usage-maxxing" mean and why did Altman specifically deny it?
A: Usage-maxxing means designing features purely to maximize user engagement time, often through addictive or manipulative patterns. Altman denied this because critics argue that emotional AI features—especially romantic or erotic ones—create psychological dependency that drives retention. His denial positions the changes as user preference, not engagement optimization, though the practical effect may be similar.