OpenAI Rolls Out Age Detection for ChatGPT Before Adult Mode Launch
OpenAI's ChatGPT now predicts user age through behavioral signals before "adult mode" launch. Privacy experts warn about accuracy and surveillance.
You open ChatGPT at 11 p.m. to ask about a calculus problem. Three months on the platform. You told the truth about your age when you signed up. You're 34. But OpenAI's new algorithm is watching how you behave and making its own call about whether to believe you.
The company announced Tuesday that ChatGPT will start predicting whether users are minors, using behavioral signals instead of trusting the birthdate people enter at sign-up. OpenAI frames this as child safety. Critics see something else: a nervous company building legal cover before rolling out adult content.
ChatGPT has 800 million weekly users, and a slice of them are minors who've used the chatbot in ways that ended badly. At least one teenage suicide led to a wrongful death lawsuit. OpenAI responded the way companies respond: mental health advisory councils, parental controls, safety features bolted on after the damage surfaced. Now they're trying something different. Get ahead of the problem before the next feature ships.
That feature is "adult mode." Simo, who runs OpenAI's application side, said in December that it would ship in early 2026. Altman has talked about allowing mature content for verified adults. But you can't verify adults until you figure out who isn't one.
Key Takeaways
• OpenAI's age prediction uses behavioral signals like login times and usage patterns, not just stated age
• Adults incorrectly flagged as minors must verify via selfie and government ID through Persona
• Privacy experts warn misclassifications are "inevitable" and accuracy data hasn't been disclosed
• The system paves the way for "adult mode" with NSFW content, expected Q1 2026
OpenAI's age prediction model doesn't ask for ID upfront. It watches how you use ChatGPT and makes an inference. Account age matters, as does the time of day you're active. Usage patterns over time feed the algorithm. Your stated age counts too, but it's just one signal among many.
"Deploying age prediction helps us learn which signals improve accuracy," OpenAI said in its announcement. Translation: they're training the system on live users and adjusting as they go.
Get flagged as a minor and ChatGPT starts filtering. The stuff that gets blocked: violence, self-harm content, sexual roleplay, those viral challenges that send kids to the ER, diet content that probably shouldn't exist in the first place. OpenAI says this comes from adolescent psychology research, how teenagers process risk differently, how peer pressure hits harder when you're 15.
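In policy terms, that reads like a category blocklist that switches on once an account carries the under-18 flag. The categories below paraphrase OpenAI's announcement; the data structure and function are invented here to make the logic concrete:

```python
# Hypothetical policy table: categories paraphrase OpenAI's stated
# restrictions, but this representation is invented for illustration.
MINOR_BLOCKED_CATEGORIES = {
    "graphic_violence",
    "self_harm_depictions",
    "sexual_or_romantic_roleplay",
    "dangerous_viral_challenges",
    "extreme_dieting_and_beauty_standards",
}

def is_allowed(content_category: str, flagged_as_minor: bool) -> bool:
    """Block listed categories only for accounts flagged as under 18."""
    return not (flagged_as_minor and content_category in MINOR_BLOCKED_CATEGORIES)
```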
If you're an adult who gets flagged as a minor? Persona, the third-party ID service, asks for a selfie. Hold your phone at eye level. Tilt your head. Blink when it tells you to. The system matches your face against a government ID, checks your birthdate, then supposedly deletes everything within a week. OpenAI says this escape hatch will "always" be there.
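Neither OpenAI nor Persona has published integration details, so the flow below is a generic reconstruction. Every helper is a stub standing in for a step OpenAI described, not Persona's actual API:

```python
# Generic reconstruction of the appeal flow described above. None of these
# helpers are Persona's real API; each stub stands in for a described step.
from datetime import date

def face_match(selfie: bytes, id_doc: bytes) -> bool:
    """Stub: stands in for the selfie-to-ID biometric comparison."""
    raise NotImplementedError

def extract_birthdate(id_doc: bytes) -> date:
    """Stub: stands in for reading the birthdate off the government ID."""
    raise NotImplementedError

def schedule_deletion(*artifacts: bytes, after_days: int) -> None:
    """Stub: stands in for the stated seven-day deletion of verification data."""

def verify_adult(selfie: bytes, id_doc: bytes, today: date) -> bool:
    """Match the selfie to the ID, confirm 18+, then queue deletion."""
    birthdate = extract_birthdate(id_doc)
    age = today.year - birthdate.year - ((today.month, today.day) < (birthdate.month, birthdate.day))
    result = face_match(selfie, id_doc) and age >= 18
    schedule_deletion(selfie, id_doc, after_days=7)
    return result
```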
The company defaults to the safer experience when it can't determine age with confidence. That's the cautious choice. It's also the one that will generate the most friction for legitimate adult users who happen to exhibit teenage-coded behavior, whatever that means to an algorithm.
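Spelled out, "default to the safer experience" is an asymmetric decision rule: confident calls go either way, and uncertainty resolves to restricted. The cutoffs below are invented; OpenAI hasn't disclosed any:

```python
# Invented thresholds: OpenAI has not disclosed its confidence cutoffs.
def assign_experience(p_minor: float) -> str:
    if p_minor >= 0.8:
        return "minor"   # confident: restricted experience
    if p_minor <= 0.2:
        return "adult"   # confident: full access
    return "minor"       # uncertain: default to the safer, restricted experience
```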
Aliya Bhatia, a senior policy analyst at the Center for Democracy and Technology, put it bluntly in an interview with Decrypt. OpenAI's approach "raises tough questions about the accuracy of the tool's predictions and how OpenAI is going to deal with inevitable misclassifications."
"Inevitable" is the word doing work there. Who signs up for new platforms first? Kids do. That's always been true, and it means some of ChatGPT's oldest accounts belong to people who were minors back when they created them. NIST looked at age verification accuracy in 2024 and found the obvious: image quality matters, demographics matter, and the closer someone is to 18, the harder the call. Distinguishing a 17-year-old from an 18-year-old is basically guesswork. Distinguishing either from someone who's 30 is not.
CDT polled teachers and students last school year and the gap barely existed: 85% of teachers reported using AI, 86% of students said the same. Half of those students? Using it for homework. Bhatia told Decrypt: "Just because a person uses ChatGPT to ask for tips to do math homework doesn't make them under 18." Fair enough. A high school junior working on calculus and a 35-year-old tutor helping clients will type similar prompts. Both log in after dinner. Both have newish accounts. The behavioral signals don't cleanly sort them. They correlate with age, sure, but imperfectly, and the company admits it's still calibrating.
What OpenAI hasn't said: how many users they expect to misclassify, what bias testing they've done, how the system performs across different demographics. The blog post mentioned continuous improvement. Error rates didn't come up.
J.B. Branch, a big tech accountability advocate at Public Citizen, offered a more cynical read to Decrypt. "These companies are getting sued left and right for a variety of harms that have been unleashed on teens," Branch said. "This is part of their attempt to minimize that risk as much as possible."
The lawsuit pressure is real, and OpenAI looks cornered. Multiple wrongful death suits, one of them centered on a teenage boy who killed himself. Character.AI got dragged into similar cases. Musk's Grok caught heat over interactions with minors. Last September the FTC sent compulsory orders to OpenAI, Alphabet, Meta, and xAI, all of them getting the same questions: how do you handle child safety, what restrictions exist, how do you flag harmful interactions.
Research from ParentsTogether Action and the Heat Initiative that same month documented hundreds of cases where AI companion bots engaged in grooming behavior and sexualized roleplay with users posing as children. The regulatory and litigation environment has turned hostile. Companies need to show they have protocols in place, that they're screening people out.
Age prediction is that protocol. Whether it works accurately matters less, in a legal defense, than whether it exists. "We need to have some way to show that we have protocols in place," as Branch characterized the company mindset.
OpenAI isn't the first to try this. Roblox rolled out similar Persona-based age checks late last year, adding verification requirements and limiting who could talk to whom. That came after years of criticism over predatory behavior on the platform, reports that wouldn't stop.
Anna Washenko, covering this for Engadget, put it plainly: "Considering how well a similar change has been going over at Roblox, another platform with a shaky history around protecting minors, it seems probable that underage users will find ways to circumvent the existing tools if they want to use ChatGPT as adults."
Age gates have a structural problem. They screen out people who weren't trying to get through. Teenagers with motivation will find workarounds, and those workarounds aren't hard. Fake IDs. A parent's login credentials. The friction lands on rule-followers while the kids who actually want unrestricted access just route around it.
OpenAI's system adds a new layer: behavioral prediction can be gamed by changing how you use the product. Log in during business hours instead of midnight. Avoid asking about homework. Project professionalism in your prompts. It's teaching to a test. Once teenagers know the grading rubric, they'll optimize for it. The signals are learnable, and teenagers who want unrestricted access will learn them.
All of this machinery exists to enable something OpenAI hasn't shipped yet. Simo said adult mode would debut in Q1 2026. Altman has discussed allowing mature content, which current ChatGPT policies prohibit except in limited creative contexts. The company wants to open the door to NSFW material for users who verify their age.
You can see the business logic. OpenAI's annualized revenue crossed $20 billion in 2025, up from $6 billion in 2024. The company announced last week it would begin testing ads on ChatGPT's free tier. Cornered by lawsuits on one side, emboldened by growth on the other, OpenAI is moving fast. Content restrictions limit engagement for adult users who want features competitors might offer.
Age prediction is the gate that allows the expansion. Without a way to sort minors from adults, OpenAI couldn't responsibly introduce explicit content, or at least couldn't claim to have tried. With the gate in place, even an imperfect one, the legal exposure drops.
OpenAI is building infrastructure to offer more while claiming it protects the vulnerable. Maybe both things are true. The question privacy advocates keep pressing: does the protection work well enough to justify the surveillance it requires? The behavioral profiling, the data collection, the third-party verification that routes your selfie and government ID through someone else's servers.
OpenAI didn't say how many users it expects the system to affect. It didn't disclose error rates or demographic breakdowns. It didn't explain what happens to the behavioral data it collects, beyond saying Persona deletes ID verification materials after seven days.
Bhatia's concern was practical. "Users need to know more about what's going to happen in those circumstances and should be able to access their assigned age and change it easily when it's wrong."
The company promised continuous improvement. It assembled expert councils and cited academic research. It partnered with the American Psychological Association, ConnectSafely, and the Global Physicians Network. The boxes are checked.
The EU rollout will follow in the coming weeks, delayed to account for regional requirements: GDPR, plus whatever additional scrutiny European regulators apply to behavioral profiling systems that estimate sensitive personal attributes.
Eight hundred million users. An unknown number of misclassifications. Adult mode coming soon.
OpenAI is betting that the upside of expanded content outweighs the friction costs and the inevitable false positives. Parents might appreciate the parental controls, the quiet hours settings, the distress notifications. Privacy advocates will keep asking about accuracy and transparency and the data that makes all of this run.
The company has protocols now. That's the point.
Q: What behavioral signals does OpenAI's age prediction use?
A: The system analyzes account age, typical login times, usage patterns over time, and the user's stated age at sign-up. OpenAI says it combines these signals to estimate whether an account belongs to someone under 18, though specific weightings haven't been disclosed.
Q: What happens if I'm an adult incorrectly flagged as a minor?
A: You can verify your age through Persona, a third-party service. The process requires a live selfie and a government-issued ID. Persona confirms your birthdate, then deletes your data within seven days. OpenAI says this pathway will always be available.
Q: What content gets restricted for users flagged as under 18?
A: ChatGPT blocks graphic violence, self-harm depictions, sexual or romantic roleplay, viral challenges that could encourage risky behavior, and content promoting extreme beauty standards or unhealthy dieting. Users can still access educational and creative features.
Q: When will OpenAI's "adult mode" launch?
A: Fidji Simo, OpenAI's CEO of applications, said in December 2025 that adult mode would arrive in early 2026. The feature will allow verified adults to access mature content that current ChatGPT policies prohibit.
Q: Is the age prediction system available worldwide?
A: The system launched globally on January 21, 2026, except for the European Union. The EU rollout will follow in the coming weeks to account for GDPR requirements and regional regulations around behavioral profiling.