The Class Divide in Teen AI: Who Gets the Tutor, Who Gets the Companion

64% of teens use AI chatbots. But which ones? Higher-income teens cluster around ChatGPT for productivity. Lower-income teens are twice as likely to use Character.ai—the companion bot facing wrongful death lawsuits. The technology is sorting kids by class.


Two-thirds of American teenagers have now used AI chatbots. Pew Research confirmed that figure this month, polling 1,458 teens aged 13 to 17 across the country. Most coverage stopped at the headline. The distribution underneath tells a different story, one about which kids get which tools, and what that sorting mechanism reveals about AI's actual role in adolescent life.

ChatGPT commands 59% of teen users. Gemini trails at 23%, Meta AI at 20%. These numbers suggest OpenAI's dominance extends to the demographic that will shape AI's long-term cultural position. Yet the more revealing figure sits further down Pew's appendix: Character.ai, the companion chatbot currently facing wrongful death lawsuits, captures 14% of teens in households earning under $75,000 annually. Among teens in higher-income homes, that figure drops to 7%.

The inversion runs in both directions. Wealthier teens, those in households above $75,000, use ChatGPT at 62%. Below that line, the number falls to 52%. Ten percentage points. In a technology sector that markets itself on democratization, the gap suggests something closer to stratification. Kids with economic advantages gravitate toward the productivity tool. Kids without them are twice as likely to use the emotional companion.

The Breakdown

• Lower-income teens use Character.ai at twice the rate of higher-income teens (14% vs 7%), while ChatGPT skews toward wealthier households.

• Black and Hispanic teens report being online "almost constantly" at roughly double the rate of White teens, a gap Pew documented but didn't explain.

• OpenAI's 0.15% suicide conversation rate translates to 1.2 million people weekly discussing self-harm with the chatbot.

• TikTok's "almost constant" teen usage rose from 16% to 21% despite a year of regulatory threats, suggesting policy announcements accomplish little on their own.

The Companion Problem

Character.ai lets users create and interact with AI personas. Some mimic celebrities or fictional characters. Others present as friends, romantic partners, therapists. The platform's appeal to younger users makes sense. It offers unlimited conversation with entities that never tire, never judge, never have somewhere else to be.

It also sits at the center of multiple lawsuits alleging its chatbots contributed to teen suicides. A Florida family claims their 14-year-old son, Sewell Setzer III, developed an emotional dependency on a Character.ai bot before taking his own life. The company briefly banned teens entirely before pivoting to a restricted "Stories" format that resembles choose-your-own-adventure rather than open conversation.

The income correlation demands attention. Lower-income teens flock to the companion chatbot at double the rate of wealthier peers. Higher-income teens cluster around ChatGPT. Pew didn't ask why; the survey wasn't built to capture motivation. But the pattern tracks with documented disparities in mental health access, extracurricular programming, and adult availability. A chatbot that pretends to care costs nothing. No copay. No waitlist.

Dr. Nina Vasan, director of Stanford's Brainstorm lab for mental health innovation, told TechCrunch that AI companies bear responsibility regardless of original design intent. "Even if their tools weren't designed for emotional support, people are using them in that way, and that means companies do have a responsibility to adjust their models to be solving for user well-being." The statement applies to all chatbot makers. The usage data suggests it applies most urgently to Character.ai's actual user base.

Race and the Connectivity Gap

Black teens report being online "almost constantly" at 55%. Hispanic teens hit 52%. White teens: 27%. That's roughly a two-to-one ratio, and nobody's offering explanations. Pew's Michelle Faverio acknowledged the pattern in comments to TechCrunch but declined to speculate on causes. "This pattern is consistent with other racial and ethnic differences we've seen in teen technology use."

Consistency isn't explanation. The gap persists across platforms; TikTok, YouTube, and Instagram all show the same skew. Black teens use Instagram at 82%, Hispanic teens at 69%, White teens at 55%. For "almost constant" TikTok use, Black teens hit 35%, Hispanic teens 23%, and White teens just 11%.

The chatbot numbers follow suit. Daily AI chatbot use runs at 35% for Black teens, 33% for Hispanic teens, 22% for White teens. Gemini and Meta AI show particularly sharp divergences. Black teens use Gemini at 32% compared to White teens at 17%. Meta AI: 32% versus 15%.

Coverage tends to present these figures and move on. The reluctance to interpret makes sense: speculation about racial differences in technology use can slide into stereotype. But the absence of analysis creates its own problem. A 28-percentage-point gap in constant internet use demands investigation, not acknowledgment followed by silence.

Several non-competing hypotheses exist. Differences in smartphone versus computer access could push mobile-first users toward constant connectivity. Variations in household structure, after-school programming, or neighborhood safety might increase time spent on devices. Economic factors that correlate with race could amplify effects already visible in income-based breakdowns. None of these explanations are flattering to American social infrastructure. All of them merit examination.

TikTok Grew During Its Own Ban Threat

January 2025 brought TikTok's potential death sentence. The Supreme Court had upheld legislation requiring ByteDance to sell or shut down. Deadlines loomed. Cable news ran countdown clocks.

Teen usage went up.

"Almost constant" TikTok use among teens climbed to 21% from 16% in 2022. Overall penetration reached 68%, up five points from the year before. A platform staring down an existential regulatory threat managed to tighten its grip on the exact demographic everyone claimed to be protecting.

The lesson isn't subtle. Regulatory uncertainty, including Supreme Court cases, congressional hearings, and presidential statements, did not translate to behavioral change among users. Teens either didn't believe the ban would happen, didn't care, or lacked alternatives they considered equivalent. Probably all three.

For policymakers considering similar approaches to AI regulation, the TikTok precedent suggests announcements accomplish little. Australia implemented an actual ban on social media for under-16s, enforcement beginning December 2025. Whether that produces different results than American saber-rattling remains to be seen. The preliminary evidence from TikTok indicates that teens route around regulatory theater.

Small Percentages, Massive Numbers

OpenAI disclosed that 0.15% of ChatGPT's active users have conversations about suicide each week. The company offered this figure in the context of safety improvements, a small fraction suggesting manageable risk.

Run the math on 800 million weekly active users. That 0.15% translates to 1.2 million people discussing suicide with a chatbot. Every week. Not annually.
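The arithmetic is simple enough to check. A minimal sketch, using the figures as reported in the article:

```python
# Back-of-envelope check on OpenAI's disclosed rate.
# Figures below are the ones cited in the article, not independent data.
weekly_active_users = 800_000_000    # reported ChatGPT weekly actives
suicide_conversation_rate = 0.0015   # 0.15% of active users per week

affected_per_week = weekly_active_users * suicide_conversation_rate
print(f"{affected_per_week:,.0f} users per week")  # 1,200,000 users per week
```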

The Raine and Lacey families both allege ChatGPT provided their teenage sons with detailed instructions on how to hang themselves. OpenAI's defense in the Raine case argues the sixteen-year-old circumvented safety features and violated terms of service. The company has not yet responded to the Lacey complaint.

A legal strategy built on user responsibility may succeed in court. It does not address the population-level reality. Over a million people per week are having conversations about ending their lives with an AI system. Some percentage of those conversations will involve minors. Some percentage will involve people in genuine crisis. Some percentage will receive responses that fail to help or actively harm.

Scale transforms category. A 0.15% failure rate in aircraft manufacturing would be catastrophic. In pharmaceutical trials, it would halt development. In AI chatbots, it generates over a million affected users weekly and climbing as adoption expands.

Claude's Invisible Position

Anthropic built its brand on AI safety. The company's Claude model emphasizes constitutional AI principles, alignment research, and responsible development. Among U.S. teens, Claude commands 3% market share.

Three percent. Behind ChatGPT (59%), Gemini (23%), Meta AI (20%), Copilot (14%), and Character.ai (9%). The safety-focused model trails every competitor Pew measured.

Several interpretations exist. Claude lacks the distribution advantages of competitors embedded in search engines (Gemini), social platforms (Meta AI), and operating systems (Copilot). OpenAI's first-mover advantage created brand recognition that Claude's later launch couldn't overcome. Anthropic's enterprise focus may have deprioritized consumer visibility.

But another reading presents itself. Safety-forward positioning may simply not resonate with teenage users. The features Anthropic emphasizes (careful guardrails, thoughtful refusals, reduced hallucination) solve problems that matter more to institutions than to individuals. A teen looking for homework help or entertainment may not distinguish between "safer" and "less capable" or "more annoying."

This creates a strategic problem for the AI safety movement broadly. If responsible development correlates with market irrelevance among young users, the models that shape teen expectations and habits will be the ones built with different priorities. Safety becomes a niche concern rather than a competitive advantage.

What "Constant" Actually Means

Four in ten American teens say they're online "almost constantly." Down slightly from 46% in 2024. Still nearly double the 24% figure from a decade ago. Pew can't tell us what those teens are doing online. A kid deep in AP Chemistry research and a kid doom-scrolling TikTok at 2 AM both check the same box.

Eileen Kennedy-Moore, a psychologist in Princeton who was not involved in the Pew research, identified the core question: "It's not that watching any one YouTube video is going to turn them into a pumpkin, but if they are on it almost constantly, what are they missing?"

Opportunity cost. That's the frame that matters here. Being online all day isn't poison. But it crowds out something else. Sleep takes the first hit, almost always. A study in Pediatrics this month tracked outcomes for kids who received smartphones by age 12. Depression rates were higher. So were obesity rates. The kids slept less, too, reporting inadequate rest at elevated levels compared to peers who got phones later. Exercise disappears next. So does unstructured time with other humans, the messy kind where nobody mediates the interaction and you have to figure things out yourself.

Chatbots add a new dimension to this competition. Unlike social media, which at least involves human content creators, AI conversations simulate relationship without requiring one. Kennedy-Moore noted that chatbots offer "a frictionless dynamic that does not help teens develop key social skills." The friction in human relationships (negotiation, disappointment, repair) isn't a bug to be eliminated. It's the mechanism through which social competence develops.

What the Numbers Don't Show

Pew's survey captures frequency and platform choice. It cannot capture quality of interaction, developmental impact, or long-term outcomes. A teen using ChatGPT daily for calculus help differs meaningfully from one using it daily for emotional processing. The 28% daily usage figure contains both.

The demographic breakdowns point toward patterns worth watching over time. Lower-income teens gravitate toward companion chatbots. Higher-income teens use productivity tools. If that holds, AI could widen existing gaps rather than closing them. One kid gets help on college essays. Another kid gets a substitute friend. Both show up in the data as "AI users." The experiences share nothing but the label.

Similarly, the racial gaps in constant connectivity and daily chatbot use will either narrow, persist, or widen over time. Absent intervention, persistence seems most likely. The patterns Pew observed in 2025 match patterns from earlier surveys about social media. Whatever forces produce these disparities, they show no sign of spontaneous correction.

The policy landscape remains fragmented. Australia bans social media for under-16s. U.S. senators propose banning AI chatbots for minors entirely. Individual states pass age verification requirements. AI companies implement parental controls of varying rigor. None of these approaches have demonstrated effectiveness at scale. The TikTok example suggests teens adapt to regulation faster than regulation adapts to teens.


Why This Matters

  • For AI companies: The Character.ai income correlation suggests companion chatbots disproportionately serve vulnerable populations, creating concentrated liability risk and raising questions about whether safety measures adequately protect users with fewer alternative resources.
  • For parents and educators: The 28% daily usage rate arrived without clear guidance on appropriate boundaries, age minimums, or use cases. Schools face immediate decisions about AI integration while longitudinal research remains years away.
  • For policymakers: TikTok's growth during regulatory uncertainty demonstrates that announcements without enforcement change nothing. Any AI regulation aimed at protecting minors will require mechanisms beyond congressional hearings and platform promises.

Frequently Asked Questions

Q: What makes Character.ai different from ChatGPT?

A: ChatGPT is built for tasks: answering questions, writing code, helping with homework. Character.ai lets users create AI personas that simulate relationships. Users can design bots that act as friends, romantic partners, or fictional characters. The platform encourages ongoing emotional conversations rather than one-off queries. This distinction matters because 14% of lower-income teens use Character.ai compared to 7% of higher-income teens.

Q: How reliable is the Pew Research data on teen AI use?

A: Pew surveyed 1,458 U.S. teens aged 13-17 between September 25 and October 9, 2025. The sample was weighted for age, gender, race, ethnicity, and household income to represent the broader teen population. Teens were recruited through their parents via Ipsos KnowledgePanel, a probability-based web panel. The margin of error is plus or minus 3.3 percentage points.
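Pew's reported ±3.3-point margin is larger than the textbook figure for a sample of this size, because weighting a probability panel inflates variance. A rough comparison, assuming the standard worst-case proportion p = 0.5:

```python
import math

# Rough margin-of-error check for Pew's n = 1,458 sample.
# Assumes simple random sampling at p = 0.5 (the worst case);
# the reported +/-3.3 points is larger because panel weighting
# inflates variance (the "design effect").
n = 1458
p = 0.5
moe_srs = 1.96 * math.sqrt(p * (1 - p) / n)   # ~0.026, i.e. +/-2.6 points
design_effect = (0.033 / moe_srs) ** 2        # implied by the reported +/-3.3

print(f"SRS margin: +/-{moe_srs:.1%}, implied design effect: {design_effect:.2f}")
```

An implied design effect around 1.65 is typical for weighted panel surveys, so the reported margin is internally consistent with the sample size.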

Q: What lawsuits are AI chatbot companies currently facing?

A: At least three families have filed wrongful death lawsuits. The Setzer family sued Character.ai after their 14-year-old son died by suicide following extensive conversations with a chatbot. The Raine and Lacey families filed separate suits against OpenAI, alleging ChatGPT provided their teenage sons with detailed instructions for self-harm. OpenAI argues users who circumvent safety features violate terms of service.

Q: How does teen AI chatbot use compare to adult use?

A: Teen adoption appears higher. Pew found 64% of teens use AI chatbots, with 28% using them daily. An NBC News poll from June 2025 found only 14% of U.S. adults used chatbots "very often." Direct comparison is difficult due to different question wording, but teens are clearly adopting faster. ChatGPT dominates both groups, though teens show higher Character.ai usage than the general population.

Q: What safety measures have AI companies implemented for teen users?

A: OpenAI announced parental controls and age restrictions following the lawsuits. Character.ai briefly banned users under 18 before pivoting to a restricted "Stories" format for minors that limits open-ended conversation. Meta said it will let parents block teens from AI character chats on Instagram starting in 2025. Anthropic's Claude includes constitutional AI guardrails but holds only 3% teen market share, suggesting safety features don't drive adoption.

