OpenAI's new expert council will advise on AI safety but won't decide anything. The timing reveals the strategy: FTC inquiry in September, wrongful death lawsuit in August, council formalized last week.
Advisory input, not binding authority. The compressed timeline tells the story.
OpenAI on Tuesday unveiled an eight-member Expert Council on Well-Being and AI, tasked with advising on how ChatGPT and Sora should handle emotionally sensitive use. The group held its first in-person session last week and will consult on guardrails and model behavior across products.
The context is blunt. On September 11, the Federal Trade Commission opened a probe into companion chatbots and teen safety. On August 26, a California family filed a wrongful-death lawsuit alleging ChatGPT contributed to their 16-year-old’s suicide. Those dates set the stage.
The Context
• OpenAI's Expert Council on Well-Being and AI will advise on ChatGPT safety but holds no binding authority over company decisions
• Timeline shows reactive posture: FTC inquiry September 11, wrongful death lawsuit August 26, council formalized mid-October
• Eight experts shaped parental control message tone but not core architecture like detection thresholds or false-positive rates
• Advisory structures provide academic credibility while preserving operational control—a model other AI labs may replicate under scrutiny
OpenAI casts the council as collaborative leadership: it will “help guide,” “pose questions,” and “help define” healthy interactions for all ages. Then comes the tell: “We remain responsible for the decisions we make.”
That is advisory architecture, not governance. The experts provide input; OpenAI keeps authority. The structure can further safety work and supply academic cover amid regulatory heat at the same time. It is advice, not veto power.
The roster spans youth development, psychiatry, psychology, and human-computer interaction. Members include David Bickham of Boston Children’s Hospital, Munmun De Choudhury of Georgia Tech, Andrew Przybylski of Oxford, and others with long records studying technology’s effects on mental health. Their credentials are real. Their power is consultative.
OpenAI says it tapped individual council members while drafting parental controls, including wording for alerts when a teen appears in “acute distress.” Experts helped shape tone so messages feel caring to teens and families.
That’s implementation advice—language, not thresholds. It leaves unaddressed the core choices: when a notification triggers, how accuracy is measured, and who sets acceptable false-positive rates. That distinction matters. Scope reveals influence.
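To see why scope matters, consider what setting a notification threshold actually involves. The sketch below is purely illustrative, assuming a hypothetical distress classifier with invented risk scores and labels; nothing in it reflects OpenAI's systems. It shows the tradeoff the article points to: lowering the alert threshold catches more genuine crises but flags more ordinary conversations, and someone inside the company decides where that line sits.

```python
# Hypothetical sketch: how an alert threshold trades recall against false positives.
# The scores, labels, and thresholds are invented for illustration; they are not
# drawn from OpenAI's systems or any disclosed data.

# (risk_score_from_model, actually_in_distress) for a toy batch of conversations
conversations = [
    (0.95, True), (0.80, True), (0.70, False), (0.60, True),
    (0.55, False), (0.40, False), (0.30, True), (0.10, False),
]

def alert_stats(threshold: float) -> dict:
    """Count caught cases and false alarms if alerts fire at or above the threshold."""
    tp = sum(1 for score, distress in conversations if score >= threshold and distress)
    fp = sum(1 for score, distress in conversations if score >= threshold and not distress)
    fn = sum(1 for score, distress in conversations if score < threshold and distress)
    tn = sum(1 for score, distress in conversations if score < threshold and not distress)
    return {
        "threshold": threshold,
        "recall": tp / (tp + fn) if tp + fn else 0.0,                # share of real distress caught
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,   # share of safe chats flagged
    }

for t in (0.3, 0.5, 0.7, 0.9):
    print(alert_stats(t))
```

Choosing which row of that printout to ship is a threshold decision. The council's documented contribution so far is the wording of the alert, not the row.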
The company is also building an age-prediction system to apply teen settings automatically, and it names Sora—its video generator—alongside ChatGPT as a focus area. Future tense dominates here. The council is being embedded as new features roll out. Timing is the message.
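The age-prediction system is described only in broad strokes, so here is a hedged sketch of what "apply teen settings automatically" could mean in principle. Every name, field, and number below (AgeEstimate, SafetySettings, the 0.9 confidence cutoff, the default-to-teen fallback) is an assumption made for illustration, not a detail OpenAI has disclosed.

```python
# Illustrative only: one way a predicted age could gate protective settings.
# The dataclass fields, confidence threshold, and default-to-teen fallback are
# assumptions for this sketch, not OpenAI's design.
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    predicted_age: int
    confidence: float  # 0.0 to 1.0

@dataclass
class SafetySettings:
    distress_alerts: bool
    restricted_content: bool

TEEN_SETTINGS = SafetySettings(distress_alerts=True, restricted_content=True)
ADULT_SETTINGS = SafetySettings(distress_alerts=False, restricted_content=False)

def settings_for(estimate: AgeEstimate, min_confidence: float = 0.9) -> SafetySettings:
    """Apply teen settings when the model predicts a minor, or when it is unsure."""
    if estimate.confidence < min_confidence:
        return TEEN_SETTINGS  # uncertain prediction falls back to the protective default
    return TEEN_SETTINGS if estimate.predicted_age < 18 else ADULT_SETTINGS

print(settings_for(AgeEstimate(predicted_age=16, confidence=0.97)))
print(settings_for(AgeEstimate(predicted_age=25, confidence=0.55)))
```

In any real version, the interesting design choice is the fallback: whether an uncertain prediction defaults to the restrictive setting, and at what confidence level. That is exactly the kind of threshold question the article says stays in-house.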
From the outside, the sequence looks reactive. FTC inquiry in September. Parental controls announced September 29. Council formalized in mid-October. OpenAI’s post doesn’t mention the probe or litigation; journalists do. Both narratives can be true.
Policymakers are seeking hard constraints: enforceable standards, external audits, and escalation paths when systems fail. An advisory body that “poses questions” won’t meet that bar by itself. It signals engagement without ceding operational control.
OpenAI also references a “Global Physician Network,” a multidisciplinary subset of clinicians who will test responses and shape policies. That network is described briefly, while the council earns full bios and a clean narrative arc. The asymmetry suggests parallel audiences. One public, one internal.
The council will hold “regular check-ins” and “recurring meetings” on complex scenarios and guardrails. Meanwhile, ChatGPT handles millions of conversations daily, some with vulnerable users in crisis.
Cadence meets volume. That creates a gap. A part-time advisory layer can shape policy and messaging; it cannot supervise real-time deployments or enforce recommendations across product teams. Feedback cycles are slower than usage cycles. That’s a structural fact.
OpenAI’s posture acknowledges limits. The company says current safeguards often work but can degrade over long, emotionally charged exchanges. It is routing some risk-sensitive prompts to more deliberate handling and promises further adjustments over the next quarter. Promises invite follow-through.
Expert councils do several things at once. They pull real expertise into the room. They show regulators that outside voices are heard. They also create distance between decisions and outcomes.
If protections fall short, a company can point to consultation as evidence of diligence. If recommendations are watered down or ignored, reputational risk tilts toward the advisors who lent their names, not the product managers who shipped. Accountability becomes asymmetric. Words like “advise,” “help,” and “pose” do that work. Words like “approve” and “mandate” don’t appear.
Three signals will tell us whether this is substance or cover. First, whether OpenAI publishes substantive recommendations from the council—and its accept/reject rationale. Second, how disagreements are handled when expert guidance collides with product velocity or growth targets. Third, whether other labs adopt similar bodies before scrutiny peaks, or only after it does. Watch the order, not just the optics.
The council could still matter. It can force clearer definitions of “healthy interaction,” tighten escalation logic, and push rigor on measurement. It can also fade into periodic roundtables while product decisions march on. The difference will show up in documentation, disclosures, and behavior under stress. Results, not rhetoric.
Why this matters:
• Advisory councils let AI companies show engagement with outside experts while keeping every binding decision in-house, which is unlikely to satisfy policymakers pushing for enforceable standards, external audits, and escalation paths.
• If OpenAI's structure becomes the industry template, chatbot safety oversight will hinge on what companies choose to publish, accept, and ship, not on what advisors recommend.
Q: What is the wrongful death lawsuit against OpenAI about?
A: A California family filed suit on August 26, 2025, alleging ChatGPT contributed to their 16-year-old son's suicide. The lawsuit claims the chatbot engaged in harmful interactions with the teenager. OpenAI has acknowledged that its safeguards can degrade over long, emotionally charged exchanges. The litigation adds pressure alongside the FTC's September inquiry into chatbot safety for minors.
Q: What did the FTC inquiry specifically investigate?
A: On September 11, 2025, the Federal Trade Commission launched an investigation into companion chatbots and teen safety, examining multiple tech companies including OpenAI. The inquiry focuses on whether chatbots like ChatGPT could negatively affect children and teenagers through design features, content moderation, and safety controls. The probe remains active.
Q: What parental controls did OpenAI launch?
A: OpenAI rolled out parental controls on September 29, 2025. Parents receive notifications if their child shows signs of "acute distress" in conversations. The company is also building an age-prediction system to automatically apply teen-appropriate settings for users under 18. The expert council advised on notification language and tone but not detection thresholds or accuracy standards.
Q: Do other AI companies have similar safety councils?
A: No major AI company has announced a comparable well-being-focused advisory council. Anthropic, Google DeepMind, and Meta have internal safety teams and external research partnerships, but not formal councils dedicated to mental health impacts. OpenAI's structure may set a template for competitors facing similar regulatory scrutiny, though timing suggests reactive rather than proactive formation.
Q: What is the Global Physician Network mentioned in the article?
A: OpenAI's announcement points to a broader pool of clinicians it consults and highlights a "multidisciplinary subset" of mental health clinicians and researchers within it who will test ChatGPT responses and shape policies. The company provided no details on when the network formed, its size, or its selection criteria. Unlike the expert council's public bios and credentials, the physician network remains opaque, suggesting different communication strategies for different audiences.
