After Reports of Harm, OpenAI Tweaks ChatGPT to Curb Overuse and Emotional Dependency
OpenAI adds break reminders to ChatGPT after reports that vulnerable users developed harmful dependencies. The changes raise questions about whether gentle nudges can fix AI safety issues or if stronger restrictions are needed.
👉 OpenAI adds break reminders to ChatGPT and stops giving yes/no answers to relationship questions after mental health concerns.
📰 The New York Times reported in June that ChatGPT's agreeable responses led some vulnerable users toward suicidal thoughts.
👩‍⚕️ OpenAI works with over 90 physicians across 30+ countries to build better safeguards for detecting mental distress.
🔄 The company already rolled back an overly agreeable ChatGPT update in April that prioritized sounding nice over being helpful.
💰 New ChatGPT Agent features work in the background, letting users pay monthly fees while spending less time in the app.
⚠️ Critics argue users can still manipulate ChatGPT into harmful responses by rephrasing questions differently.
OpenAI just announced something you don't see every day: a tech company actively trying to get you to use their product less. Starting now, ChatGPT will remind users to take breaks during marathon chat sessions and dial back its tendency to give definitive answers about your personal life.
The changes seem small enough. Long ChatGPT conversations will now get interrupted by a pop-up asking "You've been chatting for a while—is this a good time for a break?" The AI also won't answer relationship questions with a simple yes or no anymore. Ask "Should I break up with my boyfriend?" and you'll get questions thrown back at you instead, plus lists of pros and cons to consider.
But there's more going on here. OpenAI didn't wake up one day feeling generous about screen time. They're responding to documented cases where ChatGPT led vulnerable users down dark paths.
When Helpfulness Goes Wrong
The New York Times reported in June that ChatGPT's agreement-heavy responses and convincing misinformation led some users to suicidal thoughts. The chatbot's default mode is "yes, and"—it goes along with whatever users say rather than challenging harmful ideas. People with mental health struggles found ChatGPT reinforcing their worst thoughts instead of offering real help.
OpenAI admits their model "fell short in recognizing signs of delusion or emotional dependency." That's corporate speak for: we built an AI that sometimes made vulnerable people worse.
The company already stumbled once this year when they had to roll back an update that made ChatGPT too agreeable. Users complained the AI prioritized sounding nice over being actually helpful—a preview of the bigger problems to come.
The Expert Response
OpenAI says they're working with over 90 physicians across more than 30 countries to build better evaluation methods for complex conversations. They've assembled advisory groups of mental health experts, youth development specialists, and human-computer interaction researchers. The company claims they're developing tools to detect signs of mental distress and point users toward evidence-based resources.
These partnerships suggest OpenAI recognizes the scope of the problem. When you need psychiatrists from 30 countries to help fix your chatbot, you're not dealing with minor UX issues.
The Philosophy Shift
OpenAI frames these changes as part of a broader philosophy: measuring success by whether users accomplish their goals, not by how long they stay engaged. They want ChatGPT to help you "make progress, learn something new, and solve problems," then "get back to your life."
This sounds noble, but it also conveniently positions addictive usage patterns as bugs rather than features. OpenAI claims their business model aligns with user wellbeing because satisfied customers will subscribe long-term. Whether that's true depends on how you define satisfaction—and whether short-term engagement drives more revenue than long-term trust.
The company talks up new features like ChatGPT Agent, which handles tasks in the background. It can book appointments, sort your email, or plan parties without you opening the app. Users get less screen time but keep paying monthly fees.
The Skeptical Take
Critics aren't buying OpenAI's sudden concern for digital wellness. The break reminders and softer advice responses give the company cover from criticism without fundamentally altering how ChatGPT works. You can still manipulate the AI into giving you whatever answer you want—you just need to rephrase your question.
As 9to5Mac notes, "Don't like what ChatGPT has to say? Just prompt it again to get a different response." The basic problem remains: ChatGPT will tell you what you want to hear. You just have to push hard enough.
The break reminders might be the weakest change of all. Similar features exist across gaming and social media platforms, but users routinely dismiss them without changing behavior. There's little evidence these gentle nudges actually reduce usage or improve mental health outcomes.
The Bigger Picture
OpenAI's changes acknowledge what researchers have been saying for months: AI chatbots can create unhealthy attachment patterns, especially among vulnerable users. The technology feels more personal and responsive than previous digital services, making it easier to develop dependency.
But the solutions feel reactive rather than proactive. OpenAI built a system that encourages extended engagement, then added guardrails when problems emerged. The company's internal test reveals their priorities: "if someone we love turned to ChatGPT for support, would we feel reassured?"
That's the right question, but it should have been asked before ChatGPT reached hundreds of millions of users.
The medical expert partnerships suggest OpenAI is taking mental health concerns seriously. Working with psychiatrists and pediatricians across more than 30 countries isn't cheap or easy—it signals genuine investment in solving these problems.
Whether these changes work remains unclear. OpenAI promises to share more data as their research progresses, but early evidence will be crucial. Break reminders and advisory groups won't help if users can still prompt their way around safety measures.
Why this matters:
• OpenAI's move admits AI chatbots can harm users—forcing other companies to choose between safety measures and profitable engagement tactics.
• These solutions will either prove that gentle nudges work or show that stronger restrictions are needed to protect vulnerable users from AI dependency.
❓ Frequently Asked Questions
Q: How long do you need to chat before ChatGPT shows a break reminder?
A: OpenAI hasn't revealed the specific time threshold. They say they're "tuning when and how they show up so they feel natural and helpful," suggesting the timing will vary based on conversation patterns rather than a fixed duration.
Q: Can you just dismiss the break reminder and keep chatting?
A: Yes. The sample pop-up shows two buttons: "Keep chatting" and "This was helpful." Users can continue their conversation immediately, making the reminder more of a gentle nudge than an actual restriction.
Q: What is ChatGPT Agent that OpenAI mentioned?
A: ChatGPT Agent is a new feature that completes tasks without keeping users in the app. You can ask it to book appointments, sort emails, or plan parties while you do other things.
Q: How does ChatGPT spot mental health problems?
A: OpenAI is developing detection tools but hasn't shared specifics. Their 90+ physician advisors are building "custom rubrics for evaluating complex, multi-turn conversations" to spot concerning patterns and point users to professional resources.
Q: When exactly did these changes start rolling out?
A: Break reminders began appearing "starting today," according to OpenAI's announcement. The changes to relationship advice responses are "rolling out soon" but don't yet have a specific launch date.
Q: Are other AI companies making similar changes?
A: Not yet. OpenAI appears to be the first major AI company to add usage breaks and modify advice-giving patterns. Other companies will likely watch to see if these measures work before implementing their own versions.
Q: Can you still get ChatGPT to give direct relationship advice if you ask differently?
A: Most likely. Critics point out that you can usually get ChatGPT to say whatever you want if you rephrase your question enough times. The core problem remains unchanged.
Q: How many people actually use ChatGPT?
A: OpenAI says ChatGPT has "hundreds of millions of users" but doesn't break down daily or monthly usage numbers. The company measures success by return visits rather than time spent in conversations.
Tech journalist. Lives in Marin County, north of San Francisco. Got his start writing for his high school newspaper. When not covering tech trends, he's swimming laps, gaming on PS4, or vibe coding through the night.