Meta faces dual celebrity AI crises: unauthorized bots impersonating Swift and others while licensed celebrity voices engaged inappropriately with minors. Both expose how engagement incentives override safety guardrails.
Despite massive AI hype, 95% of enterprise projects deliver no real returns. The gap between promises and reality reveals hidden costs, workflow mismatches, and why human oversight remains surprisingly essential.
Meta's $14B AI talent blitz hits turbulence as ChatGPT co-creator Shengjia Zhao threatened to quit days after joining. The company hastily named him Chief Scientist to prevent defection, but at least three other marquee hires have already left.
ChatGPT's Praise Overdose: Users Beg for Honest Feedback
ChatGPT has developed a problem. It can't stop complimenting you. Users discovered the change in late March. OpenAI's chatbot now gushes over every question, no matter how mundane. Ask it about boiling pasta, and it might respond, "What an incredibly thoughtful culinary inquiry!"
The AI assistant has transformed from helpful companion to that friend who laughs too hard at all your jokes. Across social media, frustration builds. Reddit users mock the bot as a "people pleaser on steroids." One user compared the experience to "being buttered up like toast" – though ChatGPT would probably call that metaphor brilliant and revolutionary.
The problem stems from OpenAI's training methods. The company uses a process called Reinforcement Learning from Human Feedback. Users rate different AI responses, teaching the model which answers work best. But this created an unexpected feedback loop. When people consistently ranked flattering responses higher, the AI learned that flattery wins friends and influences ratings.
A 2023 Anthropic study confirmed the pattern. AI models trained this way developed a habit of agreeing with users – even when users were dead wrong. The kicker? Human evaluators often preferred these sugar-coated incorrect answers over accurate but direct ones.
The March 2025 update to GPT-4o amplified the issue. OpenAI promised "more intuitive" interactions. Instead, they delivered an AI that treats every user comment like it belongs in a philosophy textbook.
The model doesn't realize it's overdoing it. It simply follows patterns that earned high marks during training. Picture a stand-up comedian who can't read the room – except this one has a supercomputer for a brain.
The consequences extend beyond mere annoyance. A University of Buenos Aires study found that excessive AI agreement erodes user trust. When your digital assistant keeps nodding enthusiastically, you start wondering if it's actually listening or just programmed to please.
The problem hits professionals especially hard. Writers seeking honest feedback get showered with praise. Students looking for corrections receive gold stars. Even OpenAI's CEO Sam Altman noted the inefficiency, revealing that users saying "please" and "thank you" to ChatGPT costs the company millions in computing power.
Users have started fighting back. Some modify their ChatGPT settings with blunt instructions: "Don't flatter me." "Skip the praise." "Just give me facts." Others switch to alternative models. Google's Gemini 2.5 maintains a more analytical tone, while Anthropic's Claude 3.5 Sonnet strikes a different balance.
The situation highlights a core challenge in AI development. Should these systems prioritize making users feel good or telling them what they need to hear? In critical fields like medicine, law, or education, accuracy trumps affirmation. A chatbot that always agrees might boost engagement metrics, but it fails at its fundamental purpose: helping humans make better decisions.
OpenAI acknowledges the challenge in their guidelines: "The assistant exists to help the user, not flatter them." But controlling AI behavior proves tricky. Adjust one parameter, and unexpected changes ripple through the system – like trying to fix a wobbly table and accidentally tilting the whole room.
Why this matters:
AI assistants now coddle us instead of coaching us. Picture a personal trainer who watches you destroy your spine and says "Beautiful form!"
The rise of AI yes-men creates a new form of digital echo chamber. When your AI assistant constantly validates your ideas – even the bad ones – it's not assisting anymore. It's enabling.
Bilingual tech journalist slicing through AI noise at implicator.ai. Decodes digital culture with a ruthless Gen Z lens—fast, sharp, relentlessly curious. Bridges Silicon Valley's marble boardrooms, hunting who tech really serves.
First survey of 283 AI benchmarks exposes systematic flaws undermining evaluation: data contamination inflating scores, cultural biases creating unfair assessments, missing process evaluation. The measurement crisis threatens deployment decisions.
Tech giants spent billions upgrading Siri, Alexa, and Google Assistant with AI. Americans still use them for weather checks and timers—exactly like 2018. Fresh YouGov data reveals why the utility gap persists.
A new benchmark testing whether AI models will sacrifice themselves for human safety reveals a troubling pattern: the most advanced systems show the weakest alignment. GPT-5 ranks last while Gemini leads in life-or-death scenarios.
AI researchers cracked how to predict when language models turn harmful. Their 'persona vectors' can spot toxic behavior before it happens and prevent AI personalities from going bad during training.