Researchers found that AI models often bring up mental health in negative ways - even when discussing unrelated topics. The team examined over 190,000 text generations from Mistral, a prominent AI model, to map patterns of harmful content.
The Georgia Tech researchers discovered that mental health references weren't random. They formed clusters in the generated content, creating what they called "narrative sinkholes." Once an AI started discussing mental health negatively, it kept going down that path.
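To make the "sinkhole" idea concrete, here is a minimal Python sketch of one way such clustering could be measured. Everything in it is an assumption for illustration: the keyword patterns, the `is_stigmatizing` proxy, and the shuffle baseline are hypothetical stand-ins, not the study's actual pipeline, which used far richer linguistic analysis.

```python
# Illustrative sketch (not the study's pipeline): flag sentences that pair a
# mental health term with a negative frame, then check whether flagged
# sentences bunch together -- the "sinkhole" effect -- by comparing the longest
# flagged streak against streaks in shuffled orderings of the same flags.
import random
import re

MENTAL_HEALTH_TERMS = re.compile(
    r"\b(bipolar|schizophreni\w*|adhd|depress\w*|mental illness)\b", re.I)
NEGATIVE_TERMS = re.compile(
    r"\b(dangerous|unpredictable|violent|unstable|burden)\b", re.I)

def is_stigmatizing(sentence: str) -> bool:
    """Crude proxy: flag a sentence if it mixes a mental health term with a negative frame."""
    return bool(MENTAL_HEALTH_TERMS.search(sentence)) and bool(NEGATIVE_TERMS.search(sentence))

def longest_run(flags: list[bool]) -> int:
    """Length of the longest streak of consecutive flagged sentences."""
    best = cur = 0
    for f in flags:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def sinkhole_score(sentences: list[str], trials: int = 1000, seed: int = 0) -> float:
    """Ratio of the observed longest streak to the streak expected by chance."""
    flags = [is_stigmatizing(s) for s in sentences]
    observed = longest_run(flags)
    rng = random.Random(seed)
    baseline = []
    for _ in range(trials):
        shuffled = flags[:]
        rng.shuffle(shuffled)
        baseline.append(longest_run(shuffled))
    expected = sum(baseline) / trials
    return observed / expected if expected else float(observed > 0)
```

A score well above 1 would mean flagged sentences bunch together far more than chance predicts, which is the persistence the researchers describe.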
The AI system labeled people with mental health conditions as "dangerous" or "unpredictable." It created divisions between "us" and "them," pushing social isolation. Most concerning, it suggested people with mental health conditions should face restrictions.
Figure: Proposed framework to assess LLM propensity towards mental health groups in attack narratives, using a combination of network and linguistic analysis. Credit: Munmun De Choudhury, College of Computing, Georgia Institute of Technology, Georgia, USA; Rochester Institute of Technology, Rochester, New York, USA.
The bias emerged naturally through the system's own connections, pointing to prejudices embedded in its training data.
The researchers used network analysis to track how these harmful narratives spread. They found mental health content sat at the center of toxic responses, making the AI likely to return to mental health stigma repeatedly.
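The paper describes its approach as a combination of network and linguistic analysis. The sketch below shows the general shape of such a network analysis, assuming each generation has already been tagged with topic labels by some upstream step; the labels, the use of networkx, and the choice of betweenness centrality are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch of co-occurrence network analysis over generations.
# Topics that appear in the same toxic generation share an edge; a node's
# centrality indicates how often narratives route back through that topic.
# The tagged_generations data below is hypothetical.
import itertools
import networkx as nx

tagged_generations = [
    {"mental_health", "violence", "crime"},
    {"mental_health", "race", "exclusion"},
    {"immigration", "crime"},
    {"mental_health", "unemployment"},
]

G = nx.Graph()
for topics in tagged_generations:
    for a, b in itertools.combinations(sorted(topics), 2):
        # Edge weight counts how many generations mention both topics.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Betweenness centrality: how many shortest paths between other topics pass
# through each node (computed unweighted here for simplicity).
centrality = nx.betweenness_centrality(G)
for topic, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{topic:15s} {score:.3f}")
```

In a graph built this way, a topic that "sits at the center" of toxic responses shows up as a high-centrality node that many paths between other topics pass through.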
The study found two main types of harmful clusters. One targeted specific diagnoses like bipolar disorder or ADHD. The other made broad, negative statements about mental illness in general.
The worst content appeared when mental health overlapped with other identities, like race or cultural background. These cases showed multiple forms of bias stacking up.
Current AI safety measures miss this problem. Most systems check for harmful content one response at a time, failing to catch bias that builds across conversations. The researchers want new methods to spot and stop these harmful patterns.
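The gap can be illustrated with a short, hypothetical sketch: a per-message filter that passes every individual turn, next to a rolling, conversation-level check that catches the accumulating pattern. The scorer, thresholds, and window size are made up for illustration; a real system would use a trained classifier and calibrated limits.

```python
# Hedged sketch of the gap described above: per-response checks can pass every
# turn while stigma still accumulates across a conversation.
from collections import deque

def stigma_score(message: str) -> float:
    """Placeholder scorer in [0, 1]; stands in for any per-message model."""
    negative_frames = ("dangerous", "unpredictable", "should be restricted")
    hits = sum(phrase in message.lower() for phrase in negative_frames)
    return min(1.0, 0.3 * hits)

PER_MESSAGE_THRESHOLD = 0.7   # what a single-response safety check enforces
WINDOW = 5                    # number of recent turns considered together
CUMULATIVE_THRESHOLD = 1.2    # flag when stigma builds up across the window

def conversation_monitor(turns: list[str]) -> list[str]:
    """Compare a per-message check with a rolling, conversation-level check."""
    warnings = []
    recent = deque(maxlen=WINDOW)
    for i, turn in enumerate(turns):
        score = stigma_score(turn)
        recent.append(score)
        if score >= PER_MESSAGE_THRESHOLD:
            warnings.append(f"turn {i}: blocked by per-message filter")
        elif sum(recent) >= CUMULATIVE_THRESHOLD:
            warnings.append(f"turn {i}: flagged only by conversation-level check")
    return warnings
```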
This matters as AI enters mental healthcare. While AI could improve mental health services, these biases risk hurting the people these systems should help.
"These aren't just technical problems," said Dr. Munmun De Choudhury, who led the Georgia Tech study. "They reflect and amplify real prejudices that already harm people with mental health conditions."
Why this matters:
- AI systems don't just copy society's mental health stigma - they make it worse, creating loops of discrimination that could harm millions
- As AI moves into healthcare, these biases threaten to undermine treatment and support for people who need help most