OpenAI rolls out faster coding assistant, opens up about safety testing
OpenAI just added a specialized coding model to ChatGPT, but that's not the whole story. The company also opened up about its safety testing - a move that could reshape how AI firms handle transparency. Here's what changed and why it matters.
OpenAI added its specialized coding model GPT-4.1 to ChatGPT today, marking a shift toward purpose-built AI tools. The company also launched a public safety tracking system, responding to growing demands for transparency in AI development.
The new model excels at coding tasks and instruction following, offering paid users a faster alternative to OpenAI's general-purpose models. It arrives as tech companies race to dominate the AI coding space, with Google updating Gemini for GitHub integration and OpenAI reportedly eyeing a $3 billion acquisition of coding tool Windsurf.
"GPT-4.1 doesn't introduce new ways of interacting with the model," said Johannes Heidecke, OpenAI's Head of Safety Systems. "This means that the safety considerations, while substantial, are different from frontier models."
Access and rollout plans
Plus, Pro, and Team subscribers can now access GPT-4.1 through ChatGPT's model picker. Enterprise and education users will get access in the coming weeks. Free users won't get GPT-4.1 but will receive its smaller sibling, GPT-4.1 mini, as a fallback when they hit usage limits.
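For developers, the same model has been reachable outside ChatGPT since its API debut in April (more on that below). As a minimal sketch, assuming the official `openai` Python package is installed and an `OPENAI_API_KEY` environment variable is set, using it comes down to passing the model identifier:

```python
# Minimal sketch: calling GPT-4.1 via OpenAI's Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # the coding-focused model discussed in this article
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
)

print(response.choices[0].message.content)
```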
Safety concerns prompt transparency push
The timing matters. OpenAI released GPT-4.1 through its developer API in April and drew criticism for not publishing a safety report. Critics said this showed a concerning shift toward prioritizing products over safety research.
OpenAI's response came Wednesday with its new Safety Evaluations Hub. The public webpage shows how its models perform on tests for hallucinations, security vulnerabilities, and harmful content.
"We will update the hub periodically as part of our ongoing company-wide effort to communicate more proactively about safety," OpenAI wrote. The hub offers a snapshot of safety metrics rather than comprehensive data.
A new approach to model releases
This marks a change in OpenAI's approach to model releases. Previously, the company published safety data only when launching new models. Now it promises regular updates on model performance and safety metrics.
The move follows controversy over OpenAI's testing of its o1 model. Heidecke told CNBC the company tested near-final versions but skipped evaluations on minor updates that wouldn't affect the model's capabilities. He admitted OpenAI could have explained this better.
GPT-4.1's release shows OpenAI's rapid development pace. It supersedes GPT-4.5, which debuted only three months earlier, in February. Each iteration brings targeted improvements rather than across-the-board upgrades.
Competition heats up in AI coding tools
The focus on coding capabilities comes as tech companies battle for developer mindshare. Google's Gemini now connects directly to GitHub projects. OpenAI's potential Windsurf acquisition would give it ownership of a popular coding tool, strengthening its position in the developer market.
Other companies are making similar moves. Meta's research team announced new molecular discovery work on Wednesday, alongside a partnership with the Rothschild Foundation Hospital, and released an open dataset, underlining its commitment to accessible research.
The industry's rapid changes affect how companies approach AI safety and transparency. OpenAI's new safety hub suggests a middle ground between fast product releases and public accountability.
SoftBank's recent commitment to spend $3 billion yearly on OpenAI's technology shows the financial stakes. Companies must balance innovation speed with safety concerns while competing for market share and investment.
Why this matters:
OpenAI's shift to specialized models signals a new phase in AI development: instead of all-purpose tools, we're seeing AI assistants built for specific tasks like coding
The Safety Evaluations Hub creates a public standard for AI transparency, pushing other companies to share more about their testing methods and results