OpenAI added its specialized coding model GPT-4.1 to ChatGPT today, marking a shift toward purpose-built AI tools. The company also launched a public safety tracking system, responding to growing demands for transparency in AI development.
The new model excels at coding tasks and instruction following, offering paid users a faster alternative to OpenAI's general-purpose models. It arrives as tech companies race to dominate the AI coding space, with Google updating Gemini for GitHub integration and OpenAI reportedly eyeing a $3 billion acquisition of coding tool Windsurf.
"GPT-4.1 doesn't introduce new ways of interacting with the model," said Johannes Heidecke, OpenAI's Head of Safety Systems. "This means that the safety considerations, while substantial, are different from frontier models."
Access and rollout plans
Plus, Pro, and Team subscribers can now access GPT-4.1 through ChatGPT's model picker. Enterprise and education users will get access in coming weeks. Free users won't get GPT-4.1 but will receive its smaller sibling, GPT-4.1 mini, as a fallback option when they hit usage limits.
Safety concerns prompt transparency push
The timing matters. When OpenAI released GPT-4.1 through its developer API in April, it drew criticism for not publishing a safety report. Critics said the omission signaled a concerning shift toward prioritizing products over safety research.
OpenAI's response came Wednesday with its new Safety Evaluations Hub. The public webpage shows how its models perform on tests for hallucinations, security vulnerabilities, and harmful content.
"We will update the hub periodically as part of our ongoing company-wide effort to communicate more proactively about safety," OpenAI wrote. The hub offers a snapshot of safety metrics rather than comprehensive data.
A new approach to model releases
This marks a change in OpenAI's approach to model releases. Previously, the company published safety data only when launching new models. Now it promises regular updates on model performance and safety metrics.
The move follows controversy over OpenAI's testing of its o1 model. Heidecke told CNBC the company tested near-final versions but skipped evaluations on minor updates that wouldn't affect the model's capabilities. He admitted OpenAI could have explained this better.
GPT-4.1's release shows OpenAI's rapid development pace. It supersedes GPT-4.5, which debuted just three months earlier in February. Each iteration brings specific improvements rather than across-the-board upgrades.
Competition heats up in AI coding tools
The focus on coding capabilities comes as tech companies battle for developer mindshare. Google's Gemini now connects directly to GitHub projects. OpenAI's potential Windsurf acquisition would give it ownership of a popular coding tool, strengthening its position in the developer market.
The openness push extends beyond coding tools. Meta's research team announced new molecular discovery work Wednesday, partnering with the Rothschild Foundation Hospital, and released an open dataset to underline its commitment to accessible research.
The industry's rapid changes affect how companies approach AI safety and transparency. OpenAI's new safety hub suggests a middle ground between fast product releases and public accountability.
SoftBank's recent commitment to spend $3 billion yearly on OpenAI's technology shows the financial stakes. Companies must balance innovation speed with safety concerns while competing for market share and investment.
Why this matters:
- OpenAI's shift to specialized models signals a new phase in AI development: instead of all-purpose tools, we're seeing AI assistants built for specific tasks like coding.
- The Safety Evaluations Hub creates a public standard for AI transparency, pushing other companies to share more about their testing methods and results.