Microsoft is transforming Windows to work with AI in a fundamentally new way. The change could reshape how we use computers – but it also creates new security risks that Microsoft must solve. Here's what's at stake for the billion-plus Windows users worldwide.
Microsoft is changing how Windows works with AI. The company announced native support for Model Context Protocol (MCP) in Windows, letting AI apps connect and work together like USB devices.
MCP acts as a universal port for AI applications. Just as USB-C lets you plug any compatible device into your computer, MCP lets AI apps talk to Windows features, other apps, and web services.
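Under the hood, MCP messages follow the JSON-RPC 2.0 format, so an AI app invoking a Windows-exposed capability exchanges small JSON payloads. Here is a minimal sketch of such a request; the `search_files` tool name and its argument shape are hypothetical stand-ins, not Microsoft's actual tool names:

```python
import json

def make_mcp_request(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style tools/call request. MCP is built on JSON-RPC 2.0;
    the exact tool name and arguments here are illustrative only."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# An AI app asking a (hypothetical) Windows file-search tool for photos.
request = make_mcp_request(1, "search_files", {"query": "vacation photos"})
parsed = json.loads(request)
```

Because every app speaks this same wire format, any MCP client can talk to any MCP server without bespoke integration code, which is what makes the USB-C analogy apt.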
The change comes as Microsoft pushes to make Windows central to AI computing again. Alongside MCP support, Microsoft launched Windows AI Foundry, a system for running AI models directly on PCs.
Agents as the New Interface
"We want Windows to evolve to a place where agents are part of how customers interact with their apps and devices," says Windows chief Pavan Davuluri. These AI agents will help users complete tasks by connecting to different parts of Windows through MCP.
In a demo, Microsoft showed how the AI search app Perplexity uses MCP to search files. Instead of manually selecting folders, users can ask in natural language, such as "find my vacation photos from last summer," and the app connects to Windows through MCP to run the search automatically.
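On the server side, a tool like the one in the demo boils down to a function the MCP server exposes. The sketch below is a deliberately crude stand-in: it matches filename keywords rather than doing the semantic search a real assistant would, and the `search_files` name is hypothetical:

```python
from pathlib import Path

def search_files(root: str, query: str) -> list[str]:
    """Hypothetical MCP tool handler: return files whose names contain
    every keyword in the query. A real implementation would use semantic
    search over file contents and metadata, not simple name matching."""
    keywords = query.lower().split()
    return sorted(
        str(p)
        for p in Path(root).rglob("*")
        if p.is_file() and all(k in p.name.lower() for k in keywords)
    )
```

The point is the division of labor: the AI model only decides *which* tool to call and with what arguments; the server owns the actual file access, which is exactly the boundary Microsoft's security controls police.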
But adding AI connectors to Windows brings security risks. Hackers could steal authentication tokens, compromise servers, or inject malicious commands. Microsoft knows this and is rolling out MCP carefully.
Building Safety into the System
"We're considering large language models as untrusted, as they can be trained on untrusted data," says David Weston, Microsoft's vice president of security. The company built several safety features into Windows' MCP support:
A security checkpoint system that asks user permission before AI apps access Windows features
A verified registry of approved MCP connections
Isolation of AI components to prevent system-wide attacks
Required code signing for all MCP servers
Runtime monitoring of AI app activities
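The checkpoint idea in the first item can be sketched as a wrapper that gates every tool invocation behind an explicit user decision, treating the model's request as untrusted until a human approves it. All names here are illustrative, not Microsoft's API:

```python
def guarded_call(tool_name: str, handler, arguments: dict, ask_user) -> dict:
    """Conceptual consent checkpoint: the runtime asks the user before any
    tool runs, rather than trusting the model's request. `ask_user` stands
    in for the real permission prompt and returns True or False."""
    if not ask_user(f"Allow this AI app to run '{tool_name}'?"):
        return {"error": "denied by user"}
    return {"result": handler(**arguments)}

# Simulated users: one approves every prompt, one denies every prompt.
approve = lambda prompt: True
deny = lambda prompt: False

ok = guarded_call("search_files", lambda query: [f"match for {query}"],
                  {"query": "vacation"}, approve)
blocked = guarded_call("search_files", lambda query: [],
                       {"query": "vacation"}, deny)
```

The design challenge Microsoft faces is precisely where `ask_user` sits on the spectrum between prompting on every call (Vista-style fatigue) and prompting once per app (weaker containment).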
Microsoft wants to avoid past mistakes with security prompts. Windows Vista's constant permission popups frustrated users and became a joke in Apple ads. The company needs to balance security and convenience with MCP.
The Windows AI Foundry helps developers use AI models on regular PCs. It works with models from various sources and lets apps tap into AI features on newer Windows computers. Microsoft partnered with AMD, Intel, Nvidia, and Qualcomm to optimize AI processing across different chips.
This marks a shift in how AI runs on computers. Instead of sending everything to the cloud, more AI tasks will happen directly on PCs. Microsoft's Stevie Bathiche says this change is coming faster than expected.
The Shift to Local Processing
The system uses all available processors – CPU, GPU, and neural processing unit (NPU). "Managing workloads efficiently across all devices is super important," Bathiche explains. This local processing helps AI work faster and keeps sensitive data on your computer.
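The scheduling problem Bathiche describes can be illustrated with a toy device picker; real schedulers weigh power draw, memory pressure, and model format, so this preference-list sketch (with made-up workload categories) only shows the shape of the decision:

```python
def pick_device(available: set[str], workload: str) -> str:
    """Toy illustration of workload placement across CPU, GPU, and NPU.
    The preference orders below are illustrative assumptions: NPUs favor
    sustained low-power inference, GPUs favor heavy parallel bursts."""
    preferences = {
        "inference": ["npu", "gpu", "cpu"],
        "batch": ["gpu", "npu", "cpu"],
    }
    for device in preferences.get(workload, ["cpu"]):
        if device in available:
            return device
    return "cpu"  # CPU is always the fallback

# A laptop without an NPU falls back to the GPU for inference.
device = pick_device({"cpu", "gpu"}, "inference")
```
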
Microsoft sees AI agents as the next big change in computing. After decades of keyboard and mouse input, AI agents offer a new way to interact with computers. The company wants Windows to be ready for this shift.
Developers will get early access to test these features. Microsoft plans a careful rollout, starting with select developers to ensure security and stability before wider release.
Why this matters:
Windows is shifting from a system you control with clicks to one that understands what you want to accomplish
By running AI locally on PCs instead of in the cloud, your computer becomes more capable while sensitive data stays on the device