AI Pioneer Mira Murati Launches Human-Focused Startup Thinking Machines Lab
She helped build ChatGPT. Now Mira Murati wants to change how humans work with AI. Today she unveiled her new company, backed by some of artificial intelligence's brightest minds.
Former OpenAI Chief Technology Officer Mira Murati has unveiled Thinking Machines Lab, a public benefit corporation dedicated to developing AI systems that prioritize human-AI collaboration. The announcement confirms months of Silicon Valley speculation following her departure from OpenAI last September.
Unlike other AI startups founded by OpenAI alumni, which primarily focus on developing superintelligent systems, Thinking Machines Lab aims to address what Murati sees as a critical gap in the industry: the disconnect between rapidly advancing AI capabilities and people's ability to understand and use the technology.
The company has assembled an impressive team of roughly 30 employees, including several high-profile former OpenAI executives. Barret Zoph, OpenAI's former VP of research, serves as CTO, while John Schulman, a co-creator of ChatGPT who had a brief stint at Anthropic, has joined as chief scientist. The team also includes talent from Google, Mistral AI, and other leading AI companies.
Already operating out of its San Francisco office, Thinking Machines Lab is developing AI models that optimize human-AI collaboration rather than autonomous systems like ChatGPT or Claude. The company plans to work in the open, releasing technical notes, research papers, and code publicly.
Murati borrowed the name from Danny Hillis's 1980s AI company. That original Thinking Machines built massively parallel supercomputers, groundwork for the kind of hardware today's AI runs on. Hillis's company went bankrupt in 1994, but Murati wants to finish what he started: true human-machine partnerships.
Murati's timing looks smart. Recent industry moves back up her theory that newcomers can still win. Take DeepSeek—they're building advanced AI models for way less money than the big players spend. Innovation beats deep pockets sometimes.
Here's the problem Murati wants to solve: current AI crushes coding and math problems but adapts poorly to what humans actually need, and it isn't customizable enough. Thinking Machines Lab wants to build multimodal systems that genuinely work with people, whatever their field.
Why this matters:
This launch challenges how everyone else builds AI: fewer autonomous systems, more human-AI teamwork
Open access and transparency might actually make AI less mysterious to regular people
The talent joining up shows others in the industry want this collaborative approach too