OpenAI's CFO floated a federal backstop for AI infrastructure, then reversed within hours after White House rejection. The whiplash exposed the core problem: OpenAI needs $1.4 trillion while generating $20 billion. The math doesn't work.
Microsoft declares it's building "humanist superintelligence" to keep AI safe. Reality check: They're 2 years behind OpenAI, whose models they'll use until 2032. The safety pitch? Product differentiation for enterprise clients who fear runaway AI.
Three Stanford professors just raised $50M to prove that OpenAI and Anthropic are generating text the wrong way. Their diffusion models claim 10x speed by processing tokens in parallel, not sequentially. Microsoft and Nvidia are betting they're right.
Security researchers have uncovered a troubling privacy leak in Microsoft Copilot. Data exposed to the internet—even briefly—can persist in AI chatbots long after being made private.
Israeli cybersecurity firm Lasso discovered this vulnerability when its own private GitHub repository appeared in Copilot results. The repository had been accidentally public for a short time before being locked down.
"Anyone in the world could ask Copilot the right question and get this data," warned Lasso co-founder Ophir Dror.
The problem extends far beyond Lasso. Their investigation found more than 20,000 GitHub repositories that have since been made private still accessible through Copilot, affecting more than 16,000 organizations, including Google, IBM, PayPal, Tencent, and Microsoft itself.
The exposed repositories contain damaging materials: confidential archives, intellectual property, and even access keys and tokens. In one case, Lasso retrieved contents from a deleted Microsoft repo that hosted a tool for creating "offensive and harmful" AI images.
Microsoft classified the issue as "low severity" when notified in November 2024, calling the caching behavior "acceptable." While the company stopped showing Bing cache links in search results by December, Lasso says the underlying problem persists—Copilot still accesses this hidden data.
Why this matters:
The digital world has no true delete button. What you expose today might be repeated by AI tomorrow.
Your "deleted" data isn't gone—it's cached in AI systems with perfect memory and questionable discretion.
The traditional web security model is breaking down when AI can recall and share content that's no longer publicly available.
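Nothing retrieves copies already sitting in a crawler's cache, but catching accidental exposure early shortens the window. Below is a minimal sketch, using the standard GitHub REST API, that lists an organization's currently public repositories so anything that should be private stands out; the organization name and token are placeholders you would supply, not part of the reported research.

```python
# Minimal sketch: list repositories in a GitHub organization that are
# currently public, so accidental exposure can be reviewed early.
# ORG and GITHUB_TOKEN are placeholders, not values from the article.
import os
import requests

ORG = "your-org"                      # hypothetical organization name
TOKEN = os.environ["GITHUB_TOKEN"]    # token with read access to the org


def public_repos(org: str) -> list[str]:
    """Return full names of the org's public repositories, following pagination."""
    names, page = [], 1
    while True:
        resp = requests.get(
            f"https://api.github.com/orgs/{org}/repos",
            params={"type": "public", "per_page": 100, "page": page},
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/vnd.github+json",
            },
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        names.extend(repo["full_name"] for repo in batch)
        page += 1
    return names


if __name__ == "__main__":
    for name in public_repos(ORG):
        print(name)
```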
Apple will pay Google $1B yearly to power Siri with a 1.2 trillion parameter AI model—8x more complex than Apple's current tech. The company that owns every layer now rents the most critical one. The spring 2026 target masks a deeper dependency trap.
Sam Altman predicts AI CEOs within years while betting billions on human-centric infrastructure. His Tyler Cowen interview reveals three tensions: monetizing without breaking trust, energy bottlenecks limiting AI, and models that persuade without intent.
Palantir beat earnings but fell 8% at 250x forward P/E, triggering global risk reset. Banking chiefs gave cover for year-end de-risking while AI capex outpaces revenue visibility. When leaders wobble, concentration risk becomes system risk.