Security researchers have uncovered a troubling privacy leak in Microsoft Copilot. Data exposed to the internet—even briefly—can persist in AI chatbots long after being made private.
Israeli cybersecurity firm Lasso discovered the vulnerability when its own private GitHub repository appeared in Copilot results. The repository had been accidentally public for a short time before being locked down.
"Anyone in the world could ask Copilot the right question and get this data," warned Lasso co-founder Ophir Dror.
The problem extends far beyond Lasso. The firm's investigation found more than 20,000 GitHub repositories that have since been made private but remain accessible through Copilot, affecting more than 16,000 organizations, including Google, IBM, PayPal, Tencent, and Microsoft itself.
The exposed repositories contain damaging materials: confidential archives, intellectual property, and even access keys and tokens. In one case, Lasso retrieved contents from a deleted Microsoft repo that hosted a tool for creating "offensive and harmful" AI images.
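Because cached copies can outlive the repository itself, any credential that ever sat in a briefly public repo is best treated as compromised and rotated. As a rough illustration (not Lasso's method), here is a minimal Python sketch that sweeps a local checkout for a few common token formats; the patterns and scan scope are assumptions for demonstration, and dedicated scanners such as gitleaks or truffleHog cover far more, including git history:

```python
import re
from pathlib import Path

# Illustrative patterns for a few common credential formats. Real scanners
# (e.g. gitleaks, truffleHog) cover many more cases, including git history.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_tree(root: str) -> None:
    """Walk a directory tree and flag files that look like they hold secrets."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Print only a prefix so the finding itself doesn't leak the secret.
                print(f"{path}: possible {label}: {match.group(0)[:12]}...")

if __name__ == "__main__":
    scan_tree(".")  # scan the current checkout; rotate anything that turns up
```

Anything the sweep surfaces should be rotated regardless of whether the repository is private now, since the cached copy may still be retrievable.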
Microsoft classified the issue as "low severity" when notified in November 2024, calling the caching behavior "acceptable." While the company stopped showing Bing cache links in search results by December, Lasso says the underlying problem persists—Copilot still accesses this hidden data.
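For teams wondering whether their own formerly public repositories still surface in public search indexes, one crude first check is a site: query against the engine itself. The sketch below is a minimal, assumption-laden example: the organization and repository names are placeholders, the substring match is a heuristic, and a hit only indicates indexing, not what Copilot may still hold in its cache:

```python
import requests
from urllib.parse import quote_plus

def repo_still_indexed(org: str, repo: str) -> bool:
    """Heuristic check: does a site: query for the repo return any results?"""
    query = f"site:github.com/{org}/{repo}"
    url = f"https://www.bing.com/search?q={quote_plus(query)}"
    resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    resp.raise_for_status()
    # Crude signal: the repo path appears somewhere in the result page.
    return f"github.com/{org}/{repo}".lower() in resp.text.lower()

if __name__ == "__main__":
    # Hypothetical org and repo names, for illustration only.
    for repo in ["internal-tools", "infra-scripts"]:
        hit = repo_still_indexed("example-org", repo)
        print(f"{repo}: {'still indexed' if hit else 'no hits found'}")
```

A negative result is not an all-clear, since the issue Lasso describes concerns data Copilot can still access even after cache links stopped appearing in search results.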
Why this matters:
The digital world has no true delete button. What you expose today might be repeated by AI tomorrow.
Your "deleted" data isn't gone—it's cached in AI systems with perfect memory and questionable discretion.
The traditional web security model is breaking down when AI can recall and share content that's no longer publicly available.