OpenAI's nonprofit will control a $500B entity while owning $100B+ in equity—an unprecedented governance experiment. Microsoft formalizes partnership even as both companies hedge through diversification. Regulators hold the keys.
FTC orders seven AI giants to reveal how their companion chatbots affect children after teen suicide cases involving ChatGPT and Character.AI. Meta faces particular scrutiny over internal docs permitting romantic chats with minors.
Large U.S. companies just hit the brakes on AI—adoption fell from 14% to 12% in two months, the first decline since tracking began. MIT research explains why: 95% of enterprise pilots deliver zero ROI. The gap between AI hype and workflow reality is widening.
Remember when we worried about humans falling for fake news? Those were simpler times. Now, artificial intelligence has joined the ranks of the gullible, with leading AI chatbots parroting Russian propaganda like eager students who didn't check their sources.
A groundbreaking audit by NewsGuard reveals that top AI chatbots are repeating Kremlin-backed false claims 33 percent of the time. That's right – the same technology promising to revolutionize truth-finding is spending a third of its time spreading Moscow's favorite fairy tales.
One-Third of Chatbot Responses Echo Kremlin Lines
The culprit? A sophisticated Russian disinformation network dubbed "Pravda" – which, in a twist of irony that would make Orwell proud, means "truth" in Russian. This network has flooded the internet with 3.6 million articles in 2024 alone, not targeting human readers but aiming straight for the digital minds of AI systems.
Credit: NewsGuard
John Mark Dougan, an American fugitive turned Moscow propagandist, spilled the beans at a Russian conference, boasting about their strategy to "change worldwide AI" by feeding it pro-Russian narratives. It seems the digital equivalent of teaching old dogs new tricks is teaching new bots old propaganda.
3.6 Million Articles: The Scale of Digital Deception
The Pravda network operates like a high-tech laundering service for Kremlin talking points, spreading content across 150 domains in 49 countries and dozens of languages. Yet despite this impressive reach, these sites attract fewer visitors than a small-town blog. The average Pravda website gets about 1,000 monthly visitors – roughly the same traffic as a restaurant's "404 Error" page.
Credit: NewsGuard
But that's exactly the point. While human readers aren't biting, AI models are swallowing the content whole. The strategy, dubbed "LLM grooming" by researchers, works by flooding search results and web crawlers with pro-Kremlin content, essentially teaching AI models to speak fluent propaganda.
AI Companies Play Whack-a-Mole With Propaganda Sites
In NewsGuard's testing of 10 leading AI chatbots, seven actually cited Pravda websites as legitimate sources. It's like catching your straight-A student copying homework from the class clown – except this homework involves international disinformation.
The network's effectiveness lies in its sophistication. Rather than creating obvious propaganda sites, Pravda operates through seemingly independent websites targeting specific regions and topics. They have news sites for everything from NATO to Trump, making their content appear more credible to AI systems than a teenager's TikTok conspiracy theories.
Testing Reveals Widespread Vulnerability to Russian Influence
The problem isn't going away with simple solutions. Even if AI companies block all known Pravda domains today, new ones pop up tomorrow – playing a digital game of whack-a-mole that would exhaust even the most dedicated arcade champion.
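The whack-a-mole dynamic is easy to see in a minimal sketch. Assuming a retrieval pipeline that screens sources by domain before a chatbot cites them (the domain names and blocklist below are hypothetical illustrations, not NewsGuard's actual data), any clone registered under a fresh domain sails straight past the filter:

```python
# Hypothetical sketch of domain-blocklist filtering in a retrieval pipeline.
# All domains here are invented for illustration.

from urllib.parse import urlparse

# A static blocklist of known propaganda domains.
BLOCKED_DOMAINS = {"pravda-example.com", "news-pravda-example.net"}

def filter_sources(urls):
    """Drop retrieved sources whose domain appears on the blocklist."""
    kept = []
    for url in urls:
        domain = urlparse(url).netloc.lower()
        if domain not in BLOCKED_DOMAINS:
            kept.append(url)
    return kept

retrieved = [
    "https://pravda-example.com/story-1",            # known domain: blocked
    "https://pravda-clone-example.org/story-1",      # fresh clone: slips through
]
print(filter_sources(retrieved))
# Only the clone survives -- the same article, one domain registration later.
```

The point of the sketch: the filter is only as current as its list, so a network that can spin up domains faster than curators can catalogue them never actually loses.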
Russian President Vladimir Putin, speaking at an AI conference in Moscow, complained that Western AI models were "biased" against Russian perspectives. His solution? Pour more resources into AI development. Because if you can't beat them, join them – and then reprogram them.
Why this matters:
We've entered an era where disinformation campaigns don't need human audiences to succeed – they just need to convince the machines that will eventually teach humans.
The future of truth now depends on whether AI companies can teach their chatbots to be better fact-checkers than a caffeinated journalism intern on deadline.
Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm.
E-Mail: marcus@implicator.ai