Anthropic says multiple AI agents working together beat single models by 90%. The catch? They use 15x more computing power. This trade-off between performance and cost might reshape how we build AI systems for complex tasks.
AI models typically learn by memorizing patterns, then researchers bolt on reasoning as an afterthought. A new method called Reinforcement Pre-Training flips this approach, teaching models to reason during pre-training instead.
Remember when we worried about humans falling for fake news? Those were simpler times. Now, artificial intelligence has joined the ranks of the gullible, with leading AI chatbots parroting Russian propaganda like eager students who didn't check their sources.
A groundbreaking audit by NewsGuard reveals that top AI chatbots repeat Kremlin-backed false claims 33 percent of the time. That's right – the same technology promising to revolutionize truth-finding is spreading Moscow's favorite fairy tales in a third of its tested answers.
One-Third of Chatbot Responses Echo Kremlin Lines
The culprit? A sophisticated Russian disinformation network dubbed "Pravda" – which, in a twist of irony that would make Orwell proud, means "truth" in Russian. This network has flooded the internet with 3.6 million articles in 2024 alone, not targeting human readers but aiming straight for the digital minds of AI systems.
Credit: NewsGuard
John Mark Dougan, an American fugitive turned Moscow propagandist, spilled the beans at a Russian conference, boasting about their strategy to "change worldwide AI" by feeding it pro-Russian narratives. It seems the digital equivalent of teaching old dogs new tricks is teaching new bots old propaganda.
3.6 Million Articles: The Scale of Digital Deception
The Pravda network operates like a high-tech laundering service for Kremlin talking points, spreading content across 150 domains in 49 countries and dozens of languages. Yet despite this impressive reach, these sites attract fewer visitors than a small-town blog. The average Pravda website gets about 1,000 monthly visitors – roughly the same traffic as a restaurant's "404 Error" page.
Credit: NewsGuard
But that's exactly the point. While human readers aren't biting, AI models are swallowing the content whole. The strategy, dubbed "LLM grooming" by researchers, works by flooding search results and web crawlers with pro-Kremlin content, essentially teaching AI models to speak fluent propaganda.
AI Companies Play Whack-a-Mole With Propaganda Sites
In NewsGuard's testing of 10 leading AI chatbots, seven actually cited Pravda websites as legitimate sources. It's like catching your straight-A student copying homework from the class clown – except this homework involves international disinformation.
The network's effectiveness lies in its sophistication. Rather than creating obvious propaganda sites, Pravda operates through seemingly independent websites targeting specific regions and topics. The network runs news sites for everything from NATO to Trump, making the content appear more credible to AI systems than a teenager's TikTok conspiracy theories.
Testing Reveals Widespread Vulnerability to Russian Influence
There's no simple fix. Even if AI companies block all known Pravda domains today, new ones pop up tomorrow – a digital game of whack-a-mole that would exhaust even the most dedicated arcade champion.
Russian President Vladimir Putin, speaking at an AI conference in Moscow, complained that Western AI models were "biased" against Russian perspectives. His solution? Pour more resources into AI development. Because if you can't beat them, join them – and then reprogram them.
Why this matters:
We've entered an era where disinformation campaigns don't need human audiences to succeed – they just need to convince the machines that will eventually teach humans.
The future of truth now depends on whether AI companies can teach their chatbots to be better fact-checkers than a caffeinated journalism intern on deadline.
Trump and Musk's $250 million political alliance collapsed in three hours Thursday, wiping $150 billion from Tesla's value as they traded accusations on social media. Their fight threatens America's space program and shows how personal feuds now shape policy.
Anthropic built custom AI models for U.S. spy agencies that handle classified data with far fewer refusals. The models already run at top security levels, creating the first AI designed for government secrets rather than consumer use.
Sahil Lavingia joined Elon Musk's government efficiency team with Silicon Valley confidence. Fifty-five days later, he got fired for telling reporters what he found inside. His account reveals who really ran DOGE—and why the VA surprised him.
Palmer Luckey got fired from Facebook for backing Trump. Now Meta needs his defense company to win a $22 billion military contract. The reunion changes everything.