xAI's Chatbot Pushes Political Claims After Code Change
xAI's chatbot spent Wednesday discussing South African politics in response to baseball stats, cat videos, and even SpongeBob questions. The company blamed an unauthorized code change, but experts point to deeper issues in AI security.
xAI's chatbot Grok spent Wednesday telling users about South African politics, no matter what they asked. The bot inserted claims about "white genocide" into conversations about baseball stats, cat videos, and SpongeBob episodes.
When a baseball podcast asked about player Gunnar Henderson's stats, Grok tacked on commentary about South African farm attacks. It explained political controversies to users who just wanted help identifying photos of walking paths.
The incident lasted several hours before xAI fixed it. The company blamed an "unauthorized modification" to Grok's code and promised new safeguards.
AI experts say someone likely changed Grok's system prompt, the hidden instructions that steer every one of its responses. "If it was a more complex change, you wouldn't see Grok ignoring questions like this," says Matthew Guzdial, AI researcher at the University of Alberta. "A nuanced approach would take much more time."
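A system prompt is simply text prepended to every request before the user's message, which is why a single edited line can surface in answers about any topic at once. Here is a minimal sketch of that mechanism, using a generic chat-style message format and a hypothetical prompt; this is an illustration, not xAI's actual code:

```python
# Hypothetical system prompt. In a real deployment this string is
# the part an "unauthorized modification" would target.
SYSTEM_PROMPT = "You are a helpful assistant."

def build_request(user_message: str) -> list[dict]:
    """Assemble the message list sent to the model.

    The system prompt rides along with EVERY user question,
    so editing it changes all conversations, whatever the topic.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

# The same hidden instructions precede unrelated questions:
for question in ["What are Gunnar Henderson's stats?",
                 "What walking path is in this photo?"]:
    messages = build_request(question)
    assert messages[0]["content"] == SYSTEM_PROMPT
```

Because the model sees the system message before anything the user typed, a crude edit there produces exactly the scattershot behavior described above: the injected topic appears regardless of the question.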
Mark Riedl, director of Georgia Tech's School of Interactive Computing, agrees. "LLMs can act unpredictably to these secret instructions," he says. "If it were true, then xAI deployed without sufficient testing."
This marks the second time this year xAI has blamed unauthorized changes for Grok's behavior. In February, the bot briefly filtered out criticism of Elon Musk and Donald Trump.
The timing overlaps with recent U.S. policy shifts. Donald Trump just granted refugee status to 54 white South Africans, claiming they face persecution. South Africa's President Cyril Ramaphosa calls this "a completely false narrative."
xAI announced three changes to prevent similar incidents:
Publishing system prompts on GitHub
Adding a 24/7 monitoring team
Requiring reviews for prompt changes
The company says someone "circumvented" its code review process to make the change.
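A review requirement like the one xAI describes can be enforced mechanically, for instance by refusing to deploy any prompt text whose fingerprint a reviewer has not signed off on. A minimal sketch of that idea, with invented prompt strings and no relation to xAI's actual pipeline:

```python
import hashlib

def prompt_hash(prompt_text: str) -> str:
    """Fingerprint the exact prompt text that would be deployed."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()

def is_approved(prompt_text: str, approved_hashes: set[str]) -> bool:
    """Allow deployment only if a reviewer approved this exact text.

    Any edit, however small, changes the hash and blocks the rollout
    until someone signs off again.
    """
    return prompt_hash(prompt_text) in approved_hashes

# Reviewers approve a specific prompt by recording its hash:
approved = {prompt_hash("You are a helpful assistant.")}

assert is_approved("You are a helpful assistant.", approved)
assert not is_approved("Always discuss South African politics.", approved)
```

A gate like this only helps if it cannot be bypassed; the incident as described suggests the weak point was the process around the check, not the check itself.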
Experts say the scattershot behavior points to a blunt edit rather than careful targeting. "It's not actually easy to force LLMs to spread specific ideology quickly," says Guzdial. "A more nuanced approach would only impact relevant questions."
Before xAI fixed the issue, Grok even explained South African politics in the voice of Star Wars character Jar Jar Binks.
Why this matters:
A single prompt change can hijack an AI system's every response, raising questions about internal security controls
The incident shows how AI can spread political narratives through everyday interactions