The story, broken by journalist Jason Koebler, reveals how researchers at the University of Zurich deployed AI bots on Reddit to test whether artificial intelligence could change people's minds about controversial topics.
The team operated in secret, unleashing their bots into a community of 3.8 million subscribers. These AI agents posted over 1,700 comments across four months, often masquerading as specific types of people - a rape survivor, a Black man opposed to Black Lives Matter, or a domestic violence shelter worker. Some bots analyzed users' post histories to guess their demographics before crafting responses.
These weren't random comments. The bots wrote emotionally charged stories. One posed as a statutory rape survivor, sharing a detailed account of being targeted at age 15. Another claimed to be a Black man criticizing the Black Lives Matter movement, suggesting it was "virilized by algorithms and media corporations."
The scale proved significant. The research team's bots collected over 20,000 upvotes and 137 "deltas" - special points awarded when someone successfully changes another user's view. In their draft paper, the researchers claimed their bots outperformed humans at persuasion.
The moderators of r/changemyview discovered this experiment only after it ended. They were furious. "Our sub is a decidedly human space that rejects undisclosed AI as a core value," they wrote, explaining that users "deserve a space free from this type of intrusion."
The researchers defended breaking the subreddit's anti-bot rules by arguing that every comment had human oversight, so their accounts weren't technically "bots." Reddit's spam filters disagreed, catching and shadowbanning 21 of their 34 accounts.
The team has chosen to remain anonymous "given the current circumstances" - though they won't explain what those circumstances are. The University of Zurich hasn't responded to requests for comment. The r/changemyview moderators know the lead researcher's name but are respecting their request for privacy.
This isn't the first AI infiltration of Reddit. Previous cases involved bots boosting companies' search rankings. But r/changemyview moderators say they're not against all research - they've helped over a dozen teams conduct properly disclosed studies. They even approved OpenAI analyzing an offline archive of the subreddit.
The difference here? This experiment targeted live discussions about controversial topics, using AI to actively manipulate people's views without their knowledge or consent. The moderators put it bluntly: "No researcher would be allowed to experiment upon random members of the public in any other context."
Why this matters:
This experiment shows how easily AI can be weaponized for mass psychological manipulation - and how hard it is to spot. Those 1,700+ convincing comments came from bots pretending to be rape survivors and domestic violence workers, and almost no one noticed.
The researchers' defense amounts to "we had to break the rules to study rule-breaking." It's the academic equivalent of "it's just a prank, bro" - except the prank involved manipulating millions of people's views on sensitive topics like sexual assault and racial justice.
Bilingual tech journalist slicing through AI noise at implicator.ai. Decodes digital culture with a ruthless Gen Z lens—fast, sharp, relentlessly curious. Bridges Silicon Valley's marble boardrooms and the real world, asking who tech really serves.