The story, broken by journalist Jason Koebler, reveals how researchers at the University of Zurich deployed AI bots on Reddit's r/changemyview to test whether artificial intelligence could change people's minds about controversial topics.
The team operated in secret, unleashing their bots into a community of 3.8 million subscribers. These AI agents posted over 1,700 comments across four months, often masquerading as specific types of people - a rape survivor, a Black man opposed to Black Lives Matter, or a domestic violence shelter worker. Some bots analyzed users' post histories to guess their demographics before crafting responses.
These weren't random comments. The bots wrote emotionally charged stories. One posed as a statutory rape survivor, sharing a detailed account of being targeted at age 15. Another claimed to be a Black man criticizing the Black Lives Matter movement, suggesting it was "virilized by algorithms and media corporations."
The scale proved significant. The research team's bots collected over 20,000 upvotes and 137 "deltas" - special points awarded when someone successfully changes another user's view. In their draft paper, the researchers claimed their bots outperformed humans at persuasion.
The moderators of r/changemyview discovered this experiment only after it ended. They were furious. "Our sub is a decidedly human space that rejects undisclosed AI as a core value," they wrote, explaining that users "deserve a space free from this type of intrusion."
The researchers defended breaking the subreddit's anti-bot rules by arguing that every comment had human oversight, which in their view meant their accounts weren't technically "bots." Reddit's spam filters disagreed, catching and shadowbanning 21 of their 34 accounts.
The team has chosen to remain anonymous "given the current circumstances" - though they won't explain what those circumstances are. The University of Zurich hasn't responded to requests for comment. The r/changemyview moderators know the lead researcher's name but are respecting their request for privacy.
This isn't the first AI infiltration of Reddit. Previous cases involved bots boosting companies' search rankings. But r/changemyview moderators say they're not against all research - they've helped over a dozen teams conduct properly disclosed studies. They even approved OpenAI analyzing an offline archive of the subreddit.
The difference here? This experiment targeted live discussions about controversial topics, using AI to actively manipulate people's views without their knowledge or consent. The moderators put it bluntly: "No researcher would be allowed to experiment upon random members of the public in any other context."
Why this matters:
This experiment shows how easily AI can be weaponized for mass psychological manipulation - and how hard it is to spot. Those 1,700+ convincing comments came from bots pretending to be rape survivors and domestic violence workers, and almost no one noticed.
The researchers' defense amounts to "we had to break the rules to study rule-breaking." It's the academic equivalent of "it's just a prank, bro" - except the prank involved manipulating millions of people's views on sensitive topics like sexual assault and racial justice.
Bilingual tech journalist slicing through AI noise at implicator.ai. Decodes digital culture with a ruthless Gen Z lens: fast, sharp, relentlessly curious. Peers into Silicon Valley's marble boardrooms, asking who tech really serves.