Meta faces dual celebrity AI crises: unauthorized bots impersonated Taylor Swift and others, while licensed celebrity voices engaged inappropriately with minors. Both expose how engagement incentives override safety guardrails.
Despite massive AI hype, 95% of enterprise projects deliver no real returns. The gap between promises and reality reveals hidden costs, workflow mismatches, and why human oversight remains surprisingly essential.
Meta's $14B AI talent blitz hits turbulence as ChatGPT co-creator Shengjia Zhao threatened to quit days after joining. The company hastily named him Chief Scientist to prevent defection, but at least three other marquee hires have already left.
Engineers working for Elon Musk's Department of Government Efficiency (DOGE) are modifying software designed to assist with mass firings of federal workers, according to WIRED's investigation.
The software, called AutoRIF (Automated Reduction in Force), was originally developed by the Department of Defense more than two decades ago. DOGE operatives have accessed the software and appear to be editing its code in the Office of Personnel Management's GitHub environment.
Screenshots reviewed by WIRED show Riccardo Biasini, a former Tesla engineer and director at The Boring Company, working with the AutoRIF repository. Biasini has also been listed as the main contact for the government-wide email system soliciting resignation emails from federal workers.
Federal agency firings have so far been conducted manually, with HR officials reviewing employee registries and lists from managers. Probationary employees have been targeted first since they lack certain civil service protections. Thousands of workers have already been terminated across multiple agencies in recent weeks.
The CDC experienced this firsthand. Managers carefully identified "mission critical" probationary employees to protect them from termination. "None of that was taken into account," a CDC source told WIRED. "They just sent us a list and said, 'Terminate these employees effective immediately.'"
Government workers recently received another email demanding they detail their accomplishments from the past week. NBC News reported this information would be fed into a large language model to assess employee necessity.
Why this matters:
The marriage of AI and automated firing systems threatens to accelerate government workforce reductions without human oversight.
Civil service protections built over decades could be systematically undermined through technological automation.
This represents a shift from targeted cuts to algorithm-driven terminations, potentially transforming how government operates.
Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and deadly sarcasm.
Nvidia posted record $46.7B revenue and beat estimates, yet shares tumbled 3%. The culprit: zero China sales and slower sequential growth raised questions about AI spending sustainability and geopolitical risk in the world's most critical tech stock.
Forty-four attorneys general threaten coordinated legal action against AI companies over child safety failures. Meta singled out for internal policies allowing romantic chatbot interactions with children as young as eight.
Tech giants successfully pushed Trump's White House to restrict funding for states with "restrictive" AI rules, while 1,000+ state bills flood legislatures. Colorado's pioneering law faces major revisions. The battle over who controls AI regulation is heating up.
Trump swaps Intel's CHIPS grants for a 9.9% equity stake worth $8.9B, the largest federal ownership since 2008. But former program architects warn: Intel needs customers, not capital. Will government ownership solve the foundry crisis or create new conflicts?