Workslop: AI’s productivity paradox is hitting the office
AI adoption doubles across companies, but 95% see no returns. The culprit: "workslop", polished AI content that shifts real work onto colleagues and drains $186 per employee each month in hidden labor. The productivity promise meets workplace reality.
📊 MIT research shows 95% of organizations see no measurable ROI from AI tools despite usage doubling since 2023
💸 "Workslop", polished AI content lacking substance, hits 40% of workers monthly; each instance eats nearly two hours, an invisible tax of $186 per employee per month in downstream labor
🏢 At companies with 10,000 employees, workslop drains over $9 million annually in hidden productivity costs
👨‍💼 Klarna's CEO exemplifies the problem by making engineers review AI prototypes he creates despite having no coding experience
🤝 Trust erodes as 50% of recipients view workslop senders as less capable and 42% see them as less trustworthy
🔧 A cottage industry emerges as 95% of developers now spend extra time fixing AI-generated code, with many reporting net negative time savings
Polished AI output looks efficient—but shifts the real work onto coworkers.
Generative-AI use is exploding inside companies, but measurable returns aren’t. A widely cited MIT finding says 95% of organizations report no ROI from their deployments even as usage doubles. New survey research described in a Harvard Business Review analysis of workslop points to a culprit: AI-generated content that looks professional yet forces others to redo, interpret, or verify it.
The hidden cost of “workslop”
BetterUp Labs calls the pattern “workslop.” Four in ten employees say they’ve received it in the past month. Dealing with each instance takes almost two hours and creates an “invisible tax” of roughly $186 per employee per month. At a 10,000-person firm, that pencils out to more than $9 million a year in lost productivity. The files look great. The value isn’t there.
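The headline number is easy to sanity-check. Here is a back-of-the-envelope sketch in Python, assuming the $186 monthly tax falls only on the roughly 40% of employees who report receiving workslop (the study does not publish its exact aggregation, so this is an inference):

```python
# Back-of-the-envelope check of the ~$9M/year figure.
# Assumption (not from the study's published methodology): the $186
# monthly "invisible tax" applies only to the ~40% of employees who
# report receiving workslop in a given month.
EMPLOYEES = 10_000
HIT_RATE = 0.40        # share of workers hit by workslop monthly
TAX_PER_MONTH = 186    # dollars of hidden labor per affected employee

annual_cost = EMPLOYEES * HIT_RATE * TAX_PER_MONTH * 12
print(f"${annual_cost:,.0f} per year")  # -> $8,928,000, roughly $9M
```

The result lands within rounding of the reported figure; the point is the mechanism, not the decimals.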
What does workslop look like? Polished slide decks with vague numbers. Long “summaries” that miss the point. Cleanly formatted code that compiles but doesn’t meet the spec. Recipients spend time deciphering intent, rebuilding context, and negotiating who should fix it. That’s not efficiency. It’s deferral.
The reputational damage compounds the waste. In the BetterUp data, about half of recipients view the sender as less capable afterward; 42% trust them less. Collaboration frays.
Where leaders make it worse
Executives often signal “use AI everywhere” without defining when, how, or to what standard. The result is productivity theater: more output, little outcome. Consider Klarna’s CEO, who, by his own account on a podcast, prototypes features with AI tools despite not being a developer, then hands them to engineers to “review.” According to Gizmodo’s write-up, the team must validate and translate those prototypes back into workable plans. The work wasn’t eliminated. It was redistributed—with added social friction because the boss made it.
This dynamic scales. Blanket mandates model indiscriminate use; they also blur ownership. When everyone is told to push more AI into the workflow, nobody owns the rework created by bad AI output.
Creation is cheap; consumption is costly
Classic cognitive offloading moved work from people to machines: calculators, search, spell-check. Workslop moves work from one human to another under the cover of software polish. AI collapses the cost of generating text, code, and slides. It does not collapse the cost of verifying them.
That asymmetry tilts incentives. Senders are rewarded for volume because volume is visible. Receivers eat the cost because rework is invisible. Over time, that creates a culture where “more” beats “right.”
BetterUp’s research usefully separates “pilots” from “passengers.” Pilots use AI to extend their skills toward a clear goal; passengers use it to avoid work. Both are adopting AI. Only one group is improving outcomes.
The metrics are lying to you
Most dashboards celebrate AI adoption: active users, prompts per day, pages generated, tickets closed. Few track second-order effects: time spent verifying AI outputs, the percentage of artifacts returned for rework, or the number of cross-team escalations triggered by unclear AI-assisted documents. If you measure activity, you will get activity. You may not get results.
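What measuring the mess could look like in practice: a minimal sketch, with hypothetical field names, of the second-order metrics a team could log next to the usual adoption counters.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    """One AI-assisted deliverable, as a team might log it.
    Field names are hypothetical, for illustration only."""
    ai_assisted: bool
    returned_for_rework: bool
    minutes_to_verify: int  # recipient time spent checking or deciphering

def second_order_metrics(artifacts: list[Artifact]) -> dict[str, float]:
    """Compute the rework and verification costs most dashboards skip."""
    ai = [a for a in artifacts if a.ai_assisted]
    if not ai:
        return {"rework_rate": 0.0, "avg_verify_minutes": 0.0}
    return {
        # share of AI-assisted artifacts bounced back to the sender
        "rework_rate": sum(a.returned_for_rework for a in ai) / len(ai),
        # average recipient time sunk into verification, per artifact
        "avg_verify_minutes": sum(a.minutes_to_verify for a in ai) / len(ai),
    }

print(second_order_metrics([
    Artifact(True, True, 70),   # a slide deck sent back for rework
    Artifact(True, False, 25),  # a summary that checked out
]))
```

None of this is exotic telemetry; it is two extra fields on artifacts you already track. If the rework rate climbs with adoption, the dashboard is finally telling the truth.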
We’ve seen this film. Email promised speed and delivered inbox debt. PowerPoint empowered storytelling and enabled “death by slides.” Generative AI will repeat the pattern unless managers measure the mess, not the buzz.
Evidence from the trenches
Market signals echo the surveys. NBC and 404 Media report a rising cottage industry of contractors hired to fix AI-generated code. In a Fastly poll, 95% of developers said they spend extra time correcting AI output; many said the net effect is slower delivery. Research firm METR similarly found that AI tools often make developers slower on complex tasks. The demand for “AI cleanup” work is a tell: internal quality controls are failing.
A workable playbook
The antidote is not “ban AI.” It’s better management.
Set usage standards. Define tasks where AI is allowed, discouraged, or prohibited. For allowed tasks, require a short "owner's note" atop any AI-assisted deliverable: goal, inputs, model used, unresolved uncertainties. Something like: "Goal: first-draft FAQ for the March pricing change. Inputs: Q1 support tickets. Model: the company ChatGPT workspace. Unresolved: the refund-window wording is unverified." One paragraph is enough. It forces intent.
Shift metrics from volume to veracity. Track rework rates and verification time alongside adoption. Tie incentives to accuracy, completeness, and downstream acceptance—not pages generated.
Make baton passes explicit. When handoffs include AI-generated artifacts, require a checklist: source citations, constraints, known gaps, and an explicit ask of the recipient. No checklist, no handoff; a minimal sketch of this gate follows the playbook.
Train for “pilot” behavior. Reward employees who use AI to explore options, generate examples, and pressure-test assumptions—then synthesize a human judgment. Discourage unedited pastes.
Protect critical paths. For customer-facing, regulatory, or safety-relevant content, require human-in-the-loop review with named approvers. Clarity about ownership beats enthusiasm about tooling.
Model the standard from the top. Leaders should publish their own AI guardrails and follow them. If your prototype is weekend-vibe code, label it as such and route it as a question, not an instruction. Culture follows cues.
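To make the baton-pass gate concrete, here is a small sketch of the "no checklist, no handoff" rule; the field names and example values are illustrative, not from the research:

```python
REQUIRED_FIELDS = ("source_citations", "constraints", "known_gaps", "explicit_ask")

def validate_handoff(checklist: dict[str, str]) -> list[str]:
    """Return the required elements missing from an AI-artifact handoff.
    An empty result means the baton pass may proceed."""
    return [f for f in REQUIRED_FIELDS if not checklist.get(f, "").strip()]

# Example: this handoff is blocked because 'known_gaps' was left blank.
missing = validate_handoff({
    "source_citations": "Q3 revenue workbook, rows 12-40",
    "constraints": "numbers must tie to the audited ledger",
    "known_gaps": "",
    "explicit_ask": "confirm the EMEA split before Friday",
})
print("OK to hand off" if not missing else f"Blocked; missing: {missing}")
```

The design choice is structural: the sender, not the recipient, pays the cost of missing context.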
The competitive divide
Workslop is not a law of nature; it’s a management choice. Firms that combine human judgment with AI-enabled throughput will widen the gap over those that substitute one for the other. The former will get faster and clearer. The latter will drown in their own drafts.
Why this matters:
Activity ≠ productivity. If you measure AI usage without measuring rework, you will pay for output twice—once to create it, again to fix it.
Burden shifts are cultural. AI can amplify strong collaboration norms—or entrench bad ones by normalizing handoffs that hide the real work.
❓ Frequently Asked Questions
Q: How can I tell if I've received workslop versus legitimate AI-assisted work?
A: Look for warning signs like vague conclusions, missing context about your specific project, generic formatting that doesn't match your company style, or content that requires you to guess the sender's intent. In BetterUp's research, recipients spend an average of one hour and 56 minutes dealing with each instance, and roughly the first 30 minutes go to just working out what the sender actually wants.
Q: Which industries see the most workslop problems?
A: Professional services and technology sectors are disproportionately affected, according to the BetterUp study. This makes sense since these industries have rapid AI adoption rates and knowledge work that's easier to automate superficially. However, the phenomenon occurs across all industries surveyed, with 40% of workers encountering it monthly regardless of sector.
Q: Are there any AI tools that actually help productivity, or should companies avoid them entirely?
A: The research identifies "pilot" users who see genuine benefits. These workers use AI 75% more often than "passengers" but focus on enhancing creativity and achieving specific goals rather than avoiding work. Pilots use AI to generate examples, pressure-test ideas, and explore options before making human judgments—not to replace thinking entirely.
Q: How do I address workslop from a colleague without damaging our relationship?
A: The study found 34% of recipients notify teammates or managers about workslop incidents, but this often escalates tensions. Instead, try asking specific clarifying questions: "What outcome are you hoping for?" or "Which parts need my input versus approval?" This forces the sender to add missing context without directly criticizing their AI use.
Q: What's the difference between "pilots" and "passengers" in AI usage?
A: Pilots combine high agency with high optimism about AI tools. They use AI purposefully to enhance their existing skills toward clear goals. Passengers have low agency and low optimism, using AI primarily to avoid doing work themselves. Pilots use AI 75% more at work and 95% more outside work than passengers, but generate far less workslop.