Workslop: AI’s productivity paradox is hitting the office

AI adoption has doubled across companies, but 95% see no measurable returns. The culprit: "workslop," polished AI content that shifts real work onto colleagues, draining roughly $186 per affected employee per month in hidden labor. The productivity promise meets workplace reality.


💡 TL;DR - The 30-Second Version

📊 MIT research shows 95% of organizations see no measurable ROI from AI tools despite usage doubling since 2023

💸 "Workslop"—polished AI content lacking substance—hits 40% of workers monthly, costing $186 per incident in downstream labor

🏢 At companies with 10,000 employees, workslop drains over $9 million annually in hidden productivity costs

👨‍💼 Klarna's CEO exemplifies the problem by making engineers review AI prototypes he creates despite having no coding experience

🤝 Trust erodes as 50% of recipients view workslop senders as less capable and 42% see them as less trustworthy

🔧 A cottage industry emerges as 95% of developers now spend extra time fixing AI-generated code, with many reporting net negative time savings

Polished AI output looks efficient—but shifts the real work onto coworkers.

Generative-AI use is exploding inside companies, but measurable returns aren’t. A widely cited MIT finding says 95% of organizations report no ROI from their deployments even as usage doubles. New survey research described in a Harvard Business Review analysis of workslop points to a culprit: AI-generated content that looks professional yet forces others to redo, interpret, or verify it.

The hidden cost of “workslop”

BetterUp Labs calls the pattern “workslop.” Four in ten employees say they’ve received it in the past month. Dealing with each instance takes almost two hours, creating an “invisible tax” of roughly $186 per affected employee per month. Applied across a 10,000-person firm, that pencils out to more than $9 million a year in lost productivity. The files look great. The value isn’t there.
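The arithmetic is worth checking. A back-of-the-envelope sketch, assuming the $186 monthly tax applies only to the roughly 40% of employees who report receiving workslop:

```python
# Back-of-the-envelope check of the headline figure, using the article's numbers.
headcount = 10_000
prevalence = 0.40    # share of employees receiving workslop in a given month
monthly_tax = 186    # dollars of downstream labor per affected employee per month

annual_cost = headcount * prevalence * monthly_tax * 12
print(f"${annual_cost:,.0f} per year")  # -> $8,928,000, roughly the $9 million the research cites
```

The exact total shifts with the prevalence and salary assumptions; the order of magnitude is the point.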

What does workslop look like? Polished slide decks with vague numbers. Long “summaries” that miss the point. Cleanly formatted code that compiles but doesn’t meet the spec. Recipients spend time deciphering intent, rebuilding context, and negotiating who should fix it. That’s not efficiency. It’s deferral.

The reputational damage compounds the waste. In the BetterUp data, about half of recipients view the sender as less capable afterward; 42% trust them less. Collaboration frays.

Where leaders make it worse

Executives often signal “use AI everywhere” without defining when, how, or to what standard. The result is productivity theater: more output, little outcome. Consider Klarna’s CEO, who, by his own account on a podcast, prototypes features with AI tools despite not being a developer, then hands them to engineers to “review.” According to Gizmodo’s write-up, the team must validate and translate those prototypes back into workable plans. The work wasn’t eliminated. It was redistributed—with added social friction because the boss made it.

This dynamic scales. Blanket mandates model indiscriminate use; they also blur ownership. When everyone is told to push more AI into the workflow, nobody owns the rework created by bad AI output.

Creation is cheap; consumption is costly

Classic cognitive offloading moved work from people to machines: calculators, search, spell-check. Workslop moves work from one human to another under the cover of software polish. AI collapses the cost of generating text, code, and slides. It does not collapse the cost of verifying them.

That asymmetry tilts incentives. Senders are rewarded for volume because volume is visible. Receivers eat the cost because rework is invisible. Over time, that creates a culture where “more” beats “right.”

BetterUp’s research usefully separates “pilots” from “passengers.” Pilots use AI to extend their skills toward a clear goal; passengers use it to avoid work. Both are adopting AI. Only one group is improving outcomes.

The metrics are lying to you

Most dashboards celebrate AI adoption: active users, prompts per day, pages generated, tickets closed. Few track second-order effects: time spent verifying AI outputs, the percentage of artifacts returned for rework, or the number of cross-team escalations triggered by unclear AI-assisted documents. If you measure activity, you will get activity. You may not get results.
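None of those second-order metrics require exotic tooling. A minimal sketch of what such a dashboard could compute, using a hypothetical artifact log; the schema and field names here are illustrative, not drawn from any vendor or the cited studies:

```python
from dataclasses import dataclass

@dataclass
class Deliverable:
    """One artifact, annotated by its recipient (illustrative schema)."""
    ai_assisted: bool
    verify_minutes: int        # time the recipient spent checking/deciphering it
    returned_for_rework: bool
    escalated_cross_team: bool

def second_order_metrics(log: list[Deliverable]) -> dict[str, float]:
    """Rework and verification stats for AI-assisted work, not just adoption counts."""
    ai_items = [d for d in log if d.ai_assisted]
    if not ai_items:
        return {"rework_rate": 0.0, "avg_verify_minutes": 0.0, "escalation_rate": 0.0}
    n = len(ai_items)
    return {
        "rework_rate": sum(d.returned_for_rework for d in ai_items) / n,
        "avg_verify_minutes": sum(d.verify_minutes for d in ai_items) / n,
        "escalation_rate": sum(d.escalated_cross_team for d in ai_items) / n,
    }

# Example: two AI-assisted artifacts, one bounced back for rework.
log = [
    Deliverable(True, 115, True, False),   # near the survey's 1h56m average
    Deliverable(True, 20, False, False),
    Deliverable(False, 5, False, False),   # human-authored, excluded from the stats
]
print(second_order_metrics(log))
```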

We’ve seen this film. Email promised speed and delivered inbox debt. PowerPoint empowered storytelling and enabled “death by slides.” Generative AI will repeat the pattern unless managers measure the mess, not the buzz.

Evidence from the trenches

Market signals echo the surveys. NBC and 404 Media report a rising cottage industry of contractors hired to fix AI-generated code. In a Fastly poll, 95% of developers said they spend extra time correcting AI output; many said the net effect is slower delivery. Research firm METR similarly found that AI tools often make developers slower on complex tasks. The demand for “AI cleanup” work is a tell: internal quality controls are failing.

A workable playbook

The antidote is not “ban AI.” It’s better management.

Set usage standards. Define tasks where AI is allowed, discouraged, or prohibited. For allowed tasks, require a short “owner’s note” atop any AI-assisted deliverable: goal, inputs, model used, unresolved uncertainties. One paragraph is enough. It forces intent.

Shift metrics from volume to veracity. Track rework rates and verification time alongside adoption. Tie incentives to accuracy, completeness, and downstream acceptance—not pages generated.

Make baton passes explicit. When handoffs include AI-generated artifacts, require a checklist: source citations, constraints, known gaps, and an explicit ask of the recipient. No checklist, no handoff.
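The "no checklist, no handoff" rule is simple enough to encode in a submission form or review gate. A sketch with illustrative field names (the same gate could also require the owner's note from the first step):

```python
# Fields every AI-assisted handoff must fill in before it reaches a recipient.
REQUIRED_FIELDS = ("sources", "constraints", "known_gaps", "ask_of_recipient")

def validate_handoff(note: dict) -> list[str]:
    """Return the checklist fields that are missing or empty; [] means the handoff may proceed."""
    return [f for f in REQUIRED_FIELDS if not note.get(f)]

note = {
    "sources": ["Q3 usage report"],
    "constraints": "Must match the existing billing API",
    "known_gaps": "",   # left blank: the sender hasn't flagged uncertainties
    "ask_of_recipient": "Sanity-check the churn numbers, not the formatting",
}

missing = validate_handoff(note)
if missing:
    print(f"Handoff blocked. Complete: {', '.join(missing)}")  # -> known_gaps
```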

Train for “pilot” behavior. Reward employees who use AI to explore options, generate examples, and pressure-test assumptions—then synthesize a human judgment. Discourage unedited pastes.

Protect critical paths. For customer-facing, regulatory, or safety-relevant content, require human-in-the-loop review with named approvers. Clarity about ownership beats enthusiasm about tooling.

Model the standard from the top. Leaders should publish their own AI guardrails and follow them. If your prototype is weekend-vibe code, label it as such and route it as a question, not an instruction. Culture follows cues.

The competitive divide

Workslop is not a law of nature; it’s a management choice. Firms that combine human judgment with AI-enabled throughput will widen the gap over those that substitute one for the other. The former will get faster and clearer. The latter will drown in their own drafts.

Why this matters:

  • Activity ≠ productivity. If you measure AI usage without measuring rework, you will pay for output twice—once to create it, again to fix it.
  • Burden shifts are cultural. AI can amplify strong collaboration norms—or entrench bad ones by normalizing handoffs that hide the real work.

❓ Frequently Asked Questions

Q: How can I tell if I've received workslop versus legitimate AI-assisted work?

A: Look for warning signs like vague conclusions, missing context about your specific project, generic formatting that doesn't match your company style, or content that requires you to guess the sender's intent. BetterUp's research shows recipients typically spend the first 30 minutes of the average one hour and 56 minutes of cleanup just trying to understand what the sender actually wants.

Q: Which industries see the most workslop problems?

A: Professional services and technology sectors are disproportionately affected, according to the BetterUp study. This makes sense since these industries have rapid AI adoption rates and knowledge work that's easier to automate superficially. However, the phenomenon occurs across all industries surveyed, with 40% of workers encountering it monthly regardless of sector.

Q: Are there any AI tools that actually help productivity, or should companies avoid them entirely?

A: The research identifies "pilot" users who see genuine benefits. These workers use AI 75% more often than "passengers" but focus on enhancing creativity and achieving specific goals rather than avoiding work. Pilots use AI to generate examples, pressure-test ideas, and explore options before making human judgments—not to replace thinking entirely.

Q: How do I address workslop from a colleague without damaging our relationship?

A: The study found 34% of recipients notify teammates or managers about workslop incidents, but this often escalates tensions. Instead, try asking specific clarifying questions: "What outcome are you hoping for?" or "Which parts need my input versus approval?" This forces the sender to add missing context without directly criticizing their AI use.

Q: What's the difference between "pilots" and "passengers" in AI usage?

A: Pilots combine high agency with high optimism about AI tools. They use AI purposefully to enhance their existing skills toward clear goals. Passengers have low agency and low optimism, using AI primarily to avoid doing work themselves. Pilots use AI 75% more at work and 95% more outside work than passengers, but generate far less workslop.
