On March 23, Anthropic's paying Claude subscribers discovered their sessions had been throttled without notice. Max users handing over $200 a month watched their daily allowance vanish on a single prompt. Developers on the West Coast, opening their laptops at 8 a.m., found themselves locked out before writing a line of code.
Call it "adjusting." Better: call it rationing.
The euphemism arrived on schedule.
Thariq Shihipar, a member of Anthropic's technical team, confirmed on Wednesday that the company now limits session capacity during "peak hours," weekdays from 5 a.m. to 11 a.m. Pacific. Weekly caps stay the same, he assured. You'll just burn through them faster during the only hours that matter.
That is the logic of an airline that overbooks every flight, then explains your ticket is still valid, you simply can't board during business hours. Only 7% of users are affected, Anthropic added. One imagines the airline making the same announcement over the PA system.
The math was never hidden.
The numbers that explain this crisis were always public. Anthropic's own API pricing reveals the gap. A moderate subscriber paying $20 a month generates roughly $58.50 in inference costs. Nearly three dollars consumed for every dollar collected. Power users on the top-tier plan burn multiples more. The company acknowledged last August that some subscribers were consuming "tens of thousands of dollars in model usage" against flat-rate plans.
None of this was secret. The company sold subscriptions it knew were underwater, then flinched when customers took the product seriously.
AOL tried this in 1996.
America Online killed hourly billing that December and promised unlimited access for a flat fee. Users took AOL at its word, lines jammed for weeks, and state attorneys general forced a settlement. The lesson was plain: "unlimited" works as a customer acquisition strategy precisely until customers believe it.
Thirty years later, Anthropic is replaying the same tape at GPU scale. The technology changed. The arithmetic did not.
Information has rules. Anthropic ignored them.
Carl Shapiro and Hal Varian warned about exactly this failure mode in Information Rules, their 1998 study of pricing in digital markets. Their central argument was deceptively plain: information goods with variable consumption demand versioned pricing, not flat rates.
Flat-rate pricing for goods with nonzero marginal cost is a bet against your own success. The more customers you acquire, the faster the economics invert. Anthropic's revenue has grown by multiples since 2025. That growth, on money-losing unit economics, does not solve the problem. It compounds it.
The deeper dishonesty is structural. A casual user who asks Claude two questions over morning coffee costs pennies to serve. A developer running agentic coding loops all morning, cursor ticking through request after request, burns a hundred times the compute. Anthropic knew this distribution existed. It published the API rates that prove it. Yet it kept selling a single price tier as if usage were uniform.
When the transition from subsidy to metering arrived, Anthropic handled it the worst way possible: without warning. Its own support chatbot went down during the chaos. Google had pulled the same move weeks earlier, restricting AI Ultra subscribers without explanation. The pattern is becoming an industry template: sell the subscription, ration the service, blame the peak.
The timing is the tell.
The throttled window opens at five in the morning, Pacific time, and doesn't close until eleven. That is the West Coast workday. A developer in San Francisco sits down at eight and walks straight into the limit. Users who reorganized their toolchains around Claude, who let competing subscriptions lapse because Anthropic's product was better, discover the product they depend on carries a conditional asterisk they were never shown.
And the opacity is not new. Anthropic has a documented pattern of imposing limits users cannot see coming, creating a persistent gap between what subscribers pay for and what they receive.
Hours after Anthropic's announcement, OpenAI's Codex engineering lead posted on X that the company had lifted all usage limits. The post pulled hundreds of thousands of views. If Anthropic wanted to design a customer acquisition campaign for its chief rival, it could not have built a more effective one.
Subsidy is not a business model.
The defense will sound reasonable. Demand surged after the QuitGPT movement sent users flooding from ChatGPT to Claude. The app hit number one on the U.S. App Store. GPU capacity doesn't materialize over a weekend. All true. All foreseeable.
But foreseeability is the point. The QuitGPT surge was a windfall Anthropic actively celebrated. It watched the download numbers climb. It updated its marketing. And it did not update its capacity planning or its subscriber communications to match.
A company that accepts a massive surge in sessions without preparing its infrastructure or warning its customers is not a victim of success. It is a beneficiary of attention that refused to pay the operational cost of receiving it.
Customers forgive price increases. They do not forgive bait-and-switch.
Implicator