I cannot stand preachy commentary. It is pretentious, it never ages well, and it belongs to an era of opinion writing the profession should have outgrown. I need to get that out of the way first, because this piece is heading in precisely that direction. You have probably guessed as much. An advance apology, then, especially since the subject is a company for which I have great admiration.

On March 31, 2026, version 2.1.88 of Claude Code arrived on the public npm registry with a 59.8-megabyte debugging artifact attached: a source map containing the complete TypeScript source for Anthropic's highest-revenue product. All 512,000 lines. Every feature flag, system prompt, and internal codename. A post on X linking to the exposed code collected 21 million views before Anthropic issued a statement. Thirteen months earlier, the identical failure hit the identical registry through the identical vector.
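How does one .map file expose half a million lines of source? A source map exists so a debugger can walk minified JavaScript back to its original form, and the v3 format can embed the original files verbatim in a `sourcesContent` array. Ship the map, and you ship the source. A minimal sketch of the shape, with invented names and contents rather than anything from the leak:

```typescript
// Source map v3, sketched as a TypeScript literal. Every name and
// string below is invented for illustration, not taken from the leak.
const exampleMap = {
  version: 3,
  file: "cli.js",                    // the shipped, minified bundle
  sources: ["src/daemon/kairos.ts"], // hypothetical original paths
  sourcesContent: [
    // the full, unminified source of each file, embedded verbatim;
    // this field is what turns a "debugging artifact" into a leak
    "export async function consolidate(): Promise<void> { /* ... */ }",
  ],
  names: [],
  mappings: "AAAA;AACA", // base64 VLQ positions into the bundle
};
```

Bundlers emit this inlining by default in many configurations, which is precisely why a map file can weigh 59.8 megabytes.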

Call it a "release packaging issue caused by human error." Better: call it a pattern.

The admiration is real. The leaked source revealed genuine engineering ambition: an autonomous daemon called KAIROS, a memory consolidation engine, a planning system that offloads thirty-minute reasoning sessions to remote containers. Claude Code is closer to an operating system for software development than a terminal assistant, and the architecture reflects engineers who think in original, ambitious terms. That creativity deserves respect. Innovation without operational discipline, though, is carelessness with better branding. And so, reluctantly, the sermon.

Competence is not a press release.

Anthropic's identity rests on a single proposition: it is the responsible AI company. White papers on existential risk. Voluntary deployment commitments. A public posture of caution so conspicuous that regulators cite the firm as the standard. Enterprise customers pay a premium for that reputation. The company is preparing for an IPO on the strength of it.

Five days before the npm leak, Anthropic's own content management system left roughly 3,000 unpublished files accessible to anyone with a browser, including details of an unreleased model and an invite-only CEO retreat. The explanation: human error in CMS configuration. Two distinct systems failed basic access controls within a single week. Same company. Same two-word excuse both times.

Deviance, once normalized, recurs.

In 1986, NASA launched Challenger knowing the O-ring seals were compromised. Engineers had raised the alarm. Management flew the shuttle anyway. Diane Vaughan spent years studying why, and the answer she found in The Challenger Launch Decision fits Anthropic uncomfortably well. Her term was "normalization of deviance." Something goes wrong. Nothing blows up. So the anomaly gets reclassified as tolerable, then repeated, until the day it stops being tolerable at all.

Apply this to Anthropic and the fit is precise. In February 2025, source maps leaked Claude Code's internals to npm. Engineers patched the issue. No customer data escaped. No model weights surfaced. The incident produced no visible cost, and the anomaly was absorbed. Thirteen months later, the identical configuration error reached the identical public registry. The deviance had been normalized.

Vaughan's analysis yields three lessons that apply without modification. First, nobody votes to accept catastrophic risk in a conference room; it happens gradually. A small exception here. A shortcut that worked fine last time. Before long the threshold has moved, and nobody can say exactly when. Second, patching the proximate cause is not the same as fixing the system: correct the .npmignore file, sure, but if nobody asks how a 59.8-megabyte source map sailed through every gate in the release pipeline, the fix treats the symptom. Third, the gap between an institution's safety rhetoric and its operational behavior widens precisely when no one is measuring it.
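What would a gate, rather than a patch, look like? A minimal sketch, and emphatically not Anthropic's actual pipeline: a CI step that asks npm exactly what it is about to publish and refuses if the answer includes a debug artifact. It assumes a Node toolchain with npm 7 or later, whose `npm pack --dry-run --json` reports the would-be tarball contents; the script name and the forbidden-pattern list are illustrative.

```typescript
// scripts/verify-tarball.ts (hypothetical name)
// Release gate sketch: ask npm exactly what it would publish,
// then refuse if the tarball contains source maps or raw TypeScript.
import { execSync } from "node:child_process";

interface PackedFile {
  path: string;
  size: number; // bytes
}

// Debug artifacts that must never reach the public registry.
// Illustrative list: compiled packages should ship .js and .d.ts only.
const forbidden = (p: string): boolean =>
  p.endsWith(".map") || (p.endsWith(".ts") && !p.endsWith(".d.ts"));

// `npm pack --dry-run --json` prints a JSON array describing the
// tarball that `npm publish` would upload, without creating it.
const output = execSync("npm pack --dry-run --json", { encoding: "utf8" });
const report: { files: PackedFile[] }[] = JSON.parse(output);
const files = report[0]?.files ?? [];

const offending = files.filter((f) => forbidden(f.path));
if (offending.length > 0) {
  console.error("Refusing to publish; forbidden artifacts in tarball:");
  for (const f of offending) {
    console.error(`  ${f.path} (${(f.size / 1_048_576).toFixed(1)} MB)`);
  }
  process.exit(1); // fail the pipeline, block the publish
}
console.log(`Tarball clean: ${files.length} files checked.`);
```

Wire a check like this into a `prepublishOnly` script in package.json and it runs on every publish, human or automated. The design point is Vaughan's: the gate has to live in the pipeline, not in anyone's memory of the last incident.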

The Pentagon is watching.

Context sharpens the damage. Anthropic is locked in a tense standoff with the Department of Defense over Claude's safety guardrails, a confrontation in which the company's credibility as a disciplined, security-conscious operator is the core asset under negotiation. Pentagon officials must now reconcile two versions of Anthropic: the company that lectures the Defense Department on responsible AI deployment, and the company that cannot prevent its build pipeline from publishing proprietary source code to a public registry. Twice.

Not a peripheral embarrassment. A strategic liability. A company seeking defense contracts must demonstrate it can secure its own infrastructure before credibly promising to secure anyone else's. The distance between Anthropic's safety pitch and its operational reality does not shrink with repetition. It compounds.

"Rolling out measures" is not accountability.

What makes this episode troubling is not the mistake itself. Mistakes happen. Organizations recover. The concern is the response.

Anthropic told CNBC it is "rolling out measures to prevent this from happening again." No specifics. No timeline. No acknowledgment that identical measures apparently failed thirteen months earlier. For a company whose entire value proposition is rigor, vagueness functions as a confession.

Chinese laboratories that ran 16 million fraudulent API exchanges to extract Claude's reasoning patterns now possess the production harness those API calls could never reach. Western competitors face litigation risk if they study the code too closely. Beijing's labs face no such constraint. The asymmetry is the actual cost of a misconfigured .npmignore file.

Trust is not a tagline.

I admire what Anthropic has built. The engineering in that leaked code is genuinely impressive, and I mean that. Most of Anthropic's competitors ship thinner products and talk louder about them. None of that makes the pattern forgivable. A CMS left open to the public internet. Source code shipped to a public registry, twice, through the same vector. A response that promises future measures for a failure mode already patched once before.

An organization that normalizes deviance does not correct itself through good intentions. It corrects itself through structural change, the kind that costs money, slows releases, and inconveniences engineers. If Anthropic's next npm publish still contains a .map file, no white paper on existential risk will matter. The Pentagon will notice. So will the investors pricing Anthropic's IPO.

Safety is daily discipline. When it curdles into brand strategy, someone ships a .map file to a public registry. Twice.

Opinion
Marcus Schuler

San Francisco

Tech translator with German roots who fled to Silicon Valley chaos. Decodes startup noise from San Francisco. Launched implicator.ai to slice through AI's daily madness—crisp, clear, with Teutonic precision and sarcasm. E-Mail: [email protected]