Anthropic's attempt to contain its Claude Code source leak backfired Wednesday when a copyright takedown notice accidentally disabled more than 8,100 GitHub repositories, including legitimate forks of the company's own public code, TechCrunch reported. The company retracted most of the notices within hours, narrowing the takedown to one repository and 96 forks. By then, a Python rewrite of the leaked code had already become the fastest-growing repository in GitHub history.

The takedown that overshot

Boris Cherny, Anthropic's head of Claude Code, acknowledged the overbroad removal was unintentional. The targeted repository sat inside a fork network connected to Anthropic's own public Claude Code repo. When GitHub processed the Digital Millennium Copyright Act notice, it cascaded across the entire network, blocking developers who had nothing to do with the leaked source code.

"We retracted the notice for everything except the one repo we named, and GitHub has restored access to the affected forks," an Anthropic spokesperson told TechCrunch.

Developers whose legitimate projects got caught in the blast were not quiet about it. Blocked repositories. Disrupted workflows. Angry posts aimed at a company that markets itself on responsible AI. The cleanup made everything worse. Anthropic had already shipped 512,000 lines of Claude Code's source through a packaging error on Tuesday. Now it was blocking its own users too.

The rewrite nobody can take down

Anthropic's lawyers were still filing notices when a developer named Sigrid Jin woke up at 4 a.m. in South Korea and started porting the core architecture to Python. He pushed a project called claw-code to GitHub before sunrise. One hundred thousand stars in 24 hours. GitHub has never seen a repository climb that fast.

Developers argue copyright protects expression, not functionality, and that rewriting code from scratch in a different language puts the result beyond the reach of a DMCA notice. The legal reality is more contested, since Jin clearly examined the original before porting it, but the argument has gained traction. More rewrites followed in Rust and Bash. The architectural blueprint Anthropic spent years building now exists in multiple languages, hosted on platforms that have publicly pledged to ignore takedown requests.

That's the tell. The takedown bought Anthropic roughly one news cycle before the internet routed around it.

Anthropic's aggressive use of DMCA takedowns carried a specific irony. The company faces active copyright lawsuits from authors, publishers, and Universal Music Group over allegations it trained Claude on copyrighted material without permission. One such case ended in a $1.5 billion settlement last September, Business Insider reported.

Now the company leans on the same legal framework to protect its own code. The foundation may be thinner than expected.

Anthropic has publicly acknowledged that Claude Code is largely AI-generated. VentureBeat reported the figure at 90%, citing the company's own disclosures. Under the DC Circuit's March 2025 ruling in Thaler v. Perlmutter, works generated solely by AI cannot receive copyright protection. If courts determine that portions of Claude Code lack sufficient human authorship, Anthropic's copyright claims over those sections collapse. The Innovation Attorney, a legal analysis publication, called this "a genuine copyright gap." If you ship code written by your own AI, then invoke copyright to claw it back, the law may not cooperate.

Trade secret protection runs into its own wall. The code shipped through npm, a public distribution channel with zero access controls. Second time in 13 months. Courts evaluating whether Anthropic took "reasonable measures" to protect its secrets will weigh that record, though at this point the question feels almost academic.
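The article does not say how the packaging error actually happened. As a purely hypothetical sketch, one common way unintended files reach npm is an overly broad `files` entry in `package.json`, which can bundle original source alongside the intended build output; `npm pack --dry-run` lists exactly what a publish would ship without publishing anything:

```shell
# Hypothetical package.json excerpt (not Anthropic's actual config):
# globs like "src/**/*" would include original source files
# alongside the built bundle.
#
#   "files": ["dist", "src/**/*"]
#
# Audit before publishing: prints every file the tarball would
# contain, without actually publishing anything.
npm pack --dry-run
```

Pinning `files` to the build output directory alone (for example, `"files": ["dist"]`) is the usual guard against this class of leak.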

What the IPO cannot afford

Anthropic recently closed a funding round valuing the company at $380 billion ahead of a possible public offering this year. Claude Code alone generates an estimated $2.5 billion in annualized revenue, VentureBeat reported.

Gartner issued a same-day advisory calling Anthropic's cluster of March incidents (the source leak, a CMS breach that exposed 3,000 internal files, and repeated outages) "a systemic signal." The analyst firm recommended that enterprise customers demand operational maturity standards from AI coding tool vendors, including published SLAs and 30-day vendor-switch capability.

The full blueprint for one of the most commercially successful AI coding agents ever built is circulating on mirrors that have promised never to come down. Anthropic's single DMCA notice took down 8,100 repositories to stop it. The internet responded with a rewrite, a standing ovation, and a shrug.

Frequently Asked Questions

How many GitHub repositories did Anthropic's takedown affect?

The DMCA notice disabled approximately 8,100 repositories, but most were legitimate forks of Anthropic's own public Claude Code repository. The company retracted the notice and narrowed it to one repo and 96 forks.

What is claw-code?

A Python rewrite of Claude Code's core architecture created by developer Sigrid Jin. It became the fastest-growing repository in GitHub history, reaching 100,000 stars in 24 hours.

Can Anthropic use copyright to protect AI-generated code?

Legal analysts raise doubts. Under the Thaler v. Perlmutter ruling, works generated solely by AI, without human authorship, are ineligible for copyright protection. VentureBeat reported that Claude Code is 90% AI-generated, per Anthropic's own disclosures.

Did the Claude Code leak expose user data?

No. Anthropic confirmed that no customer data, credentials, or model weights were exposed. The leak contained the Claude Code harness, feature flags, and orchestration architecture.

How does this affect Anthropic's IPO plans?

Anthropic recently closed funding at a $380 billion valuation ahead of a possible public offering. Gartner issued a same-day advisory calling the March incidents a systemic signal and recommended enterprise customers demand operational maturity standards.

This FAQ was generated using AI and reviewed by an editor.

Maria Garcia
