On March 19, Cursor launched Composer 2 and called it an "in-house" model. The AI code editor, valued at $29.3 billion, published benchmarks, claimed superiority over Claude Opus 4.6, and positioned the release as proof of proprietary engineering. Twenty-four hours later, a developer intercepted an API response and found the model ID: kimi-k2p5-rl-0317-s515-fast. The "proprietary" model was Kimi K2.5, built by Moonshot AI, a Beijing company backed by Alibaba and Tencent.
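The check that exposed it was trivial. Intercept the traffic, read the response body, look at the model field. A minimal sketch of that inspection in Python, assuming a captured JSON payload; the field names and values here are illustrative, not Cursor's actual schema:

```python
import json

# Hypothetical captured response body from an intercepted completion call.
# Field names are illustrative; real payloads vary by vendor.
raw = '{"id": "cmpl-123", "model": "kimi-k2p5-rl-0317-s515-fast", "choices": []}'

payload = json.loads(raw)
model_id = payload.get("model", "<missing>")
print(f"Upstream model ID: {model_id}")
# A product that claims an in-house model should not surface
# a third-party identifier here.
```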
Call it a "mistake in attribution." Better: call it concealment by omission.
Disclosure is not a courtesy.
Cursor's co-founder, confronted with the evidence, acknowledged that omitting the Kimi base was a "mistake" and promised to be "upfront about it" with the next model. Moonshot's official account, pivoting from accusation to congratulation within hours, confirmed an "authorized commercial partnership." The choreography answered nothing.
A partnership is not the issue. Cursor's paying users were routing proprietary source code through a model whose origin was hidden, whose maker has been flagged by the U.S. Department of Commerce as evidence of China's "growing AI depth," and whose company Anthropic named in a distillation report alleging 16 million fraudulent prompts siphoned from Claude.
OpenAI sent its own letter to U.S. lawmakers making similar accusations against DeepSeek. Google's threat intelligence arm warned of rising distillation attacks on Gemini. Three of the four largest American AI companies have publicly accused Chinese labs of extracting capabilities from their proprietary models.
And Cursor quietly built its flagship product on one of the accused.
This was not the first time. In November 2025, Cursor shipped Composer 1. The community discovered its tokenizer was identical to DeepSeek's. The model occasionally produced Chinese output during inference. Cursor offered no comment. You'd think once would be enough to fix the disclosure problem.
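A tokenizer comparison is equally mundane. One way such a match could be verified, sketched under the assumption that both models ship Hugging Face-style tokenizer.json files (the file paths below are placeholders):

```python
import hashlib
import json

def vocab_fingerprint(path: str) -> str:
    """Hash a tokenizer's vocabulary so two files can be compared
    on content rather than formatting."""
    with open(path, encoding="utf-8") as f:
        tok = json.load(f)
    vocab = tok["model"]["vocab"]  # token -> id mapping in HF tokenizer.json
    canonical = json.dumps(vocab, sort_keys=True, ensure_ascii=False)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Placeholder paths; the real comparison would use each model's shipped files.
a = vocab_fingerprint("composer1/tokenizer.json")
b = vocab_fingerprint("deepseek/tokenizer.json")
print("identical vocabulary" if a == b else "different vocabularies")
```

Identical vocabularies do not prove a shared base model on their own, but combined with Chinese output leaking into inference, the community did not need proof beyond reasonable doubt. It needed a straight answer. It got silence.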
Twice in four months. A pattern, not a slip.
Provenance is a security question.
The licensing crowd will frame this as an attribution lapse, a technicality resolved over email. Strip away that framing and the architecture is plain: a company earning $2 billion in annual revenue took a model from a Chinese AI lab, wrapped it in proprietary branding, and presented it to enterprise customers who had no way to assess the jurisdictional risk of the technology processing their code.
For companies in regulated industries, the gap is not academic. Banks have data residency rules. So do hospitals and defense contractors. Each restricts what sensitive information can flow through technology tied to foreign jurisdictions. Compliance requires knowledge. When the vendor erases the model's origin, there is nothing left to audit.
The omission was the weapon.
Consider Crypto AG. From its offices in Zug, the Swiss firm sold encryption hardware to more than a hundred governments across four decades. The machines worked. The encryption was real. What buyers did not know, until a 2020 investigation by the Washington Post and ZDF revealed it, was that Crypto AG was secretly owned by the CIA and the German BND. The systems were designed to be breakable.
The betrayal was not that the machines failed. It was that concealed ownership eliminated the buyer's ability to assess risk. Every government that relied on Crypto AG hardware made security decisions on incomplete information. The concealment was the point.
When the origin of a tool is hidden, the user's risk calculus collapses into fiction.
Disclosure is the floor, not the ceiling.
In December 2025, the Center for AI Standards and Innovation at the U.S. Department of Commerce published its evaluation of Moonshot's Kimi K2 model. CAISI, which collaborates with OpenAI and Anthropic on AI safety, singled out Moonshot as evidence of the "growing depth" of China's AI industry and flagged national security risks.
From this follow three observations, each uncomfortable for Cursor's position. First: a model whose maker was flagged for national security risk is not a model you conceal inside a branded product.
Second: performance and provenance answer different questions. Kimi K2.5 may be the "strongest base," as Cursor's co-founder claimed. That does not make it the safest choice for processing proprietary enterprise code. Developers should make that judgment. Cursor made it for them.
And third: satisfying an open-source license is not the same as earning a security clearance. Attribution tells the licensor who used the code. It tells the customer nothing about where sensitive data actually travels.
Convenience is not a strategy.
Eighty percent of American startups that use open models now rely on Chinese ones, according to Andreessen Horowitz general partner Martin Casado. Switching costs are minimal. Performance is competitive. Price is right. Disclosure, apparently, optional.
But this is not a market functioning well. It is a market sleepwalking into dependency. Developers choosing these models make performance decisions. They are not making security decisions, because the information required for a security decision has been withheld.
Picture it. Late 2026. A defense contractor discovers that its AI-assisted code review pipeline routed classified logic through a foundation model built in Beijing. The vendor's marketing said "in-house." The model ID said otherwise. The data sovereignty audit missed it because there was nothing to find. No label, no warning, no trace. The breach was not a hack. It was a brand name.
Trust is not a feature you patch.
Cursor promised to be "upfront" next time. A company that conceals provenance once, then twice, then pledges reform is not a company that made a mistake. It is a company that tested a boundary and found it soft.
The AI industry builds products that sit between developers and their most sensitive intellectual property. Code editors, copilots, agent frameworks route proprietary logic through foundation models that developers trust by default. That default trust requires one condition. You must know what you are trusting.
Concealment is not a bug. It is a choice.