Cursor's $29 Billion Bet on a Bottleneck It May Be Creating

Cursor just paid a premium for a code review startup to fix what it calls the new bottleneck in software development. But a July 2025 study using Cursor's own tools found AI made experienced developers 19% slower. The math gets uncomfortable from there.

In July 2025, the nonprofit research organization METR published a randomized controlled trial measuring how AI coding tools affect experienced developers. The study used Cursor Pro with Claude 3.5/3.7 Sonnet, the same tools now central to Cursor's billion-dollar business. Sixteen developers completed 246 tasks on repositories they had worked on for years. Those using AI took 19% longer to finish. The developers believed they were 20% faster.

Five months later, Cursor announced it would acquire Graphite, a code review startup, for an undisclosed sum that Axios reports came in "way over" Graphite's $290 million valuation from March. The strategic rationale: code review has become the bottleneck now that AI makes writing code faster.

The premise assumes AI does make writing code faster. The METR data, collected using Cursor's own product, points elsewhere.

The Breakdown

• Cursor acquires Graphite at "way over" $290M to address code review bottlenecks it claims AI productivity gains created

• July 2025 METR study using Cursor Pro found experienced developers took 19% longer with AI, despite believing they were 20% faster

• Faros AI data from 10,000 developers shows AI-generated code is 154% larger per pull request with 9% more bugs, shifting strain to review

• Fourth acquisition in twelve months as Cursor races to build moats before foundation model providers like Anthropic compete directly


The perception gap

Cursor CEO Michael Truell told Fortune that AI has made it "much faster to write production code." He framed the Graphite acquisition as addressing "an emerging bottleneck in software development" created by these productivity gains.

The METR study tested this claim directly. Researchers recruited contributors to large open-source projects, averaging 22,000+ stars and over one million lines of code. These were not novices learning new codebases. They were experienced maintainers working on familiar repositories, completing tasks drawn from their own backlogs. The conditions favored AI assistance.

Developers finished tasks faster without AI, and the gap was statistically significant. Expert economists surveyed before the study predicted AI would make developers 39% faster. Machine learning researchers predicted 38% faster. The measured result: 19% slower.

Developers in the study attributed the slowdown to several factors. They spent more time prompting AI and waiting for responses than they would have spent coding directly. AI-generated code required additional review and testing. One participant reported wasting "at least an hour" attempting an AI-assisted solution before reverting all changes and implementing manually.

The study's authors note that AI capabilities evolve rapidly and that results might differ even three months later. But the July 2025 findings used frontier models. Cursor Pro was the primary tool. Claude 3.5 and 3.7 Sonnet powered the assistance. These are not outdated systems.

The bottleneck that AI creates

In August 2025, Faros AI published research based on telemetry from over 10,000 developers across 1,255 teams. The findings complicate the productivity narrative further.

Developers using AI coding assistants produced more code. They parallelized more workstreams. They completed more tasks. By individual metrics, output increased.

But the code they produced was 154% larger per pull request on average. Bug rates rose 9% per developer. The additional volume and complexity shifted pressure downstream, to the review and testing systems that must process the output.

The Faros report identified a pattern: "AI-augmented code is getting bigger and buggier, and shifting the bottleneck to review."

This finding reframes the Cursor-Graphite deal. If AI coding tools create review bottlenecks by generating more code that requires more scrutiny, then acquiring a code review company addresses a problem that Cursor's core product contributes to. The strategic logic holds, but the underlying dynamic is less flattering than "AI made writing code so fast that review can't keep up." The more precise framing: AI may be generating volume that strains quality controls.

Stack Overflow's 2025 Developer Survey tracked the shift. In 2023 and 2024, more than 70% of developers viewed AI tools favorably. This year: 60%. Adoption keeps climbing: 84% of developers now use or plan to use AI tools. But the enthusiasm has cooled.

What Cursor claims versus what studies show

Truell told Fortune that Salesforce experienced a 30% productivity uplift from using Cursor. Microsoft CEO Satya Nadella says 30% of code in the company's repositories now comes from AI. Google CEO Sundar Pichai puts their figure at 25%.

These numbers describe adoption and output volume, not productivity. More code generated is not the same as faster delivery of working software. The Faros data shows this distinction clearly: team-level output metrics improved while company-level throughput, DORA metrics, and quality KPIs showed no significant gains. Downstream bottlenecks absorbed the value.

Bain & Company's September assessment of real-world AI coding deployments described savings as "unremarkable." MIT Technology Review, reporting on the METR study two days ago, interviewed developer Mike Judge, who ran his own six-week experiment after seeing the research. He had estimated AI provided a 25% speedup. His measured results aligned with METR's findings.

The pattern across independent studies is consistent. Developers believe AI helps more than measurements confirm. Corporate deployment figures emphasize metrics that flatter adoption. Controlled trials show smaller gains or net slowdowns for experienced developers on complex tasks.

Foundation model dependency

Both Cursor and Graphite run on AI models they do not own. Cursor uses Anthropic's Claude and offers access to models from other providers. Graphite received funding from Anthology, an investment fund backed by Anthropic.

Graphite CEO Merrill Lutsky downplayed competitive risk to Fortune. "The larger base-model companies are trying to compete across many different verticals. Cursor is solely focused on how engineers build with AI, and that focus really sets them apart."

Focus does not eliminate dependency. If Anthropic builds code review features into Claude, or if OpenAI launches an integrated development environment, Cursor's differentiation reduces to interface design and workflow integrations. GitHub Copilot, backed by Microsoft and built on OpenAI's models, already competes in this market. Copilot's parent company controls the underlying models. Cursor pays licensing fees to potential competitors.

Truell acknowledged the structure without resolving it. "Our approach here is to use a combination of the best technology that partners have to offer and then technology that we develop ourselves." Cursor has invested in proprietary models. Whether those provide sufficient differentiation against a direct challenge from model providers remains untested.

Acquisition velocity

Graphite marks Cursor's fourth acquisition in roughly twelve months. The company bought AI coding assistant Supermaven in November 2024, acquired talent from enterprise startup Koala in July, and purchased recruiting strategy company Growth by Design more recently.

A company with $1 billion in annualized revenue and a $29.3 billion valuation has acquisition firepower. Four deals in twelve months also signals urgency. CodeRabbit reached a $550 million valuation in September. Greptile announced a $25 million Series A this fall. OpenAI and Anthropic continue expanding their developer tool ambitions.

Truell told Fortune the company has no additional deals planned, with Cursor "focused on building out product features rather than eyeing an IPO."

Investor overlap

Accel invested in both Cursor and Graphite. So did Andreessen Horowitz. Anthropic-backed Anthology funded Graphite at the seed stage. Neo, Ali Partovi's early-stage venture firm, backed Graphite's seed round and connected the founders through its Neo Scholars program.

When the same venture firms hold stakes in both acquirer and target, the premium Cursor paid went partly to funds that already own Cursor shares. The transaction consolidates a portfolio as much as it consolidates a market.

What changes

For developers using Graphite, the immediate answer: not much. Lutsky emphasized that "Graphite's product and brand aren't going anywhere." Integration with Cursor comes gradually through 2026.

The companies plan to merge Graphite's AI Reviewer with Cursor's Bugbot. Stacked pull requests, Graphite's feature for working on multiple dependent changes simultaneously, will gain deeper integration with Cursor's code generation.

For independent code review tools, the deal accelerates pressure. CodeRabbit and Greptile now compete against a product backed by a $29.3 billion parent.

For engineers, the productivity question remains open. The July 2025 METR study found developers believed they were faster when measurements showed them slower. The August 2025 Faros research found AI adoption shifting bottlenecks to code review while increasing bug rates. If AI coding tools create the problems that AI code review tools then address, the value proposition becomes circular.

The timeline

Truell made a projection to Fortune: "We think that this is the decade in which coding will be automated, and the way in which professional teams build and deliver software will change across the entire software development life cycle."

That claim assumes productivity gains are real and will compound. It assumes foundation model providers will not vertically integrate. It assumes venture funding continues valuing AI companies at current multiples long enough for Cursor to reach profitability that justifies $29.3 billion.

Cursor has built a product developers adopt, generating measured revenue at scale. The $1 billion ARR figure is real.

But the gap between "developers adopt this tool" and "this tool makes developers measurably more productive" shows up in every independent study published this year. Cursor paid a premium for a code review company to address a bottleneck. The Faros data suggests AI coding tools help create that bottleneck. Whether that makes the acquisition strategically necessary or strategically confused depends on which problem Cursor believes it is solving.

❓ Frequently Asked Questions

Q: What are stacked pull requests, and why do they matter?

A: Stacked pull requests let developers submit code for review and immediately start on the next piece without waiting for approval. Traditional workflows force sequential waits. Graphite's approach lets engineers work on multiple dependent changes simultaneously, which becomes more valuable when AI tools generate larger volumes of code that need review.
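
The workflow can be sketched with plain git. The script below builds the branch structure a stack implies: each branch commits on top of the previous, still-unreviewed one, so each piece can be reviewed separately. Branch names, file contents, and the `git` helper are invented for illustration; Graphite's CLI automates this bookkeeping, but the underlying git shape is the same.

```python
# Illustrative sketch of the branch structure behind "stacked pull requests",
# driving plain git from Python. All names and contents are hypothetical.
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", cwd=repo)
git("config", "user.email", "dev@example.com", cwd=repo)
git("config", "user.name", "Dev", cwd=repo)

# Base commit on the default branch.
with open(os.path.join(repo, "app.py"), "w") as f:
    f.write("print('v1')\n")
git("add", ".", cwd=repo)
git("commit", "-m", "initial app", cwd=repo)
base = git("rev-parse", "HEAD", cwd=repo).strip()

# Part 1 of the stack: a refactor. In a stacked workflow this branch is
# pushed and opened as PR #1.
git("checkout", "-b", "refactor", cwd=repo)
with open(os.path.join(repo, "app.py"), "w") as f:
    f.write("print('refactored')\n")
git("commit", "-am", "refactor app", cwd=repo)

# Part 2: a feature that depends on the refactor, branched from the still
# unreviewed refactor branch. It becomes PR #2, targeting PR #1 rather than
# the default branch, so work continues without waiting for the first review.
git("checkout", "-b", "feature", cwd=repo)
with open(os.path.join(repo, "feature.py"), "w") as f:
    f.write("print('feature')\n")
git("add", ".", cwd=repo)
git("commit", "-m", "add feature", cwd=repo)

# The stack: two commits on top of the base, each reviewable separately.
stack = git("log", "--oneline", f"{base}..HEAD", cwd=repo)
print(stack)
```

When PR #1 merges, the second branch is rebased onto the updated base and the stack shrinks by one; tools like Graphite automate that restacking step.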

Q: How reliable is the METR study? Could 16 developers be too small a sample?

A: The sample is small but the design is rigorous. METR used a randomized controlled trial with 246 tasks across repositories averaging 1 million+ lines of code. Participants had 5+ years of experience on their assigned projects. The 19% slowdown was statistically significant. Larger surveys from Stack Overflow and Faros AI show consistent patterns of declining trust and downstream bottlenecks.

Q: What are DORA metrics?

A: DORA metrics measure software delivery performance: deployment frequency, lead time for changes, change failure rate, and time to restore service. The Faros study found AI adoption improved individual output but showed no gains in DORA metrics at the company level. Teams coded faster; organizations didn't ship faster.
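
As a rough illustration, the first three of those metrics can be computed from a deployment log. The record format and sample data below are invented for this sketch, not taken from any real system:

```python
# Hypothetical sketch: computing deployment frequency, lead time for
# changes, and change failure rate from an invented deployment log.
from datetime import datetime
from statistics import median

# Each record: when the change was committed, when it reached production,
# and whether the deployment caused a failure in production.
deployments = [
    {"committed": datetime(2025, 8, 1, 9), "deployed": datetime(2025, 8, 1, 17), "failed": False},
    {"committed": datetime(2025, 8, 2, 10), "deployed": datetime(2025, 8, 4, 12), "failed": True},
    {"committed": datetime(2025, 8, 5, 14), "deployed": datetime(2025, 8, 6, 9), "failed": False},
    {"committed": datetime(2025, 8, 7, 8), "deployed": datetime(2025, 8, 7, 18), "failed": False},
]

window_days = 7  # observation window covering the log above

# Deployment frequency: deployments per day over the window.
deploy_frequency = len(deployments) / window_days

# Lead time for changes: median hours from commit to production.
lead_hours = [
    (d["deployed"] - d["committed"]).total_seconds() / 3600 for d in deployments
]
median_lead = median(lead_hours)

# Change failure rate: share of deployments that caused a failure.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"{deploy_frequency:.2f} deploys/day, "
      f"median lead {median_lead:.1f}h, "
      f"{failure_rate:.0%} change failure rate")
```

The Faros finding is that numbers like individual commit counts can rise while aggregates like these stay flat: more code enters the pipeline, but it does not reach production faster or more safely.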

Q: Will Graphite customers need to switch to Cursor?

A: Not immediately. Graphite will continue as an independent product with the same team. Cursor plans gradual integration through 2026, starting with connecting local development to pull requests. The companies say Graphite's brand and standalone functionality will remain. Customers at Shopify, Snowflake, Figma, and Ramp can keep using the tool as before.

Q: How does Cursor make money if it doesn't own the AI models?

A: Cursor charges for its code editor interface and workflow tools while paying licensing fees to Anthropic and other model providers. The company has invested in proprietary models to supplement licensed ones, but the core AI still comes from partners. At $1 billion ARR, margins depend on how much Cursor pays for model access versus what it charges users.
