The man who predicted the 2008 crash sat in a Google Doc with Anthropic's co-founder. Both admitted what they don't know. Burry sees stranded assets and accounting tricks. Clark can't prove his own tools improve productivity.
Michael Burry made his fortune betting against consensus. Back in 2005, while Wall Street was packaging garbage mortgages into triple-A securities, Burry was reading the fine print. What he found was fraud dressed up as financial innovation. He bet against the housing market. His investors nearly fired him. His fund's lawyers sent angry letters. Then 2008 happened, and Burry walked away with $100 million in personal profit.
Burry is now watching the AI buildout with the same squint he brought to mortgage bonds. The numbers bother him. This week, he entered a Google Doc with Jack Clark, co-founder of Anthropic, one of the companies at the center of that buildout. No conference room. No cameras. Just cursors blinking in silence, text appearing without voice, the long pauses where you see someone typing and then nothing appears. Dwarkesh Patel, who has interviewed everyone from Mark Zuckerberg to Tyler Cowen about where this technology is headed, joined as a third voice. Patrick McKenzie moderated.
The result is the most substantive debate about AI economics published in years. Not because anyone shouted. No one could. Because both sides admitted what they don't know.
The Breakdown
• METR study shows AI coding tools make developers 20% slower; Anthropic admits it can't prove its own tools boost productivity
• Nvidia selling $400 billion in chips while end-user AI revenue sits below $100 billion, a 4:1 infrastructure-to-application gap
• Burry's escalator argument: if all competitors adopt AI, none gains advantage, and value flows to customers, not investors
• Software giants becoming hardware companies, with ROIC falling fast and private credit financing at risk of stranded assets
If you've used Copilot or Claude Code, you know the feeling: the code flows faster, but you spend twice as long hunting down hallucinations. The METR study put a number on that frustration.
Researchers tracked experienced developers working in codebases they knew well. The result: a roughly 20% increase in time to merge pull requests when using AI coding tools. Not faster. Slower.
Clark opened with a revealing confession. Anthropic surveyed its own developers about AI-assisted coding. Sixty percent reported using Claude in their work. Those who used it claimed a 50% productivity boost.
"The data is conflicting and sparse," Clark said. "We need better data and, specifically, instrumentation for developers inside and outside the AI labs to see what is going on."
Patel pressed the point. "Not to rabbit hole on this, but the self-reported productivity being way higher than—and potentially even in the opposite direction of—true productivity is predicted by the METR study."
Clark agreed. Anthropic is now working on instrumentation to figure out what's actually happening. "What people self-report may end up being different from reality."
This admission matters. Anthropic makes Claude. Anthropic can't prove Claude makes its own engineers faster. Hundreds of billions are being spent on the assumption that it will.
Drive past any hyperscaler campus and you see cranes. Server racks stacked to the ceiling. Cooling systems the size of aircraft hangars. The buildout looks like triumph. Then you check the revenue numbers. The story falls apart.
Burry's critique centers on a ratio. Nvidia is selling $400 billion in chips. End-user AI revenue sits below $100 billion. The infrastructure spending doesn't just exceed the application revenue. It dwarfs it.
"At the end of the day, AI has to be purchased by someone," Burry wrote. "Someone out there pays for a good or service. That is GDP. And that spending grows at GDP rates, 2% to 4%."
The entire SaaS software market—every subscription, every corporate license, every creative tool—runs less than $1 trillion annually. "This is why I keep coming back to the infrastructure-to-application ratio—Nvidia selling $400 billion of chips for less than $100 billion in end-user AI product revenue."
The gap is stark. Companies are building data centers on the assumption that demand will show up. But demand for what? Chatbots? Coding assistants? The killer app that justifies trillion-dollar infrastructure is missing in action.
Burry has seen this pattern before. "The capital expenditure spending cycle is faith-based and FOMO-based. No one is pointing to numbers that work. Yet."
Patel pushed back with the standard response: isn't this the "lump of labor" fallacy? The assumption that there's a fixed amount of software to be written, a fixed amount of work to automate?
Burry's response was measured. "New markets do emerge, but they develop slower than acutely incentivized futurists believe. This has always been true."
Burry brought up Buffett.
Back in the late 1960s, Buffett owned a department store in Baltimore. The store across the street put in an escalator. Buffett had to put one in too. Both stores wrote the check. Neither gained a thing. The customers rode up and down for free. The owners ate the cost.
"That is how most AI implementation will play out," Burry wrote. "Most will not benefit, because their competitors will benefit to the same extent, and neither will have a competitive advantage because of it."
This is the core problem with the "AI will transform everything" thesis. Even if the technology works exactly as promised, the value may flow entirely to customers, not to the companies building or deploying it.
Patel saw the logic. "If it does turn out to be the case that (1) nobody across the AI stack can make crazy profits and (2) AI still turns out to be a big deal, then obviously the value accrues to the customer. Which, to my ears, sounds great."
Great for society. Terrible for investors betting on AI stocks at current valuations.
The competitive dynamics in AI support this concern. Patel noted that leads in AI have proven surprisingly non-durable. Google was far ahead in 2017. OpenAI seemed untouchable two years ago. Now the major labs rotate around the podium every few months. Some force—talent poaching, reverse engineering, the sheer speed of iteration—neutralizes any runaway advantage.
If no one can build a moat, no one can charge monopoly prices. The escalator problem scales to the entire industry.
Burry's most technical argument concerns return on invested capital. ROIC measures how efficiently a company converts investment into profit. Software companies historically enjoyed exceptional ROIC because their products cost almost nothing to replicate. Build the code once, sell it a million times.
That model is dying. In his interview with Patel, Microsoft CEO Satya Nadella acknowledged that all the big software companies are now hardware companies. They're building data centers, buying chips, negotiating power contracts. Capital intensity is rising.
"ROIC was very high at these software companies," Burry wrote. "Now that they are becoming capital-intensive hardware companies, ROIC is sure to fall, and this will pressure shares in the long run. Nothing predicts long-term trends in the markets like the direction of ROIC—up or down, and at what speed. ROIC is heading down really fast at these companies now."
Nadella himself acknowledged the risk. He told Patel he's looking for software to maintain ROIC through a heavy capital expenditure cycle. He sounded less like a software visionary and more like a construction foreman worried about cement prices. There's a resignation in that framing: the high-margin days are over. Burry's assessment: "I cannot see it, and even to Nadella, it sounds like only a hope."
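The ROIC mechanic is simple arithmetic, and a back-of-the-envelope sketch makes it concrete. The figures below are hypothetical, chosen only to illustrate the dynamic Burry describes: hold operating profit flat while invested capital balloons, and ROIC collapses even though nothing about the business "failed."

```python
def roic(nopat: float, invested_capital: float) -> float:
    """Return on invested capital: after-tax operating profit
    divided by the capital tied up in the business."""
    return nopat / invested_capital

# Hypothetical asset-light software era:
# $20B of after-tax operating profit on $50B of invested capital.
software_era = roic(nopat=20, invested_capital=50)

# Same profit after a capex-heavy pivot: data centers and chips
# push invested capital to $200B while profit is unchanged.
hardware_era = roic(nopat=20, invested_capital=200)

print(f"asset-light ROIC: {software_era:.0%}")   # 40%
print(f"capex-heavy ROIC: {hardware_era:.0%}")   # 10%
```

The numbers are invented, but the direction is the point: for ROIC to hold through the buildout, profit has to grow as fast as the capital base, which is exactly what Nadella says he is "looking for" and what Burry says he cannot see.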
The financing structure adds another layer of concern. Private credit is funding much of this buildout—a murky market with a dangerous duration mismatch. The loans are being securitized as if the assets last two decades. The hyperscalers have exit clauses every four to five years. Chips cycle annually. Data centers built today may not handle the chips of 2028.
"This is just asking for trouble," Burry wrote. "Stranded assets."
He flagged another accounting concern: construction in progress. CIP represents capital equipment not yet "placed into service." It doesn't depreciate. It doesn't count against income. It can sit on the balance sheet indefinitely.
"I imagine a lot of stranded assets will be hidden in CIP to protect income, and I think we are already seeing that potential."
Nadella's own words support the concern. He told Patel he backed off some projects and slowed down the buildout because he didn't want to get stuck with four or five years of depreciation on one generation of chips. "That is a bit of a smoking-gun statement," Burry observed.
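Burry's stranded-asset worry reduces to a depreciation schedule. A hypothetical sketch, with invented numbers: straight-line depreciation of a $100B chip fleet under the two useful-life assumptions in play.

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense each year of useful life."""
    return cost / useful_life_years

fleet_cost = 100  # hypothetical $100B of accelerators

# Books assume a long life: a small annual hit to earnings.
six_year = annual_depreciation(fleet_cost, 6)

# If chips actually cycle in three years, the same fleet
# produces double the annual earnings drag.
three_year = annual_depreciation(fleet_cost, 3)

print(f"6-year life: ${six_year:.1f}B/yr expense")   # ~$16.7B
print(f"3-year life: ${three_year:.1f}B/yr expense")  # ~$33.3B

# Equipment parked in "construction in progress" depreciates at zero,
# which is why Burry expects stranded assets to hide there.
```

Shorten the assumed life and reported earnings fall mechanically; keep the long assumption while hardware obsolesces early, and the loss arrives all at once as a write-off. That is the fork Burry returns to in his closing prediction.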
Clark and Patel offered a counterargument that deserves serious consideration.
"Something we say often to policymakers at Anthropic is 'This is the worst it will ever be,'" Clark wrote. "And it's really hard to convey to them just how important that ends up being."
The current AI systems are the floor, not the ceiling. Every future iteration will be more capable. The baseline for what's possible keeps rising. If you're calibrating based on what LLMs could do in November, you're already wrong about what they can do in January.
Patel extended the point. "If you showed me Gemini 3 or Claude 4.5 Opus in 2017, I would have thought it would put half of white-collar workers out of their jobs."
It hasn't. The labor market impact of AI "requires spreadsheet microscopes to see, if there is indeed any." Models pass the Turing test, solve open-ended coding problems, reason through complex mathematics. By every common-sense definition of artificial general intelligence, we're already there. And yet unemployment sits at historic lows.
This could mean the technology is overhyped. Or it could mean the integration of transformative technology takes longer than enthusiasts expect. The steam engine existed for decades before it reshaped manufacturing. Electricity took half a century to transform factories.
Clark sees another possibility: the productivity gains are real but unevenly distributed. Coding adoption leads because code can be validated. You run the program, it works or it doesn't. Knowledge work lacks that closed loop.
"Now that friction has been taken away, you're seeing greater uptake," Clark wrote. "Therefore, I expect we're about to see what happened to coders happen to knowledge workers more broadly."
The conversation ended with each participant naming what would shift their view.
For Burry, it would take "autonomous AI agents displacing millions of jobs at the biggest companies." Or application-layer revenue hitting $500 billion or more "because of a flood of killer apps."
Neither has happened. Neither appears imminent.
For Clark, the surprise would be "scaling hits a wall." The entire infrastructure buildout assumes scaling continues to produce capability improvements. If that stops working, the investment thesis collapses.
Patel offered a specific test: 2026 cumulative AI lab revenues below $40 billion or above $100 billion. Either would signal that things have moved faster or slower than expected.
The market research firm Straits Research estimates the AI productivity tools market at $8.9 billion in 2025, growing to $35 billion by 2034. Those numbers sit uncomfortably against the hundreds of billions being spent on infrastructure.
Burry's prediction is characteristically bleak: "We will see one of two things: either Nvidia's chips last five to six years and people therefore need less of them, or they last two to three years and the hyperscalers' earnings will collapse and private credit will get destroyed."
The man who called the 2008 crash sees stranded assets and accounting gimmicks. The man building the AI models admits he can't prove they improve productivity. The podcaster who talks to everyone acknowledges that even superhuman AI hasn't moved the employment needle.
And all three of them use Claude to make their charts.
Q: What exactly did the METR productivity study find?
A: METR tracked experienced developers working in codebases they knew well. Developers using AI coding tools took roughly 20% longer to merge pull requests than those working without AI assistance. This contradicts self-reported productivity gains, which typically run 30-50% higher. The study suggests developers may feel more productive while actually working slower.
Q: What is ROIC and why does Burry focus on it?
A: Return on invested capital measures how efficiently a company converts investment into profit. Software companies historically had exceptional ROIC because code costs almost nothing to copy. As tech giants spend hundreds of billions on data centers and chips, they're becoming hardware companies with lower ROIC. Burry calls this metric the best predictor of long-term stock performance.
Q: What is "construction in progress" and why is it a concern?
A: Construction in progress (CIP) is an accounting category for capital equipment not yet operational. CIP doesn't depreciate or reduce reported income. Burry warns that companies could park obsolete or stranded AI infrastructure in CIP indefinitely, hiding losses from investors. The equipment sits on balance sheets at full value even if it's already outdated.
Q: How big is the gap between AI infrastructure spending and AI revenue?
A: Nvidia sold roughly $400 billion in AI chips while end-user AI product revenue sits below $100 billion. The AI productivity tools market totaled $8.9 billion in 2025. The entire global SaaS market runs under $1 trillion annually. Burry's core argument: infrastructure spending has outpaced demand by a factor of four or more.
Q: What evidence would prove Burry wrong about AI being a bubble?
A: Burry named two conditions: autonomous AI agents displacing millions of jobs at major companies, or application-layer revenue exceeding $500 billion through killer apps. Dwarkesh Patel offered a specific benchmark: if 2026 AI lab revenues exceed $100 billion, it would signal faster adoption than expected. Current trajectories don't support either threshold.


