Meet Boris Cherny, who built Claude Code at Anthropic. His viral workflow runs 10 AI agents in parallel. He hasn't written code by hand in months.
Boris Cherny broke both his arms in a motorcycle accident about a decade ago. Couldn't code for a month. When he came back, he started chasing languages that took fewer keystrokes. CoffeeScript. Haskell. TypeScript came later. He wrote O'Reilly's Programming TypeScript book and started what he describes as the world's biggest TypeScript meetup at the time. None of this had anything to do with AI.
Now Cherny leads Claude Code at Anthropic, a terminal-based coding agent that has become the most discussed developer tool since GitHub Copilot. Jensen Huang called it "incredible." A senior Google engineer claimed it recreated a year's worth of work in an hour. Microsoft engineers use it internally even though the company sells Copilot; Redmond has gone so far as to encourage even non-developers to adopt a competitor's product. Cherny says that for the last two months, Claude Code has written 100% of his code. In one 30-day stretch, he claims, he landed 259 pull requests and 497 commits, every line generated by Claude Code running on Opus 4.5.
The engineer who once broke his arms learning to code differently is now teaching machines to code for him. And he's turning every other engineer into a fleet commander whether they wanted the promotion or not.
Key Takeaways
• Cherny claims Claude Code now writes 100% of his production code, with 259 PRs landed in 30 days
• His viral workflow runs 5-10 Claude instances in parallel, treating coding like fleet command
• Anthropic reports 70% productivity gains per engineer since Claude Code adoption
• The shift raises questions about what skills matter when AI handles the typing
Cherny's path to Anthropic wound through a decade at Meta, where he learned to ship products by testing them on cafeteria workers.
Working on Facebook Groups in the early days, his team lacked a user researcher. So Cherny walked to the cafeteria at lunch, showed new features to the staff, and watched them struggle to find buttons. He taught his team to do the same. "You could see where they struggled and what they got," he explained in a recent interview. "This was an observational user research study."
The scrappiness paid off. Cherny rose from an under-leveled mid-level engineer to IC8, Meta's principal engineer designation. Along the way he built Undux, which briefly became Facebook's most popular state management framework. One story captures the pattern. A senior engineer had championed a data model migration. Cherny reverse-engineered it, found the original approach was actually right, then talked that same engineer into undoing the whole thing. Disagree, execute, course-correct. That became his signature.
Instagram's codebase was written in Python. Should have been Hack, Facebook's optimized server language. Cherny saw the problem. He didn't write a proposal. He bought beer for the engineers who'd been around long enough to think migration was impossible, learned their names, heard their war stories. Then he recruited them. "You have to build trust," Cherny said. "I had to get to know them as people."
The migration is still running today. Meta lost an IC8 who understood that the hardest engineering problems are usually people problems. That's the kind of departure a company feels for years.
Strategic AI news from San Francisco. No hype, no "AI will change everything" throat clearing. Just what moved, who won, and why it matters. Daily at 6am PST.
No spam. Unsubscribe anytime.
When Cherny joined Anthropic, Claude Code wasn't supposed to be a product. He built it as an internal tool, expecting engineers to use tab-completion like everyone else.
His manager Ben Mann pushed back. Mann had been at Anthropic since the beginning and understood scaling laws, the predictable curve of model improvement. "Don't build for the model of today," Mann told him. "Build for the model six months from now."
Cherny listened. For months, Claude Code barely worked. He used it for maybe 10% of his coding. The model wasn't capable enough. Then Anthropic released Sonnet and Opus 4, and suddenly the product clicked. "We saw this in the usage data, and I saw this in my own coding," Cherny said. "I started to be able to use it for probably like half of my code."
Now the Claude Code team writes 80 to 90 percent of their code using Claude Code. Cherny claims productivity per engineer at Anthropic has grown nearly 70% since the tool's adoption, even as the company tripled in size. Data scientists who never touched a terminal installed Node.js to run the tool. Half of Anthropic's sales team uses it weekly, connecting Slack to spreadsheets and automating the tedious parts of their workflow.
The bet on a future model paid off in a present windfall.
Think about how you code. One problem, one file, one block of concentration. Maybe a second monitor with documentation. The maker's schedule, as Paul Graham called it, with its sacred uninterrupted hours.
Cherny threw that away.
In early January, he posted a thread on X describing his new workflow. The thread went viral, triggering coverage in VentureBeat, InfoQ, and tech forums across the web.
The approach is disarmingly simple. Cherny runs five Claude instances in parallel in his terminal, numbered one through five. He uses system notifications to know when one needs input. He starts additional sessions from his phone each morning, kicks off three or four agents before getting out of bed, then checks on them when he reaches his computer. Five more instances run in his browser. Sometimes ten.
"My job now isn't to go super deep on one task," Cherny explained. "It's to do a bunch of tasks in parallel."
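The pattern is easy to sketch. Below is a minimal Python approximation of the fan-out-and-notify loop: launch several agent processes in parallel, then surface a notification as each one finishes. The commands here are placeholder `echo` calls standing in for real sessions; Cherny's actual setup runs the interactive `claude` CLI with system notifications, which this sketch does not reproduce.

```python
import subprocess
import concurrent.futures

def run_agent(task_id: int, cmd: list[str]) -> tuple[int, int]:
    """Run one agent process to completion; return (task_id, exit code)."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return task_id, result.returncode

def notify(task_id: int, returncode: int) -> None:
    # Stand-in for a system notification (e.g. osascript on macOS,
    # notify-send on Linux); here we just print to the terminal.
    status = "done" if returncode == 0 else f"failed ({returncode})"
    print(f"agent {task_id}: {status}")

# Hypothetical placeholder tasks; a real fleet would launch coding agents.
tasks = {i: ["echo", f"task {i}"] for i in range(1, 6)}

results: dict[int, int] = {}
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    futures = [pool.submit(run_agent, i, cmd) for i, cmd in tasks.items()]
    for fut in concurrent.futures.as_completed(futures):
        task_id, code = fut.result()
        results[task_id] = code
        notify(task_id, code)
```

The point of the structure is the one Cherny describes: the human sits in the completion loop, not the execution loop, and only intervenes when an agent signals it needs input.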
He exclusively uses Opus 4.5, Anthropic's largest and slowest model. The counterintuitive choice reflects a lesson Cherny learned at Meta: the bottleneck isn't speed, it's correction. A smarter model requires less steering. Cherny argues you end up going faster with the big model because you spend less time fixing its mistakes.
His team maintains a single CLAUDE.md file in their Git repository. Whenever Claude makes a mistake, someone adds a note. The file grows smarter over time, a shared memory that prevents the same error twice. The principle: no one should have to point out the same issue more than once. Every mistake becomes a rule. The longer the team works together, the smarter their fleet becomes.
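What such a file contains varies by team; the entries below are hypothetical examples of the kind of rules that accumulate, not Anthropic's actual file:

```markdown
# CLAUDE.md — shared project memory (hypothetical example)

## Conventions
- Use the repo's logger, never `console.log`, in server code.
- Every new API handler needs an integration test before merge.

## Past mistakes (do not repeat)
- Do not regenerate the lockfile when adding a dev dependency.
- The `users` table is soft-delete only; never issue hard DELETEs.
```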
Developers who implemented Cherny's setup described the experience as "more like Starcraft than traditional coding." The comparison stuck because it's accurate. You're not typing syntax anymore. You're commanding units, managing cooldowns, watching a minimap of notifications. The skills that matter shift from keystroke fluency to strategic allocation.
Here's the part that makes engineers uncomfortable: Cherny's workflow works.
Some Anthropic engineers have stopped writing code entirely. CEO Dario Amodei said at Davos that the company might be six to 12 months away from models doing most of what software engineers do end-to-end. Entry-level software engineering roles have declined as AI-generated code has ramped up.
The correlation looks causal because it probably is. The question isn't whether AI coding changes the profession. The question is what replaces the old skills.
Cherny's answer: judgment, not typing. He pushes back on the idea that AI coding produces slop. "If the code sucks, we're not gonna merge it," he said. "It's the same exact bar" as human-written code. The trick is learning to wield the tool, knowing when to vibe-code a throwaway prototype versus pairing carefully on production logic.
Cherny describes his work now as tending to his fleet of Claudes, jumping between tabs, answering questions, unblocking agents. The fleet commander doesn't fire weapons directly. The fleet commander decides where the fleet goes.
Cherny laughed when he described it. "It's crazy. If you'd asked me six months ago if this is how I would code, I would have said no."
Google knows what this means. Its engineers using Claude Code isn't just embarrassing for a company that bet its future on AI. It's an admission that the best tool for the job came from elsewhere, that Anthropic found product-market fit while Google was still debating infrastructure. The discomfort in Mountain View is palpable, even if no one will say it publicly.
Cherny declines to predict where any of this goes. He plans in one-week timelines because the exponential curve defeats longer forecasts. A year ago, AI coding meant autocomplete. Now he writes production code from his phone before breakfast.
"The model is advancing exponentially and just like my puny human meat brain can't grapple with the exponential," he said. "We think in linears."
Last week, Anthropic launched Cowork, a version of Claude Code for non-programmers. Cherny's team built it in roughly a week and a half, using Claude Code to do the work. The speed itself makes the argument: tools that can build their own successors compound in ways that linear engineering cannot match.
What Cherny does know: the tools exist to multiply human output by a factor of five. The programmers who make the mental leap first won't just be more productive. They'll be playing a different game. The ones who insist on typing every character themselves will find themselves competing against small teams that operate like large ones.
Call it good, call it bad. The debate matters less than the fact that it's already happening. The motorcycle accident that pushed Cherny toward functional programming and TypeScript, that taught him there's always a way to do more with less, shaped a career that now shapes how millions of developers work. The injury was an accident. What came after was not.
Q: What is Claude Code?
A: Claude Code is Anthropic's terminal-based coding agent that can write, edit, and test code autonomously. Unlike autocomplete tools, it operates as an agent that can execute multi-step tasks, access files, and iterate on its own output.
Q: How does Cherny's parallel workflow actually work?
A: Cherny runs 5-10 Claude instances simultaneously across terminal tabs and browser windows. He uses system notifications to monitor when each instance needs input, treating the workflow like managing units in a real-time strategy game rather than sequential coding.
Q: What is CLAUDE.md and why does it matter?
A: CLAUDE.md is a shared file in the team's Git repository where they document every mistake Claude makes. This creates institutional memory so the AI doesn't repeat errors. The file grows smarter over time as the team adds rules.
Q: Why does Cherny use the slower Opus 4.5 model instead of faster options?
A: Cherny argues the bottleneck is correction, not speed. Opus 4.5 requires less steering and produces fewer errors, so despite being slower per token, you spend less time fixing mistakes. Net result: faster overall completion.
Q: What is Cowork and how does it differ from Claude Code?
A: Cowork is Anthropic's version of Claude Code for non-programmers, launched in January 2026. It provides a graphical interface instead of terminal access, letting users automate file management, email, spreadsheets, and browser tasks without coding knowledge.