Bosworth calls Meta's first Superintelligence Labs models "very good." The hedged language reveals where Meta stands in the AI race after Llama 4 criticism.
Andrew Bosworth took questions in Davos on Wednesday and said something strange. Meta's Superintelligence Labs had shipped its first internal models. Six months of work from the research unit Zuckerberg built after blowing up his AI leadership. Bosworth told Reuters they were "very good."
Not great. Not groundbreaking. Very good.
That phrasing tells you something about where Meta sits in the AI race right now. The company that once moved fast and broke things is moving carefully and hedging its claims. Llama 4, the company's most recent public model, took heat from researchers and developers who found it underwhelming compared to offerings from Google and OpenAI. Zuckerberg responded by hiring Alexandr Wang away from Scale AI, building out an entirely new lab, and writing checks large enough to poach top researchers from competitors.
Bosworth has been at Meta for two decades. He knows what confidence sounds like. This was not it.
Key Takeaways
• Meta Superintelligence Labs shipped first internal models after six months of work, with Bosworth calling them "very good"
• Bosworth compared AI infrastructure spending to the 19th-century railroad boom, placing a 30-year bet on returns
• Meta executives pitched wearables as data collection infrastructure that could feed next-generation AI models
• Yann LeCun's departure signals internal disagreement over whether LLMs can achieve superintelligence
Meta has not confirmed which models landed on internal servers this month. December reports pointed to two projects in development: a text-based system codenamed Avocado, reportedly targeting a first-quarter release, and an image-and-video model called Mango. Bosworth declined to say whether either of those reached the finish line.
What he did say matters more for understanding the company's timeline. "There's a tremendous amount of work to do post-training," he told reporters, "to actually deliver the model in a way that's usable internally and by consumers."
Post-training work means safety testing, fine-tuning, and the kind of evaluation that determines whether a model hallucinates less often than it tells the truth. You can ace benchmarks and still fail spectacularly when actual users start typing actual questions. Meta learned this with Llama 4, and the experience left the company visibly chastened. Bosworth's careful language in Davos carried the defensiveness of an executive who shipped something embarrassing and got called on it.
At Axios House Davos the day before, Bosworth offered a different kind of preview. Asked about the billions flowing into AI infrastructure, he reached for 19th-century history. "We don't regret railroads, telecom fiber... all the build-ups of this kind that we've done in history, we have ended up feeling great about. And in the process of doing that, yeah, a lot of companies went overboard."
Translation: some AI companies will go bankrupt. Meta does not plan to be one of them.
Bosworth made the math explicit. "My personal opinion is it is absolutely, in a 30-year view, going to be worth the investment we're making right now. All the companies can't survive 30 years of variance."
Meta can. The company sits on cash reserves that dwarf those of most competitors. Zuckerberg controls the voting stock, which means he can sustain losses that would trigger board revolts at other firms. The railroad analogy works in another way too: the companies that survived the bust of the 1870s owned the tracks everyone else had to use.
Here is what you need to understand about Meta's AI strategy: the company may not need the best models if it has the best data. LLMs train on text scraped from the web. That source is tapped out. What comes next? Real-world context, the kind you get from cameras and microphones strapped to people walking through their lives. Meta already sells those cameras. Already sells those microphones.
Across the Davos promenade, at Meta's own pavilion, Derya Matras made this point explicit. "They see what you see, they hear what you hear and that start to make meaning of the environment that you're in," the VP for Europe, Middle East, and Africa told The National, pointing to Ray-Ban smart glasses. "That's how we get closer to superintelligence."
Three billion people use Meta apps daily, according to Nicola Mendelsohn, the company's head of global business. Some fraction of them wear Ray-Bans. Every glance, every overheard conversation, every environment those glasses record becomes potential training data. Meta does not need to build the best model from scratch. It needs to build a model that gets better every time someone puts on a pair of sunglasses.
Not everyone at Meta believed this path would work. Yann LeCun, the company's former chief scientist and a pioneer in neural network research, called large language models a "dead end" for achieving superintelligence. LeCun argued that systems trained primarily on text lack the embodied understanding required for true reasoning.
His departure from Meta, according to media reports, followed Zuckerberg's decision to hire Wang and double down on exactly the approach LeCun questioned. When asked about LeCun's criticism in Davos, Matras offered a diplomatic non-answer. "Yann has a particular view and he is a brilliant scientist."
She then pivoted to the company line. "We have perhaps the most talented team in the industry and they're heads down focused on building our new models."
LeCun spent decades at Meta building that team. His exit suggests he saw something the Davos presentations glossed over: a company so anxious to catch up that it abandoned the researcher who helped it get this far. If you are betting on Meta's superintelligence play, you are betting against the judgment of the person who understood the technical foundations better than anyone else in the building.
Bosworth's 30-year timeline deserves attention. Most AI companies talk in quarters, maybe years. Meta's CTO is placing a three-decade bet.
The framing gives the company room to absorb short-term failures. A disappointing Llama 5 release next year does not invalidate a 30-year thesis. Neither does a competitor pulling ahead for a few years. The timeline also signals something about internal expectations. If Meta thought superintelligence was around the corner, Bosworth would not be talking about surviving "variance" for decades.
What he would be talking about is what Matras described at the pavilion: AI tools that know your goals, interests, and dreams. The company that built its fortune on knowing which ads to show you wants to build systems that understand you well enough to anticipate what you need before you ask.
First models delivered internally. Very good. Thirty years of variance ahead.
Meta's bet is that when the tracks are finally laid, it will own the railroad.
Q: What is Meta Superintelligence Labs?
A: A research unit Meta created in 2025 after reorganizing its AI leadership. The lab combines researchers and engineers under one roof, led by former Scale AI CEO Alexandr Wang as chief AI officer. It handles AI research, Llama model development, and commercialization.
Q: What are the Avocado and Mango models?
A: Codenames for two AI systems Meta reportedly has in development. Avocado is a text-based model reportedly targeting Q1 2026 release. Mango focuses on image and video generation. Bosworth did not confirm which models were delivered internally.
Q: Why did Yann LeCun leave Meta?
A: LeCun, Meta's former chief scientist, called large language models a "dead end" for superintelligence. Media reports link his departure to Zuckerberg's decision to hire Alexandr Wang and double down on LLM-based approaches that LeCun criticized.
Q: How do wearables fit into Meta's AI strategy?
A: Meta executives argue that cameras and microphones on Ray-Ban smart glasses capture real-world context that text-trained AI models lack. Three billion people use Meta apps daily, making wearable data a potential competitive advantage for training future models.
Q: What did Bosworth mean by comparing AI to railroads?
A: Bosworth argued that infrastructure booms benefit society even when individual companies fail. He said Meta's cash reserves and Zuckerberg's voting control let the company sustain 30 years of "variance" that competitors cannot survive.