China’s AI stack goes domestic: fabs, funding, and a DeepSeek-led playbook
China plans to triple AI chip output by 2026 through new fabs and coordinated funding, targeting replacement of Nvidia's "China-compliant" processors. DeepSeek's FP8 format becomes the organizing principle for domestic hardware alignment.
🏭 China prepares to triple AI chip output by 2026 with three new Huawei-focused fabs plus SMIC doubling 7nm capacity.
📈 Cambricon swung to a record 1.03 billion yuan profit on a 44-fold revenue jump; its market value doubled to about $80 billion this month.
🎯 DeepSeek's FP8 format becomes the standard, favoring efficiency over precision to work around export restrictions.
💰 Four smaller chip designers raised $3 billion in pre-IPO rounds, preparing for public offerings by year-end.
🌐 Goal isn't to beat Nvidia's flagship chips but replace "China-compliant" models for most domestic AI training.
🚀 Synchronized moves across fabs, memory, and cloud signal shift from tech importer to potential AI infrastructure exporter.
China is moving to curb its Nvidia habit with a 2026 plan to triple AI-chip output. The target is scale first, parity later. That shift now touches fabs, memory, cloud, and model standards.
What’s new
People familiar with the effort say three fabrication lines are being readied to build Huawei-designed accelerators. One line could start producing by year-end, with two more slated for next year. The ownership picture is murky, and Huawei says it has no plans to run its own fabs. The chips will still be Huawei-centric.
SMIC, China’s top foundry, is also preparing to double its 7-nanometer capacity next year, with Huawei as the anchor customer. If both tracks hit schedule, the Huawei-focused lines’ combined output could rival what SMIC’s comparable lines produce today. The aim is volume. Fast.
The DeepSeek catalyst
DeepSeek has nudged the ecosystem toward an FP8 numerical format to squeeze more work from “good-enough” silicon. That choice favors efficiency over precision while keeping model quality within bounds through training recipes and software. It’s a practical hack around export limits.
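To make the trade concrete, here is a minimal Python sketch of an FP8 round-trip using PyTorch’s `torch.float8_e4m3fn` dtype (available in recent releases, roughly 2.1 onward). It illustrates the precision FP8 gives up, not DeepSeek’s actual training pipeline.

```python
import torch

# Minimal FP8 (e4m3) round-trip: cast weights down to one byte per
# value, then back up, and measure the precision that was given up.
# Illustrative only -- not DeepSeek's training recipe.
w = torch.randn(4, 4, dtype=torch.float32)

w_fp8 = w.to(torch.float8_e4m3fn)   # 8 bits: 4 exponent, 3 mantissa
w_back = w_fp8.to(torch.float32)    # dequantize to compare

err = (w - w_back).abs()
print(f"max abs error:  {err.max().item():.4f}")
print(f"mean abs error: {err.mean().item():.4f}")
# e4m3 keeps roughly two significant decimal digits; training recipes
# (loss scaling, higher-precision accumulators) absorb the rest.
```

The point of the format is the top line of that trade: one byte per value means half the memory traffic of FP16 and a quarter of FP32, which is exactly where “good-enough” silicon needs the help.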
Chip designers are aligning accordingly. Huawei’s 910-class parts and Cambricon’s newer accelerators are being tuned for that FP8 regime, while smaller designers such as Biren and MetaX vie for SMIC slots. The result is a narrower target that simplifies compilers, kernels, and libraries. Less fragmentation means quicker iteration. It also means fewer excuses.
Follow the money
Capital is chasing the switch. Cambricon swung to a record first-half profit of 1.03 billion yuan on a roughly 44-fold jump in revenue to 2.9 billion yuan. Its market value doubled this month to about $80 billion, briefly topping long-time onshore favorites in price per share. Investors are pricing in import substitution at scale. Sentiment feeds capacity.
Policy air cover helps. The State Council again urged AI adoption across sectors, from government services to autos and robotics. Four smaller chip designers, including Biren and MetaX, are pushing toward IPOs after raising about $3 billion in pre-IPO rounds, while Cambricon secured approval for roughly $600 million more. The funding stack is lining up with the hardware stack. That matters.
Cloud and memory move in tandem
Huawei has reorganized its cloud unit to put AI front and center, consolidating compute, storage, database, and security teams around training and inference workloads. The goal is to ship services that match local silicon and software constraints. One playbook, many products.
Memory is the other pillar. CXMT is testing HBM3-class parts with a target launch next year, one generation behind the cutting edge now feeding Nvidia’s flagship systems. If CXMT hits volume, local accelerators get the bandwidth they need without relying on Korea- or US-linked suppliers. It won’t close the gap entirely. It does narrow the bottleneck.
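A back-of-envelope calculation shows why memory is a pillar at all: decoding one token of a dense transformer streams roughly every weight through the chip once, so bandwidth, not raw compute, often sets the ceiling. The numbers below are illustrative assumptions, not CXMT or Huawei specifications.

```python
# Rough, bandwidth-bound token rate for a dense model at inference.
# All figures are illustrative assumptions, not vendor specs.
params = 70e9              # 70B-parameter model (assumed)
bytes_per_param = 1        # FP8 weights: one byte each
hbm_bw = 3.2e12            # ~3.2 TB/s, roughly HBM3-class aggregate

weight_bytes = params * bytes_per_param
tokens_per_sec = hbm_bw / weight_bytes
print(f"~{tokens_per_sec:.0f} tokens/s per accelerator, bandwidth-bound")
```

Under those assumptions the ceiling lands near 46 tokens per second per accelerator. Double the bandwidth or halve the bytes per weight and the ceiling moves with it, which is why HBM supply and FP8 adoption are two halves of the same plan.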
What this means for Nvidia
The competition in China isn’t with H200 or Blackwell-class parts. It’s with the “China-compliant” chips Nvidia and AMD are still permitted to sell, plus older stock. Beijing’s goal is to replace those for most domestic training and inference while accepting that Nvidia keeps the crown for frontier runs. Substitution beats starvation.
If SMIC doubles 7-nanometer output and the Huawei-tuned lines ramp, buyers inside China will spend less time hunting for allotments and more time standardizing stacks. That unlocks model deployment at ministries, internet platforms, and startups. Nvidia’s halo at the very high end remains. The overall denominator grows.
Risks and unknowns
China still trails on leading-edge logic, advanced packaging, and top-tier HBM. Tool provenance and fab ownership around the Huawei-tuned lines lack transparency. Many labs continue to rely on Nvidia clusters for headline training cycles. Execution will decide the slope: yields, firmware stability, kernel maturity, and library depth. Any wobble slows the flywheel.
There’s also policy risk. If US rules shift again—loosening mid-range chip sales or tightening tool exports—the economics can change quickly. These are moving targets. Plan accordingly.
Why this matters:
A unified FP8-first stack could expand China’s AI capacity quickly without matching Nvidia at the frontier.
Synchronized moves in fabs, memory, cloud, and funding indicate a state-backed push from importer to potential exporter of AI infrastructure.
❓ Frequently Asked Questions
Q: What exactly is FP8 format and why does DeepSeek prefer it?
A: FP8 uses 8-bit floating point numbers instead of the standard 16-bit or 32-bit formats, requiring less memory and computation. This trades precision for efficiency—models run faster on less powerful hardware. DeepSeek's software compensates for lower precision through optimized training techniques, making "good-enough" Chinese chips competitive.
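A rough capacity calculation makes the answer above concrete: each halving of bytes per weight doubles the model that fits in the same memory. The 70B-parameter figure below is an illustrative assumption, not a specific deployment.

```python
# Weight footprint by numeric format; 70B params is an assumed example.
def model_gib(params_billion: float, bytes_per_param: int) -> float:
    return params_billion * 1e9 * bytes_per_param / 2**30

for fmt, nbytes in [("FP32", 4), ("FP16/BF16", 2), ("FP8", 1)]:
    print(f"{fmt:>9}: {model_gib(70, nbytes):6.1f} GiB of weights")
```

That prints roughly 261 GiB at FP32, 130 GiB at FP16, and 65 GiB at FP8, which is the difference between needing a rack and needing a handful of accelerators.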
Q: How do China's current AI chips actually compare to Nvidia's performance?
A: Chinese chips like Huawei's 910D are roughly 2-3 generations behind Nvidia's flagship H100/H200 series in raw performance. However, they're designed to compete with Nvidia's "China-compliant" H20 chips—deliberately downgraded versions the US allows for export. The gap is narrower there, especially when optimized for FP8 workloads.
Q: Who actually owns these mysterious new Huawei-focused fabs?
A: The ownership structure remains deliberately opaque. Huawei denies owning the fabs directly, likely to avoid additional US sanctions targeting its semiconductor operations. The facilities are probably owned by state-linked entities or partners that can legally produce Huawei-designed chips while maintaining plausible separation from the company itself.
Q: What percentage of China's AI training currently uses domestic versus foreign chips?
A: Most major Chinese AI training still relies on Nvidia hardware, including clusters used by DeepSeek itself. Domestic chips handle smaller-scale deployment and inference tasks, but haven't reached the volume or performance needed for frontier model training. The capacity expansion aims to change this ratio significantly by 2026.
Q: Could China eventually export AI infrastructure to compete globally?
A: That's the long-term goal. By building complete stacks—chips, memory, networking, software—optimized for efficiency over raw performance, China could offer cost-effective AI infrastructure to countries that can't access or afford Nvidia's premium systems. Success depends on execution: yields, software maturity, and real-world performance at scale.