OpenAI, Anthropic, Google Share Attack Data to Counter Chinese AI Distillation
OpenAI, Anthropic, and Google are sharing attack data through the Frontier Model Forum to counter Chinese AI distillation. The effort targets labs that have extracted 16 million fraudulent exchanges from US models. But with Chinese models 14 times cheaper, the economics may outpace any defense.
OpenAI, Anthropic, and Google have begun sharing data on adversarial distillation through the Frontier Model Forum, Bloomberg reported Monday. The three companies founded the nonprofit with Microsoft in 2023. Chinese AI labs have fired millions of fraudulent queries at US frontier models and trained rival systems on the responses, systems that cost orders of magnitude less to operate. The practice bleeds Silicon Valley of billions a year, US officials have estimated.