OpenAI, Anthropic, and Google Unite to Combat Unauthorized Chinese AI Model Copying
Three major US AI companies collaborate through the Frontier Model Forum to combat Chinese 'adversarial distillation,' sharing intelligence to prevent billions in annual losses.
OpenAI, Anthropic, and Google have started working together to combat the unauthorized copying of their AI models by Chinese competitors. According to Bloomberg, the three companies are sharing information through the 'Frontier Model Forum,' founded in 2023, to detect so-called adversarial distillation, in which outputs of an existing AI model are used to train a cheaper copycat model.

US authorities estimate that adversarial distillation costs American AI labs billions of dollars in lost revenue each year. OpenAI had already warned Congress in February that DeepSeek was using increasingly sophisticated methods to extract data from US models. Anthropic identified DeepSeek, Moonshot, and Minimax as actors involved in the practice.

The collaboration mirrors how the cybersecurity industry operates, where companies routinely share attack data with each other.
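To make the concept concrete, here is a toy sketch of distillation in general: a "student" is fitted only to a "teacher's" input/output behavior, with no access to the teacher's weights or training data. This is a deliberately simplified illustration (a linear teacher, fitted by least squares), not a description of how DeepSeek or any named lab actually operates; real model distillation trains a neural network on soft labels with gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher: an opaque black box (here, a fixed linear map standing in
# for an API-served model whose internals the copier cannot see).
W_teacher = rng.normal(size=(4, 3))

def teacher(x):
    # Only inputs and outputs are observable from the outside.
    return x @ W_teacher

# Step 1: the copier sends many queries to the teacher and records answers.
queries = rng.normal(size=(1000, 4))
answers = teacher(queries)

# Step 2: the copier fits a student to reproduce those answers.
# (Least squares here; real distillation uses gradient descent on a
# neural network with a soft-label loss.)
W_student, *_ = np.linalg.lstsq(queries, answers, rcond=None)

# The student now imitates the teacher on inputs it has never queried.
test = rng.normal(size=(10, 4))
print(np.allclose(teacher(test), test @ W_student))  # True
```

The point of the sketch is why this is hard to prevent contractually: the copier never touches the teacher's weights, only its public query interface, which is exactly the behavior the Frontier Model Forum members are reportedly sharing detection signals about.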