Anthropic Faces Large-Scale Distillation Attacks from Chinese AI Firms
Anthropic disclosed on February 23, 2026, that three Chinese artificial intelligence companies (DeepSeek, Moonshot AI, and MiniMax) conducted large-scale distillation attacks against its Claude models, issuing over 16 million queries through nearly 24,000 fraudulent accounts in an effort to harvest Claude's outputs for training competing systems.
The companies allegedly used sophisticated tactics, including proxy networks, multiple account types, and traffic blended with ordinary internet activity, to evade security measures, violating both Anthropic's terms of service and its ban on access from China. Anthropic uncovered the attacks by correlating IP addresses and analyzing request metadata, linking each campaign to a specific lab.
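IP correlation of this kind typically looks for infrastructure shared across supposedly independent accounts. The sketch below is purely illustrative: the log format, subnet granularity, and threshold are assumptions for demonstration, not Anthropic's actual detection pipeline.

```python
from collections import defaultdict

def flag_shared_infrastructure(request_log, min_accounts=5):
    """Group requests by /24 subnet and flag subnets shared by many accounts.

    request_log: iterable of (account_id, ip_address) pairs (hypothetical format).
    """
    accounts_by_subnet = defaultdict(set)
    for account_id, ip in request_log:
        subnet = ".".join(ip.split(".")[:3])  # crude /24 prefix
        accounts_by_subnet[subnet].add(account_id)
    # A subnet reused across many "independent" accounts suggests a proxy network
    return {s: accts for s, accts in accounts_by_subnet.items()
            if len(accts) >= min_accounts}

# Synthetic example: 20 accounts funneled through one documentation-range subnet
log = [("acct-%d" % i, "203.0.113.%d" % (i % 10)) for i in range(20)]
print(flag_shared_infrastructure(log))
```

In practice such a signal would be combined with request metadata (user agents, timing, prompt structure) before attributing a campaign to any party.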
The Scale of the Attacks
Anthropic reported that Moonshot AI was a major contributor to the campaign, running over 3.4 million queries to harvest Claude's advanced capabilities in reasoning, coding, and data analysis. These capabilities are critical inputs for training competitive AI models, underscoring the aggressive tactics rivals are adopting.
The methods employed heighten concerns about intellectual property risk across the AI sector. As nations compete for dominance in AI, distillation attacks that siphon proprietary model behavior are becoming more prevalent, particularly among firms in jurisdictions such as China, where access to Western models faces regulatory and export barriers.
Anthropic's disclosure also raises national security concerns: if distilled models are released as open-source systems, the company warned, they could be misused for bioweapons development, disinformation campaigns, or authoritarian surveillance.
Industry Response and Countermeasures
To combat such breaches, Anthropic has deployed stronger countermeasures, including behavioral fingerprinting classifiers that detect unusual patterns in user activity, enhanced anomaly detection systems, and tighter account verification protocols. The company has also called for a coordinated industry response involving cloud service providers.
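Behavioral fingerprinting of this sort scores accounts against usage features that distinguish automated harvesting from normal use. The heuristic below is a minimal sketch under assumed feature names and thresholds; it is not Anthropic's production classifier.

```python
def looks_like_distillation(queries_per_hour, distinct_topics, avg_prompt_len):
    """Score an account against crude behavioral heuristics (all thresholds assumed)."""
    score = 0
    if queries_per_hour > 500:   # sustained machine-speed querying
        score += 1
    if distinct_topics > 50:     # breadth typical of capability harvesting
        score += 1
    if avg_prompt_len > 2000:    # long, template-like extraction prompts
        score += 1
    return score >= 2            # flag when multiple signals co-occur

# A synthetic high-volume, broad-coverage profile trips the heuristic;
# a typical interactive user does not.
print(looks_like_distillation(800, 120, 3000))  # -> True
print(looks_like_distillation(10, 3, 500))      # -> False
```

Real systems would learn such thresholds from labeled traffic rather than hard-coding them, and would weigh many more signals before restricting an account.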
Anthropic's experience comes in the wake of similar incidents at other tech giants, notably Google's recent disruption of attacks targeting its Gemini models. As awareness of AI-specific vulnerabilities grows, technology companies face pressure to innovate proactively to safeguard their intellectual property in a fiercely competitive environment.
Amid ongoing debates over export controls on the advanced chips crucial to AI development, the US government faces pressure to address these emerging threats as well. The episode underlines a growing imperative for governments and businesses alike to rethink how they safeguard AI advances on a global scale.
As global tensions surrounding technology competition mount, the repercussions of these attacks could catalyze broader discourse on international standards for cybersecurity and ethical AI development.