NEW DELHI — U.S.-based artificial intelligence company Anthropic has accused three Chinese AI startups of improperly extracting capabilities from its Claude model, raising concerns about national security and the misuse of advanced AI systems.
According to a report cited by CNN Business, Anthropic alleged that firms including DeepSeek, MiniMax, and Moonshot AI used a technique known as “distillation” — training their own models on another model’s outputs — to replicate Claude’s capabilities.
The company claimed the process involved the creation of roughly 24,000 fraudulent accounts, which were used to generate more than 16 million interactions with its AI system to train competing models.
Anthropic warned that systems developed through such methods may lack critical safety safeguards built into its own models, increasing the risk of misuse in areas such as cyberattacks or even biological threats.
“These models could enable authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance,” the company said, adding that “the window to act is narrow.”
CNN reported that it had reached out to the three Chinese firms for comment.
The allegations come amid heightened scrutiny of China’s rapidly advancing AI sector, with companies like DeepSeek gaining prominence and prompting debate over the effectiveness of U.S. export controls on advanced technology.
Anthropic argued that the alleged attempts to replicate its model actually demonstrate the importance of those controls, noting that cutting-edge AI development still depends heavily on access to advanced semiconductor chips.
Similar concerns have been raised previously by OpenAI, which accused DeepSeek of benefiting from technologies developed by leading U.S. AI labs.
Anthropic also recently received a “Supply Chain Risk” designation from the U.S. government, though the company clarified that the classification applies only to the use of its Claude models in Department of War contracts and not to broader commercial use.
Anthropic’s chief executive has also apologized for earlier criticism of President Donald Trump.
Anthropic did not provide further details on potential legal action but emphasized the urgency of addressing what it described as growing risks tied to the global development of artificial intelligence. (Source: IANS)