Musk Attacks Anthropic as Pentagon Pressures AI Firm Over Military Use of Claude

MUMBAI – Tensions between the U.S. Defense Department and artificial intelligence company Anthropic intensified this week, as Defense Secretary Pete Hegseth reportedly summoned Anthropic Chief Executive Officer Dario Amodei to the Pentagon for what officials described as a tense, high-stakes meeting over the military's use of the company's Claude AI model.

According to a report citing a senior defense official, the meeting was "not a friendly" one, with Pentagon leaders pressing Anthropic to loosen restrictions on its technology. The company has so far refused to fully remove Claude's safeguards, including limits on mass surveillance of Americans and on the development of fully autonomous weapons.

Claude is currently the only AI system deployed inside classified U.S. defense networks under a $200 million pilot contract signed last year. In a January 9 memo, Hegseth asked AI companies working with the Pentagon to renegotiate contract terms and reduce restrictions on their models. Anthropic, however, has held firm, the report said.

Defense officials warned that Anthropic could be designated a "supply chain risk," a label that could void existing contracts and bar other Pentagon partners from using Claude. An Anthropic spokesperson described the discussions with the Defense Department as "productive," but officials acknowledged that replacing the company would be difficult because its systems are deeply integrated into defense infrastructure.

Pressure on Anthropic has increased as the Pentagon signs agreements with rival AI firms, including Elon Musk’s xAI, and moves closer to a deal with Google for its Gemini model, according to separate reports.

The dispute has also spilled into public view. Ahead of the Pentagon meeting, Anthropic alleged that three Chinese AI firms had used chatbots to siphon millions of Claude outputs to train their own models. Musk responded sharply on X, accusing Anthropic itself of large-scale data theft.

“Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion-dollar settlements for their theft,” Musk wrote, mocking the company by calling it “MisAnthropic.”

Claude is marketed as a next-generation AI assistant designed to be safe, accurate, and secure. The system is positioned to automate enterprise tasks such as legal document review, compliance checks, sales planning, marketing analysis, financial reconciliation, data visualization, SQL-based reporting, and organization-wide document search.

The clash underscores growing friction between AI developers and governments as artificial intelligence becomes increasingly embedded in national security systems, raising unresolved questions about control, safeguards, and accountability. (Source: IANS)