Anthropic rejects Pentagon's request to loosen AI safeguards
Company refuses to remove safeguards from its AI model over concerns about surveillance, autonomous weapons
ISTANBUL
Artificial intelligence company Anthropic said Thursday it will not comply with a US Defense Department request to relax safeguards on its AI systems, citing concerns over mass surveillance and autonomous weapons.
In a statement, CEO Dario Amodei said the company opposes allowing its AI model, Claude, to be used for “mass domestic surveillance” or “fully autonomous weapons.” He said advanced AI systems are not reliable enough to operate such weapons without human oversight and require safeguards that “don’t exist today.”
He also said AI can support national security but warned that large-scale, AI-driven surveillance could pose risks to civil liberties.
Anthropic and the Pentagon have been negotiating for weeks. The Trump administration has threatened to invoke the Defense Production Act, which allows the government to compel companies to prioritize national defense needs, and has considered labeling Anthropic a “supply chain risk.” Such a designation would prevent Defense Department contractors from using its software.
Axios reported that the Pentagon has begun steps toward that designation and asked Boeing and Lockheed Martin to detail their reliance on Claude.
Pentagon spokesperson Sean Parnell denied that the department intends to use AI for unlawful surveillance or for fully autonomous weapons without human involvement. In a post on US social media platform X, he said the Pentagon seeks to use Anthropic's model for "all lawful purposes" and would not allow a private company to dictate operational decisions.