Defense Secretary Pete Hegseth has given Anthropic chief executive Dario Amodei until Friday evening to remove key safety restrictions from the company's Claude artificial-intelligence model for U.S. military use, threatening to cancel a $200 million Pentagon contract and pursue a "supply chain risk" designation if the company refuses.
The ultimatum was delivered during a Feb. 25 meeting at the Pentagon, according to people familiar with the discussion who spoke to Axios. Hegseth told Amodei the Defense Department would invoke the Defense Production Act if Anthropic failed to comply by 5:01 p.m. Friday, compelling cooperation under federal authority.
The Defense Production Act, enacted in 1950, allows the federal government to require private firms to prioritize and accept contracts deemed vital to national security. A Pentagon official told CNN that if Anthropic does not agree, Hegseth will ensure the act is invoked, forcing the company to allow military use of Claude "whether they want to or not."
At issue are two guardrails Anthropic has maintained during months of negotiations. The company has refused to permit:
- Use of Claude for mass surveillance of American citizens
- Final targeting decisions in weapons systems without human oversight
Anthropic maintains that those constraints are integral to its safety framework.
Pentagon officials dispute that characterization. A senior defense official told CNN that the disagreement is not specifically about autonomous weapons or domestic surveillance, adding that "legality is the Pentagon's responsibility as the end user." The department, officials say, seeks access to Claude for "all lawful purposes," without the company retaining the authority to veto particular applications.
Hegseth reportedly drew an analogy to defense manufacturing. Boeing does not dictate how aircraft it sells are deployed, he argued, according to CBS News, suggesting AI vendors should operate under similar principles once systems are delivered.
The Pentagon is also weighing labeling Anthropic a "supply chain risk," NPR reported. Such a designation would require defense contractors to certify that Claude is absent from their systems. The label is typically applied to foreign adversaries. Anthropic has said eight of the ten largest U.S. companies use its technology.
Claude currently holds a unique position inside the Defense Department. It is the only AI model operating within classified Pentagon networks. According to NPR, the model was used during the January operation that resulted in the capture of former Venezuelan president Nicolás Maduro through a partnership with defense contractor Palantir. Anthropic had not specifically authorized that use.
The Pentagon last summer awarded contracts worth up to $200 million each to Anthropic, OpenAI, Google and Elon Musk's xAI. While Anthropic's contract represents a fraction of its roughly $14 billion in annual revenue, access to classified systems enhances its stature in government and commercial markets.
Anthropic said after the meeting that it had engaged in "good-faith conversations" and remains "committed to using frontier AI in support of US national security," according to Business Insider. The company did not signal any shift in its policies regarding autonomous weapons or domestic surveillance.