The U.S. Department of Defense has warned artificial intelligence developer Anthropic that it could invoke the Defense Production Act or bar the company from future federal contracts after the firm refused to provide broader military access to its chatbot Claude, escalating a high-stakes confrontation between Silicon Valley and the Trump administration.
The dispute centers on Pentagon demands for expanded operational flexibility in how Claude can be deployed within defense systems. Anthropic has declined to modify what it describes as core safety guardrails embedded in the model, arguing that certain potential uses fall outside its ethical commitments and contractual scope.
The standoff intensified following a Feb. 24 meeting between Defense Secretary Pete Hegseth and Anthropic Chief Executive Dario Amodei. According to Axios, Hegseth imposed a deadline and indicated the administration could label Anthropic a supply chain risk if it fails to cooperate, a designation that could allow continued access to its technology under emergency authority.
The Defense Production Act, a Cold War-era statute, grants the federal government broad powers to direct private companies in the interest of national security. Though historically used during wartime or public health emergencies, its application to artificial intelligence governance would mark a significant expansion of executive authority in technology policy.
Anthropic has drawn a firm line. The company told officials it could not "in good conscience accede" to conditions that would weaken safeguards in Claude. Amodei has stated that the threats "do not change" the company's position.
According to company executives, the dispute stems from two categories of use the firm considers unacceptable:
- Mass domestic surveillance
- Fully autonomous weapons systems
Anthropic maintains that such applications were not contemplated in its agreements with the department, which the company's statement referred to as the U.S. Department of War, and that its safeguards are integral to the system's design rather than optional settings.
The Pentagon has framed the matter in terms of military readiness. In remarks cited by the Associated Press, Hegseth warned that failure to align with defense requirements could carry consequences beyond a single contract. Officials have also raised the possibility of blacklisting Anthropic from future federal business.
For Anthropic, the stakes are substantial. Losing Defense Department work could affect revenue and long-term positioning in a rapidly consolidating AI market. Yet loosening protections could jeopardize its reputation among employees and customers who regard the firm as a standard-bearer for responsible AI development.
The clash reflects a broader friction between Washington and technology firms as artificial intelligence becomes embedded in national security strategy. Some defense officials argue that advanced AI systems are too strategically important to leave to corporate discretion. Critics counter that compelling companies to dilute safety standards could generate systemic risks and undermine public trust.
Anthropic is reportedly prepared to challenge any supply chain risk designation in court, potentially setting up a landmark case over how far federal authority extends in directing private AI development.