The Trump administration has blacklisted Anthropic after the artificial intelligence company refused to remove safeguards that prevent its chatbot Claude from being used for domestic mass surveillance or in fully autonomous weapons. The decision terminated a contract worth up to £159 million ($200 million) and opened the door for OpenAI to step into classified Pentagon networks.
President Donald Trump ordered federal agencies on Feb. 28, 2026 to "immediately cease" using Anthropic's technology, escalating a week-long confrontation between Washington and the San Francisco-based AI firm. Within hours, OpenAI Chief Executive Sam Altman announced that his company had reached an agreement with the U.S. Department of Defense to deploy its models under what he described as shared "red lines."
The dispute centered on two restrictions embedded in Anthropic's acceptable-use policy: prohibitions on domestic mass surveillance and on deploying AI in fully autonomous weapons systems. The Pentagon insisted that vendors make their models available for "all lawful purposes," leaving such determinations to the military.
Anthropic CEO Dario Amodei rejected that position. "These threats do not change our position," he wrote Thursday. "We cannot in good conscience accede to their request." He also highlighted what he called a contradiction in the government's stance: "One labels us a security risk; the other labels Claude as essential to national security."
The confrontation intensified after Defense Secretary Pete Hegseth imposed a Friday 17:01 ET deadline for compliance. He warned that failure could result in Anthropic being designated a "Supply-Chain Risk to National Security" and raised the possibility of invoking the Defense Production Act to compel broader access.
When the deadline passed without agreement, Hegseth formally issued the designation, a measure typically reserved for firms linked to adversarial nations. The classification bars companies doing business with the Pentagon from using Anthropic's technology, a move with potential ripple effects across defense contractors.
Anthropic's government contract, signed in July 2025, positioned it as the first AI developer approved for classified Defense Department networks. The agreement emphasized advancing "responsible AI in defence operations," including explicit guardrails.
Key financial and strategic stakes include:
- Contract value: up to £159 million ($200 million)
- Company valuation: approximately £302 billion ($380 billion)
- Reported revenues: £11.1 billion ($14 billion)
- Enterprise clients tied to defense contractors facing compliance decisions
Anthropic has disputed the scope of the supply-chain designation, stating that under federal law it would apply only to "the use of Claude as part of Department of War contracts" and that "it cannot affect how contractors use Claude to serve other customers."
OpenAI moved swiftly. Altman wrote that the Pentagon had "displayed a deep respect for safety," adding that the agreement maintained "no domestic mass surveillance and human oversight for all use-of-force decisions." He later said the restrictions "reflect existing US law and Pentagon policy."
In an interview with CNBC, Altman said companies should work with the military "as long as it is going to comply with legal protections" and respect "the few red lines that we share with Anthropic."