OpenAI is confronting a surge of user backlash after confirming an agreement with the U.S. Department of Defense to deploy its artificial intelligence models on classified government networks, a move that has prompted a visible wave of ChatGPT subscription cancellations and revived long-standing concerns about AI ethics and data privacy.

The agreement, which allows OpenAI technology to be used in defense settings, quickly ignited a "Cancel ChatGPT" trend across Reddit and X. Users posted screenshots of canceled subscriptions and urged others to migrate to rival platforms, framing the partnership as a turning point for the company's relationship with consumers.

OpenAI has said the contract includes safeguards designed to prevent misuse. The company emphasized that the arrangement contains "guardrails" and reiterated that its AI models will not be used to develop fully autonomous weapons or conduct mass surveillance.

Still, critics have focused on language permitting use for "all lawful purposes," arguing that such phrasing leaves room for broad interpretation once systems are embedded within military infrastructure.

Within hours of the news spreading, Reddit forums dedicated to artificial intelligence were filled with posts criticizing the partnership. According to Windows Central, the "'Cancel ChatGPT' movement was going mainstream" following the OpenAI-Department of Defense agreement. Some users described the move as a breach of trust, while others shared detailed instructions on how to delete accounts and export personal data.

Although OpenAI has not released figures on subscription losses, the protest has remained highly visible across social media platforms, where phrases such as "no ethics" and "selling out" trended within AI-focused communities.

At the same time, Anthropic's chatbot Claude rose to the top of Apple App Store rankings in several regions. While the ranking shift cannot be attributed solely to OpenAI's defense contract, the timing has fueled speculation that users are testing alternatives.

The controversy has sharpened comparisons between OpenAI and Anthropic. Earlier reports indicated that Anthropic declined certain government arrangements over concerns related to mass surveillance and fully autonomous weapons systems. OpenAI, by contrast, has maintained that its defense agreement contains explicit red lines prohibiting those uses.

Privacy concerns remain central to the backlash. Some subscribers fear that user interaction data could be exposed to government agencies or incorporated into defense-related projects. OpenAI has not indicated that individual user conversations will be shared with the Department of Defense, and there is no evidence that private accounts are directly accessible under the agreement.

Even so, perception has become a decisive factor. For many users, the association of a consumer-facing AI product with military applications has triggered broader unease about how commercial AI models intersect with national security priorities.