OpenAI's flagship chatbot ChatGPT is facing a wave of consumer backlash after the company secured a contract with the U.S. Department of Defense, triggering a surge in app deletions and negative user reviews that analysts say highlights growing tensions between artificial intelligence development and public concerns about military use.

New data from mobile analytics firm Sensor Tower indicates that ChatGPT mobile app uninstalls in the United States spiked 295% on Feb. 28, a dramatic increase compared with the app's typical daily churn rate of roughly 9%.

The surge followed news that OpenAI had reached an agreement to deploy AI systems on classified Pentagon networks, a development that ignited debate across social media platforms and prompted calls for users to abandon the application.

At the same time, the app experienced a noticeable decline in new downloads. According to Sensor Tower data:

  • Installs fell 13% on Feb. 28
  • Downloads dropped another 5% on March 1

The reaction quickly spilled into app-store ratings. Sensor Tower reported that one-star reviews jumped 775% on Feb. 28, compared with the previous day, while negative ratings rose another 100% the following day. During the same period, five-star reviews declined by about 50%, reflecting a sharp shift in user sentiment.

Many of the critical reviews cited concerns about artificial intelligence being integrated into military systems. Users posting online questioned transparency around the agreement and expressed unease about the role of AI technologies in warfare and national security operations.

The controversy unfolded alongside developments involving Anthropic, an OpenAI rival whose chatbot Claude has gained traction with users seeking alternatives.

According to Sensor Tower, Claude's U.S. usage surged 51% on Feb. 28, helping the application reach the No. 1 position in the Productivity category on the U.S. App Store on March 2.

Online discussions on platforms such as Reddit and X suggested some users were switching to Claude partly because of reports that Anthropic declined certain Department of Defense partnership proposals. Although no direct causal link has been confirmed, social-media posts frequently cited that stance when encouraging others to try competing AI services.

Screenshots shared across forums showed users deleting the ChatGPT app and promoting what participants called the "Quit ChatGPT" campaign.

The movement has coalesced around the hashtag #QuitGPT, with organizers claiming more than 1.5 million users have pledged to remove the application or cancel subscriptions.

While ChatGPT remains the dominant AI chatbot platform, with roughly 900 million weekly users globally, analysts say the episode underscores how quickly public sentiment toward technology companies can shift when ethical concerns arise.

The backlash also highlights the business stakes for OpenAI's subscription model. ChatGPT operates on a freemium structure in which paid plans represent a significant source of revenue. If cancellations extend beyond a short-term protest, analysts say the financial implications could become more substantial.

OpenAI Chief Executive Sam Altman addressed the criticism directly in a post on X, seeking to reassure users that the company's government partnerships would not enable domestic surveillance.

Altman said the company's contracts do not permit domestic mass surveillance or monitoring of Americans, citing constitutional safeguards such as the Fourth Amendment and existing national-security statutes governing intelligence activities.

Altman also noted that any intelligence-related deployments would require separate contractual frameworks, signaling that the Pentagon partnership would not automatically extend to broader surveillance or data-collection activities.