OpenAI Chief Executive Sam Altman told employees that technology companies ultimately cannot control how governments deploy artificial intelligence systems once they are adopted for official use, underscoring the growing tension between AI developers and policymakers as governments expand their use of advanced software for defense and national-security operations.
Altman delivered the message during an internal town hall meeting with staff, according to people familiar with the discussion, as OpenAI faces increased scrutiny over its collaboration with the U.S. Department of Defense. The remarks reflect a broader debate emerging across the global technology sector about whether AI companies should impose strict limits on how their products are used by governments.
The issue has become especially contentious as nations race to integrate AI into military, intelligence and public-sector operations.
Altman told employees that while OpenAI can design safeguards and policies around its systems, operational decisions about how the technology is used ultimately belong to governments that deploy it.
He emphasized that OpenAI staff cannot dictate those decisions.
To illustrate the point, Altman referenced geopolitical scenarios about which employees might hold personal views.
He said workers may have opinions about events such as the "Iran strike" or the "Venezuela invasion," but those perspectives would not determine how governments choose to act once they control the technology.
The comments came shortly after OpenAI finalized an agreement to provide AI systems to the Pentagon for use within classified government networks.
The deal followed negotiations between the Defense Department and Anthropic, a rival artificial-intelligence company whose chatbot Claude has gained popularity among developers. Those talks reportedly collapsed after Anthropic declined to allow certain uses of its systems within defense environments because of ethical concerns.
The contrasting approaches highlight a widening philosophical divide within the AI industry about the proper relationship between technology firms and national governments.
Some developers argue that companies should strictly limit how their systems are deployed, particularly in areas involving surveillance or military targeting.
Others contend that governments, as sovereign authorities, must ultimately determine national-security policies.
The debate has intensified as AI capabilities advance and governments seek to incorporate the technology into strategic planning.
Critics of military partnerships with AI companies warn that advanced models could enable controversial applications, including automated intelligence analysis, large-scale surveillance systems or tools that assist in battlefield decision-making.
Anthropic has publicly opposed deploying its technology in some scenarios, including mass surveillance and fully autonomous weapons systems, according to statements cited by industry observers.
OpenAI has taken a different position, emphasizing safeguards within its agreements with government partners.
Company officials say the firm attempts to embed protections against certain uses, including domestic mass surveillance and autonomous lethal weapons, though critics question whether such restrictions can be effectively enforced once governments control the infrastructure.
Altman's remarks appear to acknowledge that limitation.
By distinguishing between building AI technology and governing its operational use, the OpenAI chief framed the company's role as a supplier rather than a policymaker in matters of national security.
The remarks come as artificial intelligence becomes increasingly central to geopolitical competition, with governments investing billions of dollars to develop systems capable of supporting intelligence analysis, logistics planning and cyber defense.