OpenAI has signed a $38 billion agreement with Amazon Web Services to secure access to hundreds of thousands of Nvidia processors, marking its first major partnership with the world's largest cloud provider and a clear step away from its long-standing dependence on Microsoft.

The deal, announced Monday, allows the ChatGPT developer to immediately begin running AI workloads on AWS infrastructure in the United States. According to Amazon executives, OpenAI will expand capacity through 2026, with future growth tied to additional data center buildouts. "It's completely separate capacity that we're putting down," said Dave Brown, Amazon's vice president of compute and machine learning services. "Some of that capacity is already available, and OpenAI is making use of that."

Amazon's stock rose roughly 5% following the announcement, reflecting investor optimism over AWS's strengthening position in the AI infrastructure race. "The breadth and immediate availability of optimized compute demonstrates why AWS is uniquely positioned to support OpenAI's vast AI workloads," AWS CEO Matt Garman said in a statement.

The partnership underscores OpenAI's growing effort to diversify its infrastructure providers after Microsoft's exclusive cloud arrangement expired earlier this year. Since then, OpenAI has signed large-scale buildout agreements worth more than $1.4 trillion with Oracle, Google, Nvidia, and Broadcom, triggering both excitement over AI's rapid industrialization and concern among analysts who warn of a speculative bubble in the sector.

"Scaling frontier AI requires massive, reliable compute," OpenAI CEO Sam Altman said in the release. "Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone."

Under the new pact, OpenAI will use AWS servers equipped with Nvidia's latest Blackwell GPUs to train and deploy next-generation foundation models. The infrastructure will power both inference tasks, such as ChatGPT's real-time responses, and model-training workloads. AWS will also construct dedicated facilities for OpenAI's use, though the companies have not disclosed specific geographic locations or timelines beyond 2026.

The agreement arrives amid record spending by OpenAI, whose compute commitments now exceed $1 trillion for the decade. Despite its soaring costs, the company is preparing for an eventual public offering. Its recent reorganization into a for-profit public benefit corporation was viewed as a step toward a potential $1 trillion IPO valuation, according to Reuters. CFO Sarah Friar has described the restructuring as part of OpenAI's path to "financial independence and operational maturity."

Amazon's cloud unit has strong ties to OpenAI's main competitor, Anthropic, in which it has invested billions. AWS is currently building an $11 billion data center campus in Indiana dedicated to Anthropic's workloads and its use of Amazon's in-house Trainium chips. When asked about OpenAI's potential adoption of Trainium, Brown said, "We like Trainium because we're able to give customers something that gives them better price performance and honestly gives them choice," but declined to comment on "anything we've done with OpenAI on Trainium at this point."