OpenAI, the prominent artificial intelligence company behind ChatGPT, has recently disbanded its Superalignment team, a research group dedicated to ensuring the safe control of future superintelligent AI systems. The team's dissolution comes amid staff resignations and accusations that OpenAI has prioritized product launches over AI safety, raising questions about the company's commitment to its stated goals and public promises.

In July 2023, OpenAI announced the formation of the Superalignment team, co-led by Ilya Sutskever, the company's chief scientist and co-founder, and Jan Leike, a long-time OpenAI researcher. At the time, OpenAI pledged to dedicate 20% of its then-available computing resources to the team's efforts over the next four years, signaling the importance of the team's work in preventing potentially superintelligent AI from "going rogue."

However, according to six sources familiar with the functioning of the Superalignment team, OpenAI never fulfilled its commitment to provide the team with the promised computing power, as reported by Fortune. The team repeatedly saw its requests for access to graphics processing units (GPUs), which are crucial for training and running AI applications, turned down by OpenAI's leadership, even though the team's total compute budget never approached the 20% threshold.

The revelations have called into question the sincerity of OpenAI's public pledge and the credibility of the company's other public commitments. The disbandment of the Superalignment team follows the departures of Sutskever and Leike, as well as those of at least six other AI safety researchers from various teams within the company in recent months.

Jan Leike, who announced his resignation on Friday, criticized his former employer in a series of posts on X (formerly Twitter), stating that "safety culture and processes have taken a backseat to shiny products." He also mentioned the team's struggles with accessing compute, saying, "Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."

The compute allocation issues faced by the Superalignment team were reportedly exacerbated in the wake of a pre-Thanksgiving showdown between OpenAI CEO Sam Altman and the board of the OpenAI nonprofit. Sutskever, who was on the board and had voted to fire Altman, never returned to work at OpenAI following Altman's rehiring. With Sutskever's departure, the Superalignment team lost a key advocate for its compute allocation within the organization.

OpenAI is also facing a backlash over its use of a voice for its AI speech generation features that bears a striking resemblance to actress Scarlett Johansson's voice. Johansson claims that Altman approached her twice for permission to use her voice, which she declined, casting doubt on OpenAI's public statements that the similarity is purely coincidental.

In response to Leike's comments, Altman and OpenAI co-founder Greg Brockman posted on X, expressing their gratitude for Leike's contributions and emphasizing the need to elevate the company's safety work to match the stakes of each new model. They outlined their view of the company's approach to AI safety going forward, which would involve a greater emphasis on testing models currently under development rather than developing theoretical approaches for future, more powerful models.