OpenAI, the prominent artificial intelligence research lab, has announced the establishment of a new safety and security committee in response to internal dissent and key departures from the company. This move comes just weeks after the dissolution of its AI safety team, signaling a renewed commitment to addressing safety concerns as the company trains its next major AI model.

In a blog post on Tuesday, OpenAI revealed that the newly formed committee would be led by CEO Sam Altman, Board Chair Bret Taylor, and board member Nicole Seligman. This announcement follows the resignation of Jan Leike, a key executive focused on AI safety, who left the company citing insufficient investment in safety measures and escalating tensions with leadership.

The new committee is tasked with evaluating and enhancing OpenAI's safety protocols over the next 90 days. "At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full Board. Following the full Board's review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security," the company stated in its blog post.

This development occurs amidst significant upheaval within OpenAI. Ilya Sutskever, another pivotal figure on the company's safety team, known as the "superalignment" team, also departed earlier this month. Sutskever had previously played a controversial role in Altman's brief ousting as CEO last year, only to later support his reinstatement.

OpenAI's decision to dissolve the superalignment team and redistribute its members across the company was intended to help it better achieve the team's goals, according to a spokesperson. However, this restructuring has not been without criticism, both from within and outside the company.

The formation of the new committee, which includes technical and policy leaders such as Aleksander Mądry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki, is seen as a strategic move to reassure stakeholders about the company's commitment to AI safety. The committee will also consult with external safety and security experts, including former cybersecurity official Rob Joyce and former top DOJ official John Carlin.

In conjunction with the announcement of the safety committee, OpenAI confirmed that it has started training its next large language model, anticipated to be a successor to GPT-4. This new model represents another step toward artificial general intelligence, a long-term goal for the company. Mira Murati, OpenAI's CTO, hinted earlier this month at the model's significant advancements, and Microsoft's CTO Kevin Scott suggested that the new model would be substantially larger than GPT-4.

These developments follow the high-profile exits of Leike and Sutskever, who criticized the company for not adequately supporting its long-term safety efforts. Policy researcher Gretchen Krueger also announced her departure last week, echoing similar concerns.

John Schulman, a co-founder of OpenAI, has now taken on an expanded role as head of alignment science, overseeing both short-term safety and long-term superalignment research. This consolidation within the research unit is intended to enhance effectiveness and increase investment in safety measures over time, according to a source familiar with the company's plans.

The source emphasized that OpenAI is committed to addressing any valid criticisms of its work and expanding on its commitments to the White House and at recent AI summits. "We welcome a robust debate at this important moment," OpenAI stated, indicating its openness to scrutiny and improvement.