Elon Musk, along with over 1,000 artificial intelligence (AI) experts and industry leaders, is advocating for a six-month pause in developing AI systems more sophisticated than OpenAI's recently launched GPT-4. Their concerns are expressed in an open letter issued by the non-profit Future of Life Institute, highlighting potential threats to society and humanity.
GPT-4, from Microsoft-backed OpenAI, has gained attention for its wide range of capabilities, from holding human-like conversations to composing music and summarizing lengthy documents. The open letter, signed by AI professionals including Musk, requests a pause in advanced AI development until shared safety protocols can be devised, implemented, and audited by independent experts. The letter asserts, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
OpenAI has not yet commented on the issue.
The letter details potential risks to society and civilization from human-competitive AI systems, including economic and political disruption, and calls on developers to work with policymakers to create AI governance systems and regulatory authorities.
Signatories include Emad Mostaque, CEO of Stability AI; researchers from Alphabet-owned DeepMind; and AI pioneers Yoshua Bengio and Stuart Russell. The Future of Life Institute's primary funding sources include the Musk Foundation, London-based effective altruism group Founders Pledge, and the Silicon Valley Community Foundation, according to the European Union's transparency register.
These concerns come as Europol, the EU's law-enforcement agency, recently raised ethical and legal concerns about advanced AI systems such as ChatGPT, warning of their potential misuse in phishing attempts, disinformation campaigns, and cybercrime.
Meanwhile, the UK government proposed an "adaptable" regulatory framework for AI, which would allocate responsibility for AI governance among its human rights, health and safety, and competition regulators, rather than establishing a new dedicated body.
Musk, whose carmaker Tesla uses AI in its Autopilot system, has been a vocal critic of unchecked AI development and is among the letter's most prominent signatories.
New York University professor Gary Marcus, a letter signatory, said, "The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications."
However, some critics argue that the letter's signatories are promoting "AI hype," claiming that the technology's current potential is greatly exaggerated. Johanna Björklund, an AI researcher and associate professor at Umeå University, said, "These kinds of statements are meant to raise hype... I don't think there's a need to pull the handbrake." Björklund advocates for greater transparency requirements for AI researchers, rather than halting research. She believes that "if you do AI research, you should be very transparent about how you do it."
Since its launch last year, ChatGPT has prompted rivals to accelerate work on similar large language models and spurred numerous companies to integrate generative AI models into their products. Last week, OpenAI announced partnerships with around a dozen firms, enabling ChatGPT users to order groceries through Instacart or book flights via Expedia. Sam Altman, OpenAI's chief executive, has not signed the letter, according to a Future of Life spokesperson.
As AI technologies continue to evolve and spread, the debate over their impact on society, and over the regulations needed to ensure their safe and responsible use, is likely to persist. The open letter is a reminder that the benefits of powerful AI systems must be weighed against their risks, and that developers, policymakers, and industry leaders will need to work together on safety protocols and regulatory frameworks that address these concerns while still allowing AI to advance and deliver valuable solutions.