OpenAI, the creator of the chatbot ChatGPT, is facing an investigation by the U.S. Federal Trade Commission (FTC), according to media reports on Thursday. The probe, initiated through a civil investigative demand (CID) letter sent to OpenAI, focuses mainly on whether the chatbot has harmed individuals by disseminating false information and on how OpenAI has managed such risks.
This marks the first time U.S. regulators have formally scrutinized the risks posed by artificial intelligence chatbots. The FTC, under the leadership of Lina Khan, has broad authority to regulate unfair and deceptive business practices, posing a potential legal threat to ChatGPT, one of the most popular applications worldwide.
The FTC's letter posed several detailed inquiries about the measures OpenAI has taken to mitigate the risk that its large language model products generate false, misleading, or defamatory statements about real individuals.
Legislators are currently particularly concerned about the risks of so-called deepfake videos, which falsely depict real individuals engaging in embarrassing actions or making embarrassing statements.
However, some industry insiders believe the FTC is stretching its authority too far. Adam Kovacevich, CEO of the tech trade group Chamber of Progress, was quoted in media reports as saying, "Is it within the FTC's jurisdiction when ChatGPT makes a false statement about someone that could potentially harm their reputation? I think that's far from clear. Such matters fall more into the realm of free speech and are beyond the FTC's authority."
In the CID letter, the FTC also inquired about OpenAI's data security and privacy practices, scrutinizing whether there has been any unfair or deceptive conduct. The FTC referred to an incident in 2023 in which OpenAI disclosed a bug that allowed users to view other users' chat histories and some payment-related information.
The letter included several dozen questions covering other topics as well, such as the technical details of ChatGPT's design, the company's practices for training AI models, its marketing strategies, its handling of users' personal information and of user complaints, and how it assesses people's understanding of the chatbot's accuracy and reliability. The FTC asked OpenAI to share relevant internal materials.
This move by the FTC was not unexpected. In May this year, the agency issued a warning stating that it was closely monitoring how companies choose to use artificial intelligence technologies, including new generative AI tools, in ways that have substantial and tangible effects on consumers.
On Thursday, Lina Khan testified before the U.S. House Judiciary Committee, where she faced strong criticism from Republican members over her aggressive enforcement stance. When asked about the investigation into OpenAI during the hearing, Khan declined to comment but stated, "The FTC's broader concerns include ChatGPT and other AI services being trained on massive amounts of data without scrutiny of what kind of data these companies are using. We've heard reports of sensitive information being exposed in response to other people's inquiries. Defamatory statements and outright untruths are emerging. That's the deception we're worried about."
As generative AI products gain popularity worldwide, they are increasingly drawing the attention of regulators. In May this year, the Biden administration sought to draft a national AI strategy to guard against misinformation and other potential harms of the technology. In March, Italian regulators briefly banned ChatGPT while examining the company's collection of personal information, among other issues. ChatGPT was allowed to resume operating in Italy a few weeks later, after OpenAI made its privacy policy more accessible and introduced tools to verify users' ages.