On June 6, U.S. Treasury Secretary Janet Yellen issued a stark warning about the potential risks associated with the widespread adoption of artificial intelligence (AI) in the financial sector. Yellen's remarks came during the opening of a Financial Stability Oversight Council (FSOC) meeting focused on AI and financial stability.

Yellen's comments mark her most comprehensive statement on AI to date. A U.S. Treasury official revealed that Yellen has personally experimented with AI chatbots. The FSOC, created in the wake of the 2008 financial crisis and comprising the Treasury Department and other regulatory bodies, held this two-day event in collaboration with the Brookings Institution. The head of the Office of the Comptroller of the Currency (OCC) was also scheduled to speak.

The meeting aimed to discuss potential systemic risks from AI in financial services and share insights on encouraging innovation and effective regulation. Attendees included regulators, executives from tech companies, insurance firms, asset management institutions, scholars, and bank representatives.

Yellen highlighted that AI's implications for financial stability are a high-priority topic for the Biden administration. She noted that the opportunities and risks posed by AI have become a primary focus for both the Treasury Department and FSOC.

"Advances in natural language processing, image recognition, and generative AI, for example, create new opportunities to make financial services less costly and easier to access," Yellen said, citing its predictive capabilities for portfolio management, its ability to detect fraud and illicit financing, and the automation of customer service.

Yellen also recognized the benefits of AI in the automation of customer support services, improved efficiency, fraud detection and combating illicit finance. 

However, Yellen cautioned that the use of AI in financial institutions carries "significant risks." She identified potential vulnerabilities stemming from the complexity and opacity of AI models, inadequate risk management frameworks, and the interconnected nature of market participants relying on the same data and models.

The concentration of suppliers developing models, providing data, and offering cloud services could exacerbate existing third-party risks. Insufficient or flawed data might perpetuate existing biases or introduce new biases into financial decision-making. Analysts have suggested that Yellen is concerned about the possibility that widespread reliance on AI tools yielding similar outcomes could lead to crowded trades, thereby amplifying market volatility.

Moreover, if only a few companies are capable of providing AI models, any issues at one of these companies could have widespread repercussions across many financial institutions. While Yellen did not address the potential for AI systems to generate erroneous answers, known as "hallucinations," she underscored the broader risks of biased or discriminatory outcomes from AI models.

A Treasury Department official indicated that FSOC is actively investigating how AI might threaten the financial system and intensifying efforts to monitor AI's use in the financial industry. Yellen noted that the Treasury itself has been using AI to combat illegal financial activities, such as money laundering, terrorist financing, and sanctions evasion. The IRS, she added, employs AI to enhance fraud detection in tax evasion cases.

Moving forward, the U.S. government and financial regulators will continue to expand their use of the latest technologies, deepen their understanding of AI applications in financial services, and seek input from market participants, consumers, scholars, and the public on AI's role in finance.

The Treasury plans to host a roundtable on AI and the insurance industry, focusing on preventing AI-driven lending discrimination and other consumer protection measures. Regular dialogues with domestic and international financial regulators will explore and monitor AI's impact on the global financial system and economy.

Yellen emphasized that, beyond promoting information sharing and dialogue, FSOC and its member agencies will enhance their regulatory capabilities concerning AI. Building on existing risk management frameworks, they will introduce "scenario analysis" to help identify potential future vulnerabilities and determine measures to bolster resilience.

In its 2023 annual report, FSOC first warned that AI posed a potential threat to the financial system, identifying its widespread use in financial services as an "emerging vulnerability." The report cited risks such as cybersecurity, compliance, and user privacy protection, expressing concerns over the complexity of generative AI models like ChatGPT. These models could produce flawed results, and their opaque nature makes it difficult to evaluate their reliability and suitability, thus increasing uncertainty.

Yellen reiterated these concerns in her speech, noting that the Treasury aims to address operational risks, cybersecurity, and fraud challenges related to AI. The FSOC report highlighted that as AI methods grow more complex, errors and biases become harder to detect and correct, stressing the need for vigilance among AI developers, financial companies using AI, and their regulators.