The Biden administration is set to intensify its strategy to safeguard critical U.S. artificial intelligence (AI) technologies from China by proposing restrictions on the export of advanced AI models, including those that underpin systems like ChatGPT. The U.S. Commerce Department is actively considering measures to limit the export of proprietary or closed-source AI models, which could significantly impact how these technologies are shared globally.

This regulatory push aims to prevent sensitive software and the extensive data it is trained on from falling into the hands of a strategic rival, thereby curbing China's capability to harness cutting-edge technologies for military purposes. Sources familiar with the matter indicate that the administration is exploring a computing power threshold as a basis for these export controls, which would require AI developers to report significant projects to the Commerce Department for oversight.

According to discussions with U.S. officials and industry insiders who spoke anonymously due to the sensitivity of the information, regulatory oversight would be triggered once the computing power used to train an AI model exceeds a set threshold. This measure is seen as a first step in managing the risks associated with the proliferation of sophisticated AI technologies.
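In sketch form, such a compute-based trigger amounts to a simple comparison between a training run's total compute and a fixed cutoff. The sketch below is purely illustrative: the 10^26-operation figure echoes the reporting trigger in the October 2023 AI executive order, but no export-control threshold has been finalized, and the function and variable names are hypothetical.

```python
# Illustrative sketch of a compute-based reporting threshold.
# The threshold value and names here are assumptions for illustration,
# not figures from any finalized export-control rule.

REPORTING_THRESHOLD_OPS = 1e26  # assumed cutoff, in total training operations


def requires_report(training_ops: float) -> bool:
    """Return True if a training run's total compute would exceed
    the assumed threshold and so trigger regulatory reporting."""
    return training_ops >= REPORTING_THRESHOLD_OPS


# A run using 3e26 operations would exceed the assumed threshold,
# while one using 5e24 would not.
print(requires_report(3e26))  # True
print(requires_report(5e24))  # False
```

The practical difficulty, as the article goes on to note, is not the comparison itself but measuring and verifying the compute actually used, and deciding whether such a cutoff can function as an enforceable chokepoint.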

Peter Harrell, a former National Security Council official, highlighted the challenges of implementing such controls, stating, "Whether you can, in fact, practically speaking, turn it into an export-controllable chokepoint remains to be seen." This sentiment underscores the complex nature of regulating a technology that is evolving more rapidly than the regulatory frameworks designed to control it.

The concerns are not unfounded, as think tanks and the U.S. intelligence community have raised alarms about the potential misuse of AI. Advanced AI models can mine vast amounts of text and images to summarize information and generate content, capabilities that could be exploited to mount aggressive cyberattacks or even to aid the development of biological weapons. The Department of Homeland Security has also warned that cyber actors are likely to leverage AI to develop new tools that enable larger-scale, more efficient, and more evasive attacks.

Despite these significant steps, regulating AI remains a formidable challenge due to the dual-use nature of much of the technology and the global availability of AI research and development. Many AI models are open-source, making them difficult to control under the proposed export regulations. This aspect of AI development necessitates a nuanced approach that considers both the open and proprietary nature of AI technologies.

Alan Estevez, who oversees U.S. export policy at the Department of Commerce, acknowledged the department's ongoing evaluation of regulatory options, emphasizing the need for industry feedback before finalizing any rules. The proposed controls are expected to focus initially on unreleased models, particularly those that require substantial computational resources, as indicated by the AI executive order issued last October.