The history of internet technology is filled with battles between open-source and closed systems. While some companies tightly guard the tools most essential to building future computing platforms, others give those tools away.

In February of this year, amid an intensifying AI race, Meta took an unusual step. It decided to give away the "jewel in its AI crown" by publicly releasing the underlying code of its large language model, LLaMA.

Essentially, Meta offered its AI technology as open-source software (computer code that can be freely copied, modified, and reused), providing everything needed for outsiders to rapidly build their own chatbots. Scholars, government researchers, or anyone else can download the code by providing an email address to Meta and passing the company's review.

Meta believes that as the AI race escalates, the wisest approach is to share its underlying AI engine as a way to spread its influence and ultimately move faster toward the future. Meta's Chief AI Scientist, Yann LeCun, said in an interview with The Information that "the winning platform will be the open one."

Meta's approach contrasts sharply with that of Google and OpenAI, which are leading the new wave of AI competition. Fearing that AI tools like chatbots could be used to spread false information, hate speech, and other toxic content, these companies have become increasingly secretive about the methods and software underpinning their AI products.

Google, OpenAI, and other companies have criticized Meta, arguing that an unregulated open-source approach is dangerous. AI's rapid rise in recent months has set off alarm bells about the technology's risks, including its potential to disrupt the job market if deployed improperly. Within days of LLaMA's release, the system leaked onto 4chan, an online message board notorious for spreading false and misleading information.

Zoubin Ghahramani, a Google Research vice president who oversees AI work, said the company needs to think more carefully about whether it discloses details of its AI technology or open-sources its code, given the potential for misuse.

There are also concerns within Google about whether open-source AI technology could pose a competitive threat. A Google engineer warned colleagues in a leaked internal memo that the rise of open-source software like LLaMA could cause Google and OpenAI to lose their leading positions in AI.

As this Google engineer put it, "If there is a free, unrestricted, high-quality alternative, who would pay for Google's products?"

However, Meta sees no reason to keep its code secret. Dr. LeCun says the growing secrecy at Google and OpenAI is a "massive mistake" and reflects a "very poor understanding of what's happening." He argues that unless AI moves beyond the control of big tech companies like Google and Meta, consumers and governments will refuse to embrace it.

Dr. LeCun poses the question, "Do you want every AI system to be controlled by a few powerful American companies?"

At Stanford University, researchers used Meta's new technology to build their own AI system and made it available on the internet. Screenshots viewed by The New York Times showed that a Stanford researcher named Moussa Doumbouya quickly used the LLaMA system to generate problematic text. In one instance, the system provided instructions for disposing of a body without getting caught; it also generated racist material, including comments supporting the views of Adolf Hitler.

In a private chat among researchers, Doumbouya likened distributing the technology to the public to "everyone in a grocery store being able to get a grenade."

Stanford University swiftly removed this AI system from the internet. Stanford professor Tatsunori Hashimoto, who led the project, stated, "We pulled the demo because we are increasingly worried about potential misuse outside of research."

Dr. LeCun believes the technology is not as dangerous as it might seem. He notes that small numbers of people can already create and spread false information and hate speech, and adds that such toxic material can be tightly restricted by social networks like Facebook.

He says, "You can't stop people from creating toxic information, but you can stop the spread of toxic information."