On Thursday, September 28, reports emerged that NVIDIA's offices in France had been raided by the country's competition authority on suspicion of anti-competitive practices. This marks the first significant regulatory scrutiny NVIDIA has faced since becoming a leading supplier of artificial intelligence chips.

While the French Competition Authority disclosed a raid operation on Wednesday, it did not specify which practices were under investigation or which company was targeted, mentioning only a firm in the "graphics card industry." However, insiders have identified NVIDIA, the world's largest maker of AI chips and graphics processing units (GPUs), as the target.

Such raids typically last several hours, with officials arriving at the company's premises early in the morning to search the location, seize physical and digital materials, and interview employees on-site. The French Competition Authority confirmed that its raid had received judicial approval.

French authorities noted that this week's action grew out of a broader inquiry into the cloud computing sector, which raised concerns that larger providers could leverage their computing power to edge out smaller competitors.

Following the news, NVIDIA's stock gains quickly narrowed to less than 1%.

Riding the AI Wave

Amid the AI boom, NVIDIA has become a market darling thanks to its in-demand AI chips, but the company has also drawn controversy over allegations that it is monopolizing the market. Tech giants including Microsoft, Google, and Amazon are actively purchasing NVIDIA's AI chips, while numerous tech startups are exploring every avenue to acquire them to keep pace with the current AI frenzy.

Yet NVIDIA isn't without competition. To reduce the cost of AI chips, many Silicon Valley firms are collaborating with companies like Arm or developing their own AI chips.

Meta disclosed that it is building its first custom chip specifically for running AI models, called MTIA (Meta Training and Inference Accelerator). Built on the open-source RISC-V architecture, the chip is expected to debut in 2025.

Back in 2013, Google began secretly developing a chip focused on machine learning algorithms, intending to deploy it in its internal cloud computing data centers as a replacement for NVIDIA's GPUs. In May 2016, this in-house chip, known as the Tensor Processing Unit (TPU), was unveiled. TPUs execute the large-scale matrix operations behind deep learning models used in natural language processing, computer vision, and recommendation systems. Initially, the chip was designed exclusively for Google's own high-end cloud computing data centers.
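To give a rough sense of the kind of workload these accelerators target, below is a minimal, hypothetical JAX sketch of a dense-layer matrix multiplication; the shapes, names, and ReLU activation are illustrative assumptions, not a description of Google's actual TPU workloads. The same code runs unchanged on a TPU, GPU, or CPU, depending on which backend is available.

```python
# Illustrative sketch only: a dense-layer matrix multiply of the kind
# TPUs and GPUs accelerate. Shapes and names are hypothetical.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
activations = jax.random.normal(k1, (8192, 4096))  # a batch of input activations
weights = jax.random.normal(k2, (4096, 4096))      # one dense layer's weight matrix

@jax.jit  # compiled with XLA; runs on TPU, GPU, or CPU depending on the backend
def dense_layer(x, w):
    # Large matrix multiplication followed by a ReLU nonlinearity
    return jnp.maximum(x @ w, 0.0)

out = dense_layer(activations, weights)
print(out.shape)  # (8192, 4096)
```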