California Governor Gavin Newsom vetoed a landmark bill on Sunday that aimed to establish some of the nation's first safety regulations for large artificial intelligence (AI) models. The decision has drawn widespread attention, as the proposed legislation would have positioned California as a leader in regulating the rapidly advancing AI industry. However, Newsom expressed concerns that the bill's stringent requirements could stifle innovation and harm the state's homegrown tech sector.

The AI safety bill, Senate Bill 1047, authored by Democratic State Senator Scott Wiener, sought to impose regulations on advanced AI models, requiring developers to conduct safety testing and disclose risk protocols. It targeted AI systems with development costs exceeding $100 million, reflecting concerns about the power and potential dangers of these emerging technologies. The bill also included whistleblower protections for workers in the industry and required companies to establish a mechanism to deactivate AI models in the event of misuse.

Newsom's veto was seen as a significant setback for proponents of AI regulation, who argue that without oversight, AI's rapid growth could lead to serious societal risks, including job loss, misinformation, and security threats. Speaking earlier this month at Dreamforce, an annual tech conference hosted by Salesforce, Newsom acknowledged the need for AI regulation but warned that the proposed legislation could have "a chilling effect" on the industry. He reiterated this concern in a statement following his veto.

"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments or involves critical decision-making or sensitive data," Newsom said. "Instead, it applies stringent standards to all large systems. I do not believe this is the best approach to protecting the public from real threats posed by AI."

Instead of implementing the bill, Newsom announced plans to collaborate with leading AI experts to develop a more nuanced framework. Among these experts is AI pioneer Fei-Fei Li, who has voiced opposition to the bill. Newsom's office said it would work to create "workable guardrails" that balance safety with the need for innovation in the AI sector.

The bill faced strong opposition from tech giants and startups alike. Companies such as Google, OpenAI, and Meta Platforms, leaders in developing generative AI models, argued that the legislation could have unintended consequences, potentially driving AI development out of California. Former U.S. House Speaker Nancy Pelosi also opposed the bill, warning that it could "kill California tech" by discouraging investment in advanced AI models.

Proponents of the legislation, including Tesla CEO Elon Musk and the AI safety company Anthropic, praised the effort to bring accountability and transparency to AI development. Musk, who also runs the AI firm xAI, has been vocal about the potential dangers of uncontrolled AI development, advocating for stronger regulatory oversight. "This bill could have injected some transparency and accountability into a rapidly evolving industry," said Daniel Kokotajlo, a former researcher at OpenAI who resigned over concerns about AI risks. Kokotajlo emphasized the growing power and potential dangers of large AI models, noting that "this is a crazy amount of power for any private company to control unaccountably."

The veto reflects a broader debate about the balance between innovation and regulation in the tech industry, particularly as AI becomes increasingly integrated into various sectors. AI models, capable of generating text, images, and videos in response to prompts, have sparked both excitement and concern. The potential for these systems to spread misinformation, automate jobs, and invade privacy has led to calls for stronger safeguards.

Senator Wiener expressed disappointment with Newsom's decision, warning that the lack of regulation leaves Californians vulnerable to the risks posed by AI. "The veto makes California less safe," Wiener said. "Voluntary commitments from industry are not enforceable and rarely work out well for the public. We cannot afford to wait for a major catastrophe before taking action."

The bill's rejection marks another victory for the tech industry, which has spent considerable resources lobbying against state-level AI regulations. The California Chamber of Commerce and other industry groups argued that the bill's broad requirements could stifle innovation and hurt the state's competitive edge in AI development. Chamber of Progress, a tech industry coalition, praised Newsom's veto, stating that "California's tech economy has always thrived on competition and openness."

Despite the veto, Newsom acknowledged the growing concerns about AI and announced that the state would continue assessing the risks posed by the technology. He has directed state agencies to expand their evaluations of potential threats, particularly to critical infrastructure such as energy and water systems. This move comes as the Biden administration advances its own regulatory proposals for AI, though legislation in Congress has stalled.

Newsom also hinted that a California-specific approach to AI regulation could be on the horizon, especially in the absence of federal action. "A California-only approach may well be warranted," he said, adding that he plans to work with the state legislature on AI regulation in its next session.