A new global audit of artificial intelligence safety practices has placed OpenAI, Google, Meta and xAI under renewed scrutiny, after the Future of Life Institute reported that none of the leading AI developers meets what researchers describe as emerging global safety standards. The findings, released in the Institute's latest AI Safety Index, add to growing concern among policymakers in the U.S., Europe and Asia over whether companies racing to release increasingly powerful AI systems have adequate controls in place to govern them.
The independent panel behind the report evaluated each firm's readiness to manage risks posed by advanced AI models, including systems approaching superintelligent capabilities. According to the assessment, the companies lack credible strategies for identifying, monitoring or mitigating dangers associated with frontier-scale machine learning development. The shortcomings span governance, transparency, and operational safety, despite the industry spending what the report characterizes as "hundreds of billions" on increasingly capable model families.
Several previous incidents involving chatbots and user self-harm have amplified concerns about weak oversight, though a direct causal link has not been established in those cases. The report argues that the rapid deployment of chatbots and generative systems has far outpaced the establishment of rigorous safeguards. Researchers noted that companies continue to release successive model upgrades - including systems such as Google's Gemini 2.5 Pro - even before public risk disclosures or detailed model cards have been published.
Max Tegmark, president of the Future of Life Institute, offered one of the sharpest critiques, saying U.S. AI companies are "less regulated than restaurants" even as they resist binding federal safety requirements. His comments reflect growing tension between large AI labs and scientists who have publicly warned against the unchecked development of superintelligent systems.
Prominent researchers including Geoffrey Hinton and Yoshua Bengio have recently called for a temporary moratorium on such development until stronger guardrails are established. Their appeal follows increasing government interest in AI risks, as regulators weigh liability reforms, mandatory disclosures, and operational testing requirements for high-risk models.
The report's most technical assessment evaluated firms across four safety dimensions - risk identification, risk analysis, risk treatment and governance. Most companies achieved only 8% to 35% compliance with established criteria used in safety-critical industries such as aviation or nuclear energy. The reviewers cited missing risk thresholds, a lack of standardized tests for unknown or emergent behaviors, and limited documentation of pre-deployment model evaluations.
Cybersecurity analysts say the capabilities of frontier models heighten the urgency. As newer systems grow more competent, they become more effective when misused - enabling sophisticated malware generation, automated vulnerability exploitation, and the production of step-by-step weapon-building instructions, according to assessments cited by CNBC. Critics argue that industry enthusiasm for rapid commercialization has overshadowed the discipline needed to contain such risks.