Everything you need to know about California’s new AI law


This is educational material and does not constitute legal advice; no attorney-client relationship is created by this article. If you have any legal questions, contact and engage an attorney.


SB 53 Overview

On September 29, 2025, California became the first U.S. state to establish binding safety requirements for frontier artificial intelligence systems. Governor Gavin Newsom signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act, into law after vetoing a more aggressive predecessor just one year earlier.

While the European Union spent years developing its EU AI Act and other jurisdictions debate theoretical frameworks, California stepped into the vacuum created by federal inaction. With 32 of the top 50 global AI firms headquartered in California, this state-level regulation effectively sets the de facto standard for frontier AI development worldwide.

This article examines what SB 53 actually requires, who it affects, how enforcement works, and why California's approach differs fundamentally from both last year’s failed attempt and the European Union’s regulatory model.

Who This Actually Covers

Practically speaking, SB 53 will only apply to OpenAI, Anthropic, Google, Meta, and possibly xAI. That’s the list.

The law targets “large frontier developers” through two simultaneous thresholds. First, training compute exceeding 10^26 floating-point operations (FLOPs). This captures systems like GPT-4, Claude 3.5, Gemini 1.5, and Llama 3 while excluding specialized models like AlphaFold or narrow AI applications. Second, annual revenue exceeding $500 million. Both criteria must be met.

This dual-threshold approach eliminates ambiguity. Companies know immediately whether they’re subject to SB 53 based on objective metrics rather than subjective capability assessments. The California Department of Technology must review these definitions annually and recommend updates, providing built-in flexibility as the field advances.
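To make the dual-threshold logic concrete, here is a minimal illustrative sketch in Python. Nothing in SB 53 prescribes this form, and the function and variable names are hypothetical; the sketch simply encodes the two criteria described above, both of which must be exceeded.

```python
# Illustrative sketch only, not legal logic. Function and variable names are
# hypothetical; the two thresholds are the ones SB 53 uses, and both must be met.

TRAINING_COMPUTE_THRESHOLD_FLOPS = 1e26        # floating-point operations used in training
ANNUAL_REVENUE_THRESHOLD_USD = 500_000_000     # annual revenue in U.S. dollars

def is_large_frontier_developer(training_flops: float, annual_revenue_usd: float) -> bool:
    """Return True only when BOTH the compute and revenue thresholds are exceeded."""
    return (
        training_flops > TRAINING_COMPUTE_THRESHOLD_FLOPS
        and annual_revenue_usd > ANNUAL_REVENUE_THRESHOLD_USD
    )

# A lab with a 2e26-FLOP training run but only $200M in revenue falls outside the law.
print(is_large_frontier_developer(2e26, 200_000_000))   # False
print(is_large_frontier_developer(2e26, 750_000_000))   # True
```

The conjunction is the point of the design: a well-funded company training only small models, or a compute-rich lab with little revenue, stays outside the “large frontier developer” category.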

The law defines “foundation models” as AI systems trained on broad datasets, designed for generality of output, and adaptable to diverse tasks. This captures large language models and multimodal systems while excluding narrow applications. Critically, SB 53 takes an entity-based rather than model-based approach, regulating developers rather than individual systems.

“Catastrophic risk” receives a specific definition: a foreseeable and material risk that a frontier model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in property damage, arising from a single incident in which the model provides expert-level assistance with chemical, biological, radiological, or nuclear weapons; engages in criminal conduct without meaningful human oversight; or evades the control of its developer or user. This separates genuine existential concerns from routine AI failures.
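Read structurally, the definition pairs a harm threshold with a list of qualifying conduct. The sketch below is a hedged illustration of that pairing (the names are hypothetical, and only the numeric thresholds come from the description above): a single incident counts only if it both crosses a harm threshold and involves one of the enumerated categories.

```python
# Illustrative sketch of the definition's structure, not a legal test.
# Names are hypothetical; the figures are the statutory thresholds described above.

QUALIFYING_CONDUCT = {"cbrn_assistance", "autonomous_crime", "loss_of_control"}

def is_catastrophic_risk_incident(deaths_or_serious_injuries: int,
                                  property_damage_usd: float,
                                  conduct: str) -> bool:
    """A single incident must cross a harm threshold AND involve listed conduct."""
    crosses_harm_threshold = (
        deaths_or_serious_injuries > 50 or property_damage_usd > 1_000_000_000
    )
    return crosses_harm_threshold and conduct in QUALIFYING_CONDUCT

# A routine failure causing no mass harm is not a "catastrophic risk," however
# embarrassing; nor is mass harm unconnected to the enumerated conduct.
print(is_catastrophic_risk_incident(0, 50_000, "loss_of_control"))   # False
print(is_catastrophic_risk_incident(120, 0, "cbrn_assistance"))      # True
```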

The territorial reach extends to any frontier developer offering models to California residents or businesses, regardless of corporate structure or development location; so in this respect, it’s strikingly similar to the reach of the EU’s GDPR and EU AI Act.

The Disclosure Regime: Public vs Confidential

SB 53 builds a two-track transparency system. Some information must be made public, and some must be delivered confidentially to regulators.

On the public side, developers must publish a “frontier AI framework” describing how they assess and manage catastrophic risks, plus transparency reports before releasing new or substantially modified models. These are like mandatory model cards: capabilities, limitations, and results of safety testing must be on record, not just voluntarily shared when convenient.

On the confidential side, developers must also report “critical safety incidents” to California’s Office of Emergency Services: anything from major breaches to evidence of deceptive model behavior that could mask risks. This is somewhat analogous to how autonomous vehicle companies must report driverless vehicle accidents. These reports remain exempt from public records laws, allowing regulators to see sensitive data without exposing vulnerabilities.

The net effect is simple: companies can no longer cherry-pick what they share. Safety practices and risk events are on file, either for the public or for regulators.

Unusually, SB 53 lets California’s Office of Emergency Services designate federal reporting standards as satisfying state law, even where federal rules don’t preempt it. This creates a safe harbor: comply federally once, and you’ve complied in California. It’s a voluntary harmonization mechanism, and it signals that California is eager for Washington to step up with national standards.

Whistleblower Protections

SB 53 prohibits frontier developers from preventing or retaliating against “covered employees” who disclose information about catastrophic risks or violations. The definition extends remarkably broadly: full-time employees, contractors, subcontractors, freelancers, unpaid advisors, vendors, and board members involved in assessing, managing, or addressing catastrophic risk.

Companies must establish internal mechanisms that allow anonymous reporting and must provide updates on how concerns are addressed. Protected disclosures can go to the Attorney General, federal authorities, managers, or other employees with investigative authority, provided the employee has reasonable cause to believe the information reveals a specific and substantial danger to public health or safety arising from catastrophic risk, or that the developer has violated SB 53.

Why this matters: Internal employees often identify risks before external observers, but fear of retaliation prevents disclosure. SB 53’s broad definition and anti-retaliation provisions create formal channels for surfacing safety issues early. The inclusion of board members and unpaid advisors ensures virtually anyone with access to privileged risk information receives protection. This addresses concerns raised when former OpenAI employees criticized safety practices but faced potential legal exposure.

Independent Audits: Delayed Accountability

Starting in 2030, large frontier developers must engage independent third-party auditors annually to verify two elements: whether the developer follows its published frontier AI framework, and whether the framework is sufficiently clear that compliance can be objectively determined.

This five-year delay gives the industry time to establish baseline practices before mandatory verification begins. The requirement addresses a fundamental challenge: voluntary commitments remain unverifiable without external oversight. By requiring auditors to assess both compliance and framework clarity, SB 53 creates pressure for concrete, measurable safety practices rather than aspirational principles.

Why this matters: The delayed implementation distinguishes SB 53 from last year’s failed SB 1047, which mandated immediate annual audits. That prescriptive approach contributed to industry opposition and to the eventual veto. The 2030 trigger represents legislative learning: establish transparency first, add verification once practices mature. But it also means five years of self-attestation before independent confirmation begins.

Enforcement and Penalties

The California Attorney General holds exclusive enforcement authority. Civil penalties are capped at $1 million per violation.

The focus is not on punishing catastrophic harms themselves, but on penalizing failures of transparency — for example, not publishing a required framework, misrepresenting risk management, or failing to report incidents.

SB 53’s liability structure reinforces its transparency-first philosophy. Companies that document their processes and report honestly cannot be sued under this statute even if their models cause catastrophic harm. But they do face significant penalties if they fail to live up to their own disclosures.

CalCompute and State Preemption

SB 53 establishes a consortium to develop CalCompute, a state-backed public cloud computing cluster providing free or low-cost resources to startups, researchers, and public-interest projects. The consortium must submit a framework by January 1, 2027, with preference for housing at the University of California. The initiative only proceeds if the state budget allocates funding.

The law preempts local ordinances specifically related to frontier developer management of catastrophic risk adopted after January 1, 2025, preventing conflicting city and county regulations within California while leaving federal preemption unresolved.

Practical Implementation Timeline

SB 53 becomes effective January 1, 2026, creating a tight implementation window for covered developers:

Before January 1, 2026:

  • Determine whether you meet both compute and revenue thresholds

  • Draft and publish frontier AI framework if covered

  • Establish catastrophic risk assessment processes

  • Create incident detection and reporting mechanisms

  • Implement whistleblower protections and anonymous reporting channels

  • Update employment agreements, contractor relationships, and board policies

Ongoing Obligations:

  • Publish transparency reports before releasing new or modified models

  • Submit confidential risk assessments for internally deployed models to OES

  • Report critical safety incidents within required timelines

  • Update frontier AI framework annually

  • Maintain whistleblower investigation and response procedures

Starting 2030:

  • Engage independent auditors annually to verify framework compliance and clarity

Companies approaching these thresholds should monitor their growth trajectories. Revenue nearing $500 million or training compute approaching 10^26 FLOPs warrants compliance planning before formal coverage begins.

Looking Forward

The law includes built-in adaptation mechanisms. The California Department of Technology must annually recommend updated definitions for “frontier model,” “frontier developer,” and “large frontier developer” based on multistakeholder input, technological developments, and international standards. This acknowledges that today’s thresholds might become obsolete as hardware improves and business models change.

OES must publish annual anonymized incident reports starting in 2027, creating the first longitudinal dataset tracking frontier AI safety at the state level. The Attorney General must similarly report on enforcement actions and whistleblower activity. These transparency mechanisms allow assessment of whether SB 53’s accountability measures produce meaningful behavior change.

The 2030 independent audit requirement triggers the law’s most significant escalation. Once external verification begins, procedural compliance shifts from self-attestation to third-party review. However, technological development might outpace regulatory adaptation. If frontier capabilities advance faster than annual definition updates, dangerous systems might emerge outside SB 53’s scope.

Conclusion

SB 53 represents the United States’ first serious attempt to regulate frontier AI development through mandatory transparency requirements backed by civil penalties. Whether this approach succeeds depends on questions that cannot be answered today.

California’s decision to regulate frontier AI safety marks a turning point in technology policy. For decades, Silicon Valley operated with minimal regulatory constraints, relying on voluntary industry commitments. SB 53 converts some voluntary commitments into binding legal obligations while deliberately avoiding the heavy-handed mandates that doomed last year’s attempt.

The law embodies a particular philosophy about AI governance: trust but verify, transparency over prescription, accountability without stifling innovation. Whether that philosophy proves adequate for managing catastrophic risks from increasingly capable AI systems remains the defining policy question of our time.
