On September 29, 2025, Governor Gavin Newsom signed Senate Bill (SB) 53, the Transparency in Frontier Artificial Intelligence Act. The law establishes California’s first public reporting requirements for developers of advanced AI systems. It focuses on safety frameworks, incident disclosure, and governance transparency.
SB 53 is narrower than its vetoed predecessor from last year, SB 1047, which would have required annual independent audits, set a lower threshold for which companies fell under the disclosure rules, and carried much higher penalties. Still, the new law marks a clear shift toward public accountability for companies training high-capacity models. With federal legislation still pending, California has taken the lead in defining expectations for public safety and risk management of emerging AI technologies.
Who It Covers
The law applies to companies that train AI systems using more than 10²⁶ FLOPs of compute, a threshold that captures frontier models such as large multimodal or generative systems.
Developers that also report more than $500 million in annual revenue fall under stricter disclosure rules. This tiered structure narrows the focus to major platforms with the capacity to influence global AI behavior, leaving smaller research teams largely outside the statute.
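As a rough illustration of how the two thresholds interact, the sketch below expresses them as a simple tiering check. The constants mirror the figures cited above, but the function name and tier labels are ours, not terms from the statute, and none of this is legal guidance.

```python
# Illustrative only: a rough SB 53 scoping check built from the two thresholds
# described above. Function name and tier labels are hypothetical.
FRONTIER_COMPUTE_THRESHOLD_FLOPS = 1e26      # training-compute threshold
LARGE_DEVELOPER_REVENUE_USD = 500_000_000    # annual revenue threshold


def sb53_tier(training_flops: float, annual_revenue_usd: float) -> str:
    """Return a rough applicability tier for a single developer and model."""
    if training_flops <= FRONTIER_COMPUTE_THRESHOLD_FLOPS:
        return "out_of_scope"            # below the frontier-model compute bar
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "stricter_disclosure"     # large developer: stricter rules apply
    return "baseline_disclosure"         # frontier developer: baseline obligations


if __name__ == "__main__":
    print(sb53_tier(training_flops=3e26, annual_revenue_usd=2_000_000_000))
    # -> stricter_disclosure
```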
Core Requirements
Under SB 53, covered developers must publish an AI safety and governance framework each year describing how they assess and mitigate catastrophic risks, including the methods used to evaluate model safety and the effectiveness of mitigations, the governance processes that ensure compliance, and the cybersecurity practices used to protect unreleased model weights.
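To make those required contents concrete, here is a minimal sketch of how a compliance team might track them internally. The class and field names are hypothetical; the statute requires the disclosures, not any particular schema.

```python
# Hypothetical internal checklist mirroring the framework contents described
# above. Class and field names are illustrative, not defined by SB 53.
from dataclasses import dataclass, field


@dataclass
class FrontierFrameworkReport:
    year: int
    risk_assessment_methods: list[str] = field(default_factory=list)        # how catastrophic risks are identified and evaluated
    mitigation_effectiveness: dict[str, str] = field(default_factory=dict)  # mitigation -> evidence of effectiveness
    governance_processes: list[str] = field(default_factory=list)           # processes that ensure internal compliance
    weight_security_practices: list[str] = field(default_factory=list)      # cybersecurity for unreleased model weights

    def missing_sections(self) -> list[str]:
        """Return required sections that are still empty."""
        required = {
            "risk_assessment_methods": self.risk_assessment_methods,
            "mitigation_effectiveness": self.mitigation_effectiveness,
            "governance_processes": self.governance_processes,
            "weight_security_practices": self.weight_security_practices,
        }
        return [name for name, value in required.items() if not value]


if __name__ == "__main__":
    draft = FrontierFrameworkReport(year=2026, risk_assessment_methods=["red-team evaluations"])
    print(draft.missing_sections())
    # -> ['mitigation_effectiveness', 'governance_processes', 'weight_security_practices']
```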
If a critical safety incident occurs relating to a frontier model, the developer must notify the California Office of Emergency Services (Cal OES) within 15 days of discovering it. If the incident poses an imminent risk of death or serious injury, the developer must report it within 24 hours to the appropriate authority. Starting in 2027, Cal OES and the Attorney General must publish annual anonymized, aggregated summaries of critical safety incidents and whistleblower reports.
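As a simple illustration of those two reporting windows, the sketch below computes a notification deadline from an incident's discovery time. The helper name is ours, and deciding whether an incident actually triggers the 24-hour path is a legal judgment no script can make.

```python
# Hypothetical helper for the two notification windows summarized above:
# 15 days from discovery by default, 24 hours when the incident poses an
# imminent risk of death or serious injury. Not legal advice.
from datetime import datetime, timedelta


def notification_deadline(discovered_at: datetime, imminent_harm: bool) -> datetime:
    """Return the latest time by which the report should be filed."""
    window = timedelta(hours=24) if imminent_harm else timedelta(days=15)
    return discovered_at + window


if __name__ == "__main__":
    discovered = datetime(2026, 3, 1, 9, 0)
    print(notification_deadline(discovered, imminent_harm=False))  # 2026-03-16 09:00:00
    print(notification_deadline(discovered, imminent_harm=True))   # 2026-03-02 09:00:00
```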

What It Leaves Out
The law emphasizes transparency and disclosure rather than direct regulatory control. It does not require government pre-approval before releasing a model, nor does it mandate kill-switch mechanisms or independent audits. Developers retain flexibility to decide which safety standards, frameworks, and testing practices to follow, which encourages innovation but limits consistency across the industry. The final version reflects a political compromise: transparency was prioritized over tighter controls to achieve bipartisan support after the veto of SB 1047 in 2024.
FortiLayer: Bridging Policy and Practice
The spirit of SB 53 is simple: show the public how AI safety is managed, and make those claims verifiable. Achieving that goal requires reliable tools for collecting and publishing evidence.
FortiLayer helps organizations meet these expectations by recording testing activity, risk assessments, and governance evidence within a secure audit trail. It supports alignment with established standards such as the NIST AI RMF and MITRE ATLAS, and automatically generates transparency documentation. These capabilities turn policy obligations into daily operational practice, making it easier to demonstrate that safety and oversight are built into the development process.
Looking Ahead
For developers and integrators, SB 53 marks the beginning of a documentation era in AI governance. The companies that build reliable records of testing, incident response, and oversight today will find it far easier to adapt when national or international standards emerge. Those that wait may face a steep catch-up once transparency moves from state policy to global expectation.