Reposted from A Fresh Take

Compliance in a Global AI Market: Examining the Overlaps Between California’s SB 53 and the EU AI Act

On September 29, Governor Gavin Newsom signed into law Senate Bill 53, the “Transparency in Frontier Artificial Intelligence Act” (“TFAIA”), marking a notable development in U.S. regulatory efforts around AI. The legislation focuses on increasing transparency and governance requirements pertaining to “frontier models” built by “large frontier developers,” entities viewed as capable of producing AI systems that may pose catastrophic risks.

For companies already developing compliance programs under the EU AI Act, the new California law warrants close attention. While the EU AI Act applies broadly to providers of high-risk systems and general-purpose AI models, California has chosen a narrower path, limiting obligations to models above a high compute threshold and developers meeting revenue criteria. Still, both frameworks converge in requiring formalized risk assessments, public disclosures, and incident reporting.

We have written a briefing analyzing California’s new AI law and how it aligns with, and differs from, the EU AI Act. The full piece is available here. What follows is a summary of the key points.

Covered Entities and Models

The TFAIA defines a “frontier model” as one trained using computing power greater than 10^26 operations, and a “large frontier developer” as a company with annual revenues exceeding $500 million. This dual threshold for the various TFAIA requirements reflects California’s intent to direct obligations at a small number of powerful developers, in contrast with the EU AI Act’s broader reach. When triggered, the TFAIA imposes a number of obligations, described briefly below and in more detail here.

Core Obligations

Large frontier developers must:

  • publish a Frontier AI Framework setting out their standards, governance practices, and risk mitigation strategies;
  • issue transparency reports when deploying new or substantially updated models, including summaries of catastrophic risk assessments;
  • update those reports when they make substantial modifications to existing models;
  • report certain defined critical safety incidents; and
  • develop certain anonymized whistleblower processes.

In addition, developers must report “critical safety incidents” to the California Office of Emergency Services within 15 days, or within 24 hours if an imminent risk of serious harm is discovered. These requirements mirror, but do not replicate, the EU AI Act’s incident reporting obligations, under which providers of high-risk systems must notify regulators of “serious incidents” within a similar timeframe.

Penalties and Enforcement

Failure to meet these obligations can result in civil penalties of up to $1 million per violation, enforceable solely by the California Attorney General. The TFAIA also includes whistleblower protections for employees engaged in AI safety functions, requiring companies to maintain internal processes for anonymous reporting.

Key Takeaway

The TFAIA represents California’s most direct move yet to regulate advanced AI models. While narrower in scope than the EU AI Act, it imposes overlapping requirements in areas such as transparency, governance, and incident reporting. Companies subject to both regimes will need to develop compliance strategies that address distinctions between the two while recognizing their shared emphasis on safety and accountability.

Read the full briefing for a detailed comparison of the TFAIA and EU AI Act.


Tags

ai, artificialintelligence, cybersecurity, data protection, compliance, regulatory framework, corporate governance, us, europe