California’s state senate gave final approval to a significant AI safety bill early Saturday morning, imposing new transparency requirements on major AI companies.
According to its author, state senator Scott Wiener, SB 53 compels major AI labs to be transparent about their safety practices, creates whistleblower protections for workers at those labs, and establishes a public cloud, called CalCompute, to broaden access to computing resources.
California Governor Gavin Newsom will now either sign or veto the bill. He has not commented publicly on SB 53, but last year he signed more narrowly focused legislation aimed at problems like deepfakes while vetoing a more comprehensive safety bill that was also written by Wiener.
While acknowledging the need to protect the public from real threats posed by the technology, Newsom criticized Wiener’s previous bill for imposing “stringent standards” on large models regardless of whether they were deployed in high-risk environments, involved critical decision-making, or used sensitive data.
Wiener said the new bill was shaped by recommendations from a policy panel of AI experts that Newsom convened after his veto.
Politico also notes that SB 53 was recently amended so that companies developing “frontier” AI models with annual revenue under $500 million need only disclose high-level safety details, while those with annual revenue above $500 million must submit more detailed reports.
Several Silicon Valley companies, venture capital firms, and lobbying groups have opposed the bill. In a recent letter to Newsom, OpenAI did not mention SB 53 specifically, but argued that to avoid “duplication and inconsistencies,” companies should be deemed compliant with statewide safety requirements if they already meet federal or European standards.
And Andreessen Horowitz’s head of AI policy and chief legal officer recently stated that “many of today’s state AI bills — like proposals in California and New York — risk” overstepping constitutional boundaries on how states can regulate interstate commerce.
The co-founders of a16z had previously cited tech regulation as one of the reasons they supported Donald Trump’s reelection. Later, the Trump administration and its supporters demanded that state regulation of AI be prohibited for ten years.
On the other hand, Anthropic has endorsed SB 53.
“We have long said that we would prefer a federal standard,” Anthropic co-founder Jack Clark said in a post. “But without that, this establishes a strong framework for AI governance that is impossible to ignore.”