Big Tech just got what it wanted from California’s newly signed AI law

Governor Gavin Newsom of California signed the Transparency in Frontier Artificial Intelligence Act into law on Monday. The act requires AI companies to disclose their safety procedures, but it does not require actual safety testing. It mandates that companies with annual revenues of at least $500 million post their safety protocols on their websites and report incidents to state authorities. Its enforcement provisions, however, are weaker than those of the bill Newsom vetoed last year amid stiff lobbying from tech giants.

The new legislation, SB 53, replaces Senator Scott Wiener’s first attempt at regulating AI, SB 1047, which called for safety testing and “kill switches” for AI systems. Rather than defining criteria or mandating independent verification, the new law encourages companies to explain how they incorporate “national standards, international standards, and industry-consensus best practices” into their AI development.

While the law’s actual protective measures are mostly voluntary beyond its basic reporting requirements, Newsom said in a statement that California has demonstrated it can create policies that safeguard communities while ensuring the expanding AI sector continues to prosper.

More than half of worldwide venture capital funding for AI and machine learning firms went to Bay Area companies last year, and California is home to 32 of the top 50 AI companies in the world, according to the state government. Although the newly signed measure is state-level legislation, the implications of California’s AI regulation will be far-reaching, both as legislative precedent and through its influence on companies that build AI systems used globally.

Transparency instead of testing

The new law emphasizes disclosure, whereas the vetoed SB 1047 would have required safety testing and kill switches for AI systems. Companies must notify California’s Office of Emergency Services of what the state calls “potential critical safety incidents” and must protect employees who raise safety concerns as whistleblowers. The legislation defines catastrophic risk narrowly: incidents potentially causing 50 or more deaths or $1 billion in damage through weapons assistance, autonomous criminal activity, or loss of control. If companies fail to meet these reporting obligations, the attorney general can seek civil penalties of up to $1 million per violation.

The shift from required safety testing to voluntary disclosure follows a year of heated lobbying. According to The New York Times, Meta and the venture capital firm Andreessen Horowitz have contributed up to $200 million to two different super PACs that back AI-friendly politicians, while companies have pushed for federal legislation that would replace state AI restrictions.

The original SB 1047 was drafted by AI safety advocates concerned about existential risks from AI, drawing heavily on hypothetical scenarios and science fiction clichés, and it met opposition from AI companies, which deemed its criteria too vague and its potential reporting obligations too onerous. The new law is based on recommendations from AI experts convened by Newsom, including Stanford’s Fei-Fei Li and former California Supreme Court Justice Mariano-Florentino Cuéllar.

Like SB 1047, the new law establishes CalCompute, a Government Operations Agency consortium tasked with creating a framework for public computing clusters. The California Department of Technology will recommend annual updates to the law, and no legislative action is needed to implement those recommendations.

Senator Wiener called the law’s safeguards “commonsense guardrails,” and Anthropic co-founder Jack Clark described them as “practical.” However, the disclosure requirements may offer little protection against future AI harms, because they lack enforcement mechanisms and may simply mirror practices already common at large AI companies.
