California lawmakers are debating legislation that would force artificial intelligence firms to test their systems and build in safeguards to ensure they cannot be used to develop chemical weapons or take down the state’s electrical grid—scenarios that experts believe could occur in the future as the technology advances at breakneck pace.
Lawmakers will vote Tuesday on the groundbreaking legislation, which aims to reduce the risks posed by AI. Tech corporations, notably Google and Meta, the parent company of Facebook and Instagram, vehemently oppose it, contending that the legislation should target people who abuse and misuse AI technology rather than its developers.
The bill’s Democratic sponsor, state Sen. Scott Wiener, said the legislation would set adequate safety requirements by averting potential “catastrophic harms” from exceedingly powerful AI models. The requirements would apply only to systems that cost more than $100 million in computing power to train. As of July, no AI model had reached that threshold.
At a recent legislative hearing, Wiener said smaller AI models are not the issue; the bill concerns extraordinarily large and powerful models that, as far as is known, do not yet exist but most likely will in the near future.
Gov. Gavin Newsom, a Democrat, has touted California as an early adopter and regulator of AI, saying the state could soon deploy generative AI tools to reduce traffic, improve road safety, and offer tax guidance. His administration is also weighing new rules to outlaw AI discrimination in hiring. He has declined to comment on the bill, though he has cautioned that excessive regulation could put the state in a “perilous position.”
Some of the most prominent AI experts support the proposal, which would also create a new state body to oversee developers and issue best practices. The state attorney general could sue over violations.
A growing coalition of tech companies argues the requirements would discourage the development of large AI systems and open-source technology.
In a letter to lawmakers, Rob Sherman, Meta’s vice president and deputy chief privacy officer, said the bill would create regulatory fragmentation, jeopardize the open-source models that startups and small businesses rely on, and make the AI ecosystem less safe.
The proposal could also drive companies out of state to avoid the restrictions, according to the state Chamber of Commerce.
Opponents would prefer to wait for the federal government to issue guidance. The bill’s supporters say California cannot afford to wait, citing painful lessons from its failure to move sooner to regulate social media companies.
State lawmakers were also debating another bold proposal Tuesday to combat automated discrimination, which occurs when businesses use AI models to screen applications for rental apartments and job openings.