On Wednesday, four influential senators published a roadmap for artificial intelligence regulation that calls for at least $32 billion to be allocated annually for non-defense AI innovation.
The long-awaited plan comes from the members of the AI Working Group, Senate Majority Leader Chuck Schumer (D-NY), Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN), following months of AI Insight Forums held to educate their colleagues about the technology. The forums drew leaders in academia, labor, and civil rights, as well as industry figures such as OpenAI CEO Sam Altman and Google CEO Sundar Pichai.
The roadmap is not intended as a set of finished bills that could pass quickly. Instead, the 20-page report outlines the main areas on which the relevant Senate committees should concentrate their AI efforts.
These include protecting private information and copyrighted content from AI systems; minimizing the energy costs of AI; training workers in AI; and handling AI-generated content in sensitive domains, such as child sexual abuse material (CSAM) and election content. The working group notes that the report is not an exhaustive list of priorities.
Schumer stated that the roadmap was not meant to establish a large, comprehensive statute covering all of AI, but rather to assist Senate committees as they take the lead in developing regulations.
Some lawmakers have not waited for the plan, moving ahead with their own AI-related proposals.
On Wednesday, for instance, the Senate Rules Committee advanced several AI proposals pertaining to elections. It is unclear, however, how quickly such proposals will become law, especially in an election year, given the wide range of industries AI touches and the divergent opinions on the proper degree and types of regulation.
The working group encourages other lawmakers to collaborate with the Senate Appropriations Committee to increase AI spending to the levels advocated by the National Security Commission on Artificial Intelligence (NSCAI). They argue that the funds should be used to support government-wide AI and semiconductor research and development, as well as the National Institute of Standards and Technology (NIST) testing infrastructure.
The plan does not explicitly state that all future AI systems must undergo safety evaluation before being sold to the public, but rather requests that a framework be developed to determine when such an evaluation is required.
This differs from some proposed bills, which would demand prompt safety assessments for all current and future AI models. The senators also stopped short of calling for a revision of existing copyright law, despite the ongoing legal battles between AI companies and copyright holders. Instead, the roadmap invites lawmakers to evaluate whether new laws on transparency, content provenance, likeness protection, and copyright are required.
The policy roadmap is an encouraging beginning, according to Adobe general counsel and chief trust officer Dana Rao, who attended the AI Insight Forums. He added in a statement that it will be crucial for governments to provide protections across the wider creative ecosystem, including for visual artists concerned about style.
Other groups, however, are less complimentary about Schumer’s vision, with several voicing worries about the plan’s anticipated cost to taxpayers.
Following the report’s release, Amba Kak, co-executive director of AI Now, a policy research organization backed by Mozilla, Omidyar Network, and Open Society Foundations, said in a statement that the report’s “long list of proposals are no substitute for enforceable law.” Kak also criticized the proposal’s high taxpayer cost, saying it “risks further consolidating power back in AI infrastructure providers and replicating industry incentives — we’ll be looking for assurances to prevent this from taking place.”
In a statement, Rashad Robinson, president of the civil rights organization Color of Change, said the report makes it abundantly clear that Schumer is not taking artificial intelligence seriously, which is disappointing given his record of integrity, problem-solving, and leadership on the subject. He added that the report sets a risky precedent for how technology will develop in the future, and that lawmakers must recognize the dangerous, unchecked spread of bias that AI poses, strengthen existing safeguards, and act swiftly to prevent the technology from being used to manipulate, harm, and disenfranchise Black communities.
In a statement, Divyansh Kaushik, vice president of the national security advisory firm Beacon Global Strategies, said that ensuring the hefty price tag is actually appropriated to the agencies and programs meant to use the funds is crucial to the success of any legislative effort. There cannot be another CHIPS and Science Act, Kaushik said, in which large sums of money are authorized but never appropriated.