During a Senate committee hearing on Tuesday, Sen. Mitt Romney publicly discussed his concerns regarding artificial intelligence and the danger it poses to the country’s security.
Romney, a Republican from Utah, said in his opening remarks that he is more afraid of artificial intelligence than he is of those who believe it will improve everything in the world. He serves as ranking member of the Emerging Threats and Spending Oversight Subcommittee.
AI and smallpox
Speaking about the danger of deepfakes and the use of AI by nations that are foes of the United States, Romney said last month at a gathering in Utah that he saw AI more as a threat and a risk than as an opportunity.
He described a horrifying scenario that AI could make possible.
According to a briefing he recently received, roughly 100 scientists in the world can currently replicate the smallpox virus, Romney said. With AI, he claimed, a million people worldwide would be able to reproduce the disease, and many of them are truly nasty people.
According to Romney, regulating AI would be very challenging for the government.
The only AI-related idea he has heard so far that might help contain the threat, he said, is to get better at identifying who has the powerful chips needed to run AI and to limit access to those chips to specific nations.
A difficult balance between artificial intelligence and bioweapons
The subcommittee’s chair, Sen. Maggie Hassan, D-New Hampshire, voiced her concerns about the scenario Romney had described. In her opening remarks, she said that Congress isn’t focusing “on so-called ‘catastrophic’ risks posed by AI, such as the ability of AI to help terrorists develop and use unconventional weapons.”
Dario Amodei, CEO of the AI company Anthropic, testified before the Senate in July that designing a bioweapon in detail requires a high level of skill and knowledge that can’t be found on Google or in textbooks.
Amodei noted that some of those steps can be filled in by current AI tools, according to Reuters.
At Tuesday’s hearing, Romney asked the experts present what process would be effective for putting safeguards in place and how much time there is to act.
In terms of regulation, the goal is to make it difficult for malicious conduct to occur without outlawing beneficial activities, according to Gregory C. Allen, director of the Wadhwani Center for AI and Advanced Technologies, which is devoted to policy research.
On biosecurity, and the concern that AI makes it simpler to manufacture bioweapons, Allen responded, “Well, part of it is the nature of existing regulations.”
For instance, he said, because anthrax is listed as a regulated pathogen that can be used in bioweapons, someone attempting to acquire it will be unable to do so.
Allen went on to say that the problem with AI systems is that they might help in the creation of new pathogens that aren’t on any lists.
However, AI will also be needed to help companies that synthesize DNA identify pathogens that weren’t previously known to exist. For regulators, this creates a difficult balance.
Should a new federal agency oversee AI regulation?
In his opening remarks, Romney said the discussions he has taken part in so far highlight the need for collaboration with other countries and potentially some form of global consortium or AI-related accord.
He continued, “I’m not sure how it would operate, where it would be housed, how we’d start that, or even if that’s realistic.”
Additionally, according to Romney, there has been discussion about establishing a distinct department or agency with a team of experts to guide policymakers like himself, monitor industry development, and devise policies.
He asked if it would be a good idea, saying, “Frankly, a lot of, in my instance, 76-year-olds are not going to figure out how to regulate AI because we can hardly use our smartphones.”
Responding to Romney’s questions about putting safeguards in place, Jeff Alstott, a senior information scientist at the Rand Corporation, a policy think tank and public sector consulting firm, argued that most existing government agencies can handle the impact of AI on their own sectors.
For instance, autonomous vehicles fall under the Department of Transportation’s jurisdiction, and the use of AI in healthcare is regulated by the Department of Health and Human Services. But there are a few exceptions, he said.
One, according to Alstott, is that no government agency has a clear mandate for dealing with someone creating or deploying an AI that would kill millions of people. That mandate must therefore be created.
As Romney suggested, this could be accomplished by forming an independent agency. Alternatively, organizations with the necessary authority, such as the Departments of Homeland Security, Commerce and Defense, could work to regulate and mitigate the effects of AI.