The CEO of an artificial intelligence "safety and research" company warned the Senate on Tuesday that AI may be only a few years away from enabling terrorists to carry out attacks with biological weapons.
Anthropic's CEO, Dario Amodei, told a Senate Judiciary subcommittee that his company regards AI's potential to help people create and deliver such weapons as a medium-term risk.
Over the past six months, he said, Anthropic has worked with world-class biosecurity experts on a thorough investigation of AI's potential to contribute to the misuse of biology.
He continued, "One of the things that currently keeps us safe from attacks is that certain steps in the production of bioweapons require knowledge that cannot be found on Google or in textbooks and demands a high level of specialized expertise."
Today's AI tools can already fill in "some of these steps," he said, though they do so incompletely and unreliably. Even so, he argued, current AI is displaying these "nascent signs of danger," and his company expects systems to come much closer to filling in the remaining steps within a few years.
A straightforward extrapolation from today's systems to those he anticipates in two to three years, he said, suggests a substantial risk that AI will be able to fill in all of the missing steps, enabling many more actors to carry out large-scale biological attacks. He called this a serious threat to the national security of the United States.
According to Amodei, Anthropic briefed government officials on this assessment, and they found the results unsettling.
Although his company supports the safe development of AI, he warned, its efforts alone will not make the serious risk go away. Private action is insufficient, he argued; this risk, and many others like it, demands a systemic policy response.
At the Senate hearing, which was intended to establish the "principles" of AI regulation, Amodei offered the government three recommendations. First, he suggested that the government restrict exports of the equipment used to build AI systems.
To maintain its advantage and keep these technologies out of the hands of bad actors, the U.S. must secure the AI supply chain, he said.
Second, he recommended that every new, powerful AI model undergo rigorous testing and auditing, and that no model be released to the public until it has passed those tests. Third, he argued that more work must be done to test the mechanisms used to audit AI products themselves.
The danger, he explained, is that it is currently difficult to identify all the undesirable actions an AI system is capable of without first deploying it widely to users.
Anthropic was one of seven companies that last week endorsed a set of White House-promoted standards aimed at creating safe, secure, and reliable AI technologies.
Amodei, along with executives from Amazon, Google, Inflection, Meta, Microsoft, and OpenAI, was present when President Biden unveiled the initiative at the White House.