On Tuesday, the Biden administration said it was taking the first steps toward developing key guidelines and standards for the safe deployment of generative artificial intelligence, along with methods for testing and securing such systems.
The Commerce Department’s National Institute of Standards and Technology (NIST) said it was seeking public input by February 2 on key testing that is crucial to ensuring the safety of AI systems.
Commerce Secretary Gina Raimondo said the initiative was spurred by President Joe Biden’s October executive order on AI and aims to develop industry standards around AI safety, security, and trust so that the United States can maintain its position as a global leader in the responsible development and use of this rapidly evolving technology.
NIST is developing guidelines for evaluating AI systems, facilitating the creation of standards, and providing testing environments for assessing those systems. It is asking AI companies and the public for input on generative AI risk management and on reducing the risks of AI-generated misinformation.
Generative AI, which can produce text, images, and videos in response to open-ended prompts, has generated enthusiasm in recent months, but it has also raised concerns that the technology could eventually replace human workers, disrupt elections, and lead to disastrous consequences.
Biden’s order directed agencies to set standards for that testing and to address related chemical, biological, radiological, nuclear, and cybersecurity risks.
NIST is working on guidelines for such testing, including best practices for AI risk assessment and management and where “red-teaming” would be most useful.
For years, cybersecurity professionals have employed external red-teaming to detect novel threats. The phrase originates from American Cold War role-playing exercises in which the adversary was referred to as the “red team.”
The first U.S. public assessment “red-teaming” event was held in August during a major cybersecurity conference and was organized by AI Village, SeedAI, and Humane Intelligence.
To better understand the risks these systems pose, thousands of participants tried to see if they “could make the systems produce undesirable outputs or otherwise fail,” according to the White House.
The event also showed how external red-teaming can be an effective tool for identifying novel AI risks.