OpenAI CTO Mira Murati made it abundantly clear how the company feels about AI regulation: “Yes, ChatGPT and other generative AI tools should be regulated.”
In the interview, she said it is crucial for OpenAI and companies like it to bring this technology into the public consciousness in a controlled and responsible way. Because they are a small group of people, she added, they need far more input from regulators, governments, and everyone else to make the system work.
Asked whether it was too soon for policymakers and regulators to get involved, given concerns that such engagement could slow innovation, she replied, “It’s not too early.” Given the influence these technologies will have, she said, it is crucial for everyone to start participating.
AI audits and laws are on the way
In some ways, Murati’s viewpoint matters little: AI regulation is coming, and coming fast, according to Andrew Burt, managing partner of BNH AI, a boutique law firm founded in 2020 that focuses exclusively on AI and analytics.
Companies need to get ready now, he added, as much of this legislation will call for AI audits.
“We didn’t expect that there would [already] be these new AI rules on the books saying you need audits if you’re using an AI system in this area, or if you’re using AI in general,” he said. New York City’s Automated Employment Decision Tool (AEDT) law and a related bill in the works in New Jersey are just two examples of the many AI regulations and auditing requirements that will soon be on the books in the U.S., he said; most are at the state and municipal level and vary widely.
Burt said that in a rapidly developing sector like AI, audits are a necessity.
Because AI is advancing so quickly, regulators don’t have a fully comprehensive grasp of the technologies, he said.
What can regulators do if they are trying not to stifle innovation? The best solution they can come up with is to have an outside party examine your system: identify the hazards, manage those risks, and document how you handled everything.
Getting ready for AI audits
The main takeaway is that you don’t have to be a fortune teller to foresee that audits will play a significant role in AI regulation and risk management. The question is: how can organizations get ready?
The answer, according to Burt, is getting simpler and simpler. The first step, he believes, is to adopt an AI risk management program: you need a way to handle AI risk consistently and systematically throughout your organization.
Second, he emphasized that businesses should use the new NIST AI Risk Management Framework (RMF), which was recently released.
Building a risk management program within an organization and aligning it with the NIST AI RMF are both relatively straightforward, he said. Because the framework is flexible, he believes it is easy to adopt and operationalize.
Prepare for AI audits with these four essential tasks
He listed the four core functions of the NIST AI RMF; a minimal sketch of how they might be tracked follows this list.
1. Map: assess the threats the AI system could introduce.
2. Measure: evaluate those risks quantitatively or qualitatively so that you can build a testing program.
3. Manage: once testing is complete, handle the risks appropriate to the system by minimizing them or otherwise documenting and justifying them.
4. Govern: ensure you have policies and processes in place that apply beyond any one particular system.
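To make these functions concrete, here is a minimal Python sketch of how an organization might record them for each AI system. The class names, fields, and example values are hypothetical illustrations, not part of the NIST standard or Burt’s practice.

```python
# A minimal sketch of tracking the four NIST AI RMF functions (Map,
# Measure, Manage, Govern) per AI system. All names here are
# hypothetical illustrations, not prescribed by NIST.
from dataclasses import dataclass, field


@dataclass
class RiskAssessment:
    """One identified risk and how it was measured and handled."""
    description: str   # Map: a threat the AI system could introduce
    measurement: str   # Measure: quantitative or qualitative test result
    mitigation: str    # Manage: how the risk was minimized or justified


@dataclass
class AISystemRecord:
    """Per-system record an auditor could review."""
    system_name: str
    owner: str
    risks: list[RiskAssessment] = field(default_factory=list)
    # Govern: organization-wide policies applied to this system
    governance_policies: list[str] = field(default_factory=list)


if __name__ == "__main__":
    record = AISystemRecord(
        system_name="resume-screener-v2",
        owner="HR analytics team",
        governance_policies=["annual-bias-audit", "model-documentation-standard"],
    )
    record.risks.append(
        RiskAssessment(
            description="Disparate impact across protected groups",
            measurement="Selection-rate ratio of 0.72 on holdout data",
            mitigation="Rebalanced training data; documented residual risk",
        )
    )
    print(f"{record.system_name}: {len(record.risks)} risk(s) documented")
```

The point of a structure like this is that every system gets the same record, which is exactly the consistency Burt describes.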
You’re not doing this ad hoc; you’re doing it consistently, at the enterprise level, Burt said. Around this, you can build a highly adaptable AI risk management program. His firm has helped a Fortune 500 company do it, and a small business can do it too.
The RMF, he continued, is straightforward to operationalize, but he cautioned that people should not mistake its flexibility for something too general to put into practice.
It is meant to be helpful, he said, and his firm has already begun to see that: it has received requests from clients who want to adopt the standard.
Companies need to organize their AI audit processes now
If your company is investing in AI, now is the time to get your AI auditing in order.
The simplest approach, he said, is to align with the NIST AI RMF. Unlike cybersecurity, which has established playbooks, large enterprises have no standardized way of training and deploying AI, and therefore no standardized way of assessing and documenting it.
Everything is done differently, which increases risk and potential liability, he said. His advice to clients is that model documentation is the best and simplest place to begin: create a common documentation template and ensure that every AI system is documented against it. As you build this out, you end up with what he calls a report for each model, which can serve as the basis for all of these audits. A minimal sketch of such a template follows.
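As one way to picture the common template Burt describes, here is a minimal Python sketch. The field names, validation logic, and example values are assumptions for illustration, not his actual template.

```python
# A minimal sketch of a shared model documentation template, assuming a
# simple key/value structure. The fields shown are illustrative, not a
# prescribed standard.
MODEL_DOC_TEMPLATE = {
    "model_name": None,
    "intended_use": None,
    "training_data_summary": None,
    "evaluation_metrics": None,           # how the model was measured
    "known_risks_and_mitigations": None,  # what could go wrong, and the response
    "review_date": None,
}


def document_model(**fields: str) -> dict:
    """Fill in the shared template, failing loudly on unknown or missing
    fields, so every model report stays consistent."""
    unknown = set(fields) - set(MODEL_DOC_TEMPLATE)
    if unknown:
        raise ValueError(f"Unknown fields: {unknown}")
    doc = {**MODEL_DOC_TEMPLATE, **fields}
    missing = [key for key, value in doc.items() if value is None]
    if missing:
        raise ValueError(f"Missing required fields: {missing}")
    return doc


if __name__ == "__main__":
    report = document_model(
        model_name="churn-predictor-v1",
        intended_use="Flag at-risk subscribers for retention outreach",
        training_data_summary="24 months of anonymized subscription events",
        evaluation_metrics="AUC 0.81 on holdout; subgroup AUCs within 0.03",
        known_risks_and_mitigations="Possible age bias; monitored quarterly",
        review_date="2023-06-01",
    )
    print(report["model_name"], "documented with", len(report), "fields")
```

Because the template rejects missing fields rather than filling defaults, gaps in a model’s documentation surface before an audit does.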
Concerned about AI? Invest in risk management
Burt contends that enterprises will not benefit fully from AI if they do not consider its hazards.
You can implement an AI system and benefit from it today, but something is going to hit you in the future, he warned. Therefore, if AI is important to you, make investments to manage its risks.
He went on to say that to maximize the return on investment (ROI) from AI efforts, businesses must take precautions to avoid violating privacy rights, opening security holes, or fostering bias, all of which could expose them to legal action, regulatory penalties, and reputational harm.
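As a rough illustration of that kind of precaution, here is a minimal sketch of a pre-deployment gate that refuses to sign off until privacy, security, and bias reviews have all passed. The check names and function are hypothetical, not from Burt or any specific regulation.

```python
# A minimal sketch of a pre-deployment gate, assuming a team records
# pass/fail results for privacy, security, and bias reviews. Check and
# function names are hypothetical illustrations.
REQUIRED_CHECKS = ("privacy_review", "security_review", "bias_review")


def ready_to_deploy(check_results: dict[str, bool]) -> bool:
    """Return True only if every required risk check was run and passed."""
    missing = [check for check in REQUIRED_CHECKS if check not in check_results]
    if missing:
        raise ValueError(f"Checks not yet run: {missing}")
    return all(check_results[check] for check in REQUIRED_CHECKS)


if __name__ == "__main__":
    results = {"privacy_review": True, "security_review": True, "bias_review": False}
    print("Deploy?", ready_to_deploy(results))  # Deploy? False (bias review failed)
```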
Auditing, he said, is just a fancy word for having an outside party look at your system and understand how you assessed it for risks and how you managed them. If you didn’t do one of those things, it will be obvious in the audit, and the result will be largely negative.