Avoid biases and inaccuracies in your artificial intelligence-based business decisions with these tips from KPMG.
As more organizations integrate artificial intelligence (AI) and machine learning into daily workflows, they must consider how to govern these algorithms to avoid inaccuracies and bias, according to KPMG’s Controlling AI report, released last week.
Organizations that build and deploy AI technologies are using various tools to gain insights and make decisions that exceed human capabilities, the report noted. While this presents a significant opportunity for businesses, the algorithms involved can be destructive if they produce results that are biased or incorrect. For this reason, many company leaders remain hesitant to allow machines to make important decisions without understanding how and why those decisions were made, or whether they are fair and accurate, according to KPMG.
To make AI a useful and accurate tool, KPMG developed the AI in Control framework to help organizations drive greater confidence and transparency through tested AI governance constructs. The framework addresses the risks involved in using AI, and includes recommendations and best practices for establishing AI governance, performing AI assessments, and integrating continuous AI monitoring, the report noted.
“Transparency from a solid framework of methods and tools is the fuel for trusted AI—and it creates an environment that fosters innovation and flexibility,” the report stated.
Here are six tips for improving AI governance in your organization, according to KPMG:
- Develop AI design criteria and establish controls in an environment that fosters innovation and flexibility.
- Assess the current governance framework and perform a gap analysis to identify opportunities and areas that need to be updated.
- Integrate a risk management framework to identify and prioritize business-critical algorithms and incorporate an agile risk mitigation strategy to address cybersecurity, integrity, fairness, and resiliency considerations during design and operation.
- Design and implement an end-to-end AI governance and operating model across the entire lifecycle: strategy, building, training, evaluating, deploying, operating, and monitoring AI.
- Design a governance framework that supports AI solutions and innovation through guidelines, templates, tooling, and accelerators, enabling teams to deliver quickly yet responsibly.
- Design and set up criteria to maintain continuous control over algorithms without stifling innovation and flexibility.
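The continuous-monitoring tip above could, in practice, take the form of automated fairness checks run against a model's outputs. A minimal sketch in Python, using a demographic-parity gap as the monitored metric (an illustrative choice; the KPMG report does not prescribe any particular metric, and all names and thresholds here are hypothetical):

```python
# Sketch of one continuous-monitoring check: demographic parity.
# The metric and threshold are illustrative, not from the KPMG framework.

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    across groups (0.0 means all groups receive positive outcomes
    at the same rate)."""
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Example: flag a model for human review if the gap drifts past a threshold.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)  # group a: 3/4, group b: 1/4
ALERT_THRESHOLD = 0.2
needs_review = gap > ALERT_THRESHOLD
```

Run on a schedule against recent production predictions, a check like this turns the "continuous control" tip into a concrete alert rather than a manual audit.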
“The power and potential of AI will fully emerge only when the results of algorithms become understandable in clear, straightforward language,” the report stated. “Companies that don’t prioritize AI governance and the control of algorithms will likely jeopardize their overall AI strategy, putting their initiatives and potentially their brand at risk.”