Putting humans at the centre of AI

With tremendous advances in data handling and computing power, and a growing need to manage complexity, real-world applications of artificial intelligence (AI) have surged in recent years. AI has measurably improved companies' financial performance, customer experience, and product quality.

At the same time, the need to ensure the responsible use of AI has also grown. Organizations recognize the need to develop and operate AI systems fairly, without racial, gender, or other bias, and to safeguard security, privacy, and society at large. These concerns drive one of the most important debates in AI today: how to ensure "responsible AI," or RAI.

Private organizations, governments, and international bodies are coming together to measure and analyze the technical and social impact of AI systems and to develop principles and regulations that curb these biases. Yet most companies have not adopted RAI. In this article, we examine the concrete steps companies must take when implementing RAI programs.

AI Biases

As the number of AI-based use cases grows, we are also seeing the biases these systems can exhibit in decision-making. For example, there have been cases where AI-based recruitment systems favored men over women for technical roles such as software development. There have also been cases where AI-based health systems assigned white patients higher risk scores than equally sick Black patients, because the systems used healthcare cost as a proxy for health need and effectively inferred that dark-skinned patients could not afford high-quality medical care.

These risks are even greater for a country like India, which has many diverse cultures and where AI systems can naturally absorb regional or caste-based biases. Companies across the globe have recognised the need to develop and operate AI systems fairly and without bias, while ensuring the security and privacy of society in general.

Principles and Regulations for Responsible AI

In April 2021, the European Union (EU) published a bill to regulate AI. The Artificial Intelligence Act takes a proportionate, risk-based approach, with obligations scaled to the risk level of each use case. It also requires AI systems to be trained on high-quality data. Similar efforts are underway in other countries, including India, the United States, and Singapore.

While regulations will take time and further dialogue, organizations that establish fundamental RAI principles now will gain a competitive advantage. That advantage lies at the intersection of data science, technology, people, and deep business knowledge, and it can only be realized if AI is embedded in processes and ways of working, responsibly and with people at the centre. These principles should include:

• Accountability
• Fairness and equity
• Transparency and explainability
• Safety, security and robustness
• Data and privacy governance
• Social and environmental impact

More importantly, organizations must commit to developing systems that put people at the heart of AI and that preserve the agency and wellbeing of those who develop, deploy, and use them. This core principle unites all the others and guarantees truly responsible AI.
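Principles such as fairness only become enforceable once they are measurable. As an illustration (not drawn from the article), here is a minimal sketch of one common fairness check, the demographic parity gap, which compares positive-decision rates across groups; the function name and example data are assumptions:

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between any
    two groups (0.0 means all groups receive positives at the same rate)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring-model decisions (1 = advance candidate) by gender
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
print(demographic_parity_gap(decisions, groups))  # 0.75 vs 0.25 → 0.5
```

A team might set an internal threshold on this gap and flag any model that exceeds it for review; more nuanced metrics (equalized odds, calibration) exist, but the idea of turning a principle into a tracked number is the same.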

We are at the beginning of the AI revolution. In this constantly evolving era, companies still have a long way to go in establishing the processes and committees that define, implement, and track RAI principles.

RAI Maturity: Current State

BCG collected and analyzed data from 1,000 large organizations to assess their progress in implementing RAI programs, and then categorized the organizations into four different levels of maturity:

1. Lagging (14 percent): Starting to implement RAI with a focus on data and privacy

2. Developing (34 percent): Expanding across remaining RAI dimensions and initiating RAI policies

3. Advanced (31 percent): Improving data and privacy, but lagging behind in human-related activities

4. Leading (21 percent): Performing at a high level across all RAI dimensions

A surprising finding of this exercise was that perception is far from reality: most companies overestimate their RAI maturity (55 percent of all organizations are less advanced than they believe). Another noteworthy finding is that most C-suite executives and board members are concerned about the risks of AI system failures. Risk mitigation turned out to be the second most common reason for adopting RAI; the first was commercial benefit.

Companies should view RAI as an opportunity to strengthen stakeholder relationships, benefiting customers and society at large while realizing commercial gains.

What Next? Action Items for Organisations

As organizations move toward implementing RAI programs, they must define metrics to track those programs' success, and the metrics must span multiple dimensions of the organization. First, introducing an RAI program is nothing less than a cultural transformation: active participation of managers in communicating and championing RAI programs is essential. Second, organizations need to measure adoption coverage, such as the percentage of AI use cases covered by RAI programs. Third, companies should train employees on RAI principles, tools, and implementation. Last but not least, metrics should measure the effectiveness of RAI programs, such as the dollar savings they deliver.
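To make the coverage metric concrete, here is a hypothetical sketch of how it might be computed over an inventory of AI use cases; the schema (a `rai_covered` flag per use case) and the portfolio data are assumptions for illustration:

```python
def rai_coverage(use_cases):
    """Percentage of AI use cases covered by the RAI program.

    `use_cases` is a list of dicts, each with a boolean 'rai_covered'
    flag (a hypothetical schema for illustration)."""
    if not use_cases:
        return 0.0
    covered = sum(uc["rai_covered"] for uc in use_cases)
    return 100.0 * covered / len(use_cases)

# A hypothetical AI portfolio, half of it under RAI governance
portfolio = [
    {"name": "credit scoring",   "rai_covered": True},
    {"name": "resume screening", "rai_covered": True},
    {"name": "chat assistant",   "rai_covered": False},
    {"name": "demand forecast",  "rai_covered": False},
]
print(rai_coverage(portfolio))  # 50.0
```

Tracking this number quarter over quarter, alongside training completion rates and realized savings, gives leadership the multi-dimensional view the paragraph above calls for.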

Conclusion

We are in the early stages of the AI maturity curve, and few companies have mastered the process of defining and tracking RAI practices. Many companies have yet to adopt these practices, and many others need to take concrete action to become leaders in RAI maturity. AI systems, and with them RAI programs, are evolving rapidly; this will surely be an exciting space to watch in the years to come.
