Management guru W. Edwards Deming famously said: “In God we trust. All others must bring data.” But how far can we trust the data?
Tellingly, even the businesses adopting AI are having major trust issues with the insights the technology delivers. That’s the foremost takeaway from a recent survey released by ESI ThoughtLab and Cognizant, supported by the input of 1,000 senior executives.
The use of and trust in AI go hand in hand, the survey’s authors find. “The more that companies use AI in decision-making, the more confident they become in these technologies’ ability to deliver.” In the study, 51% of AI leaders trust the choices made by AI most of the time, far more than the 31% of non-leaders who feel the same. It’s notable that barely half of even the most AI-savvy companies have full confidence in AI decisions.
Limited understanding of AI’s potential fuels uncertainty about what AI can and can’t accomplish, the report states. “This, in turn, undermines trust in it.” More than nine in 10 leaders (92%) say AI has improved their confidence in their decisions, but only 48% of others have seen such an improvement. In addition, more than half of leaders trust AI-made decisions most of the time, compared to one-third of their lagging counterparts. “While this gap is impressive, the fact that almost half (47%) of leaders only trust AI decisions some of the time (rather than most of the time or always) indicates that building trust in the use of AI to make superior decisions takes time.”
Lack of trust comes from a variety of places. There may be fear of AI altering or replacing jobs. There may be issues with the quality of the data being fed into AI algorithms. The algorithms themselves may be flawed, biased, or outdated, shaped by the approaches of their developers as well as those developers’ understanding of user requirements. Plus, the interactions of data and algorithms may deliver outcomes that confound even the data scientists who designed them.
The challenge for all companies, the report’s authors advise, is to “promote widespread understanding of and trust in the use of data and AI in decision-making.” This trust can be built by promoting the benefits AI will deliver to organizations, and “putting humans at the center of AI decision-making by using technology to empower, rather than replace, them.”
AI proponents can also overcome trust issues by presenting “significant case studies” and highlighting “specific areas of their company where AI can improve decision-making,” the survey’s authors suggest. “Businesses should first define the decisions they want to make with AI support and the business outcomes they want to achieve, and then ensure they have the relevant data.”
Skeptical C-level executives “may need an extra push to embrace wider AI participation in decision-making. Data scientists can help by ensuring the company’s AI is fed with current data — in the right format, refreshed, and available for informing up-to-date algorithmic models — and that the decisions it produces are aligned with corporate strategies. This will fortify trust while ensuring AI is a crucial tool for all executives, including its first proponents, in their daily jobs.”