While it ultimately comes down to people to decide which recommendation they prefer, in reality, both have their strong points
The usage of recommendation engines is growing in consumer services. Be it Spotify, Netflix, or Amazon, brands are leveraging artificial intelligence-based recommendation systems to provide more personalized services and enhance the user experience. Apps like Google Maps and Uber also rely on AI to provide accurate directions and estimated travel times, respectively. So, it is obvious that many of us depend on AI to make numerous daily-life decisions. However, will artificial intelligence-based recommendations pass the litmus test when pitted against human recommendations, given the concerns surrounding them?
A person or machine makes a recommendation after learning the user's preferences. Essentially, after filtering through varied information, suggestions are made by tailoring them to users' interests, preferences, or behavioral history with an item.
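As a rough illustration of that filtering step, the sketch below scores catalogue items against a user's history with cosine similarity and recommends the closest match. The item names, feature vectors, and viewing history are made-up placeholders, not the logic of any particular service.

```python
# Minimal content-based filtering sketch (illustrative only).
# Item features and the user's history are hypothetical placeholders.
import numpy as np

# Each item is described by made-up genre scores: [action, comedy, documentary]
items = {
    "Movie A": np.array([0.9, 0.1, 0.0]),
    "Movie B": np.array([0.1, 0.8, 0.1]),
    "Movie C": np.array([0.0, 0.2, 0.9]),
}

# Build a simple taste profile by averaging the items the user already liked.
history = ["Movie A"]
profile = np.mean([items[name] for name in history], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank unseen items by similarity to the taste profile and suggest the best match.
candidates = {name: cosine(profile, vec)
              for name, vec in items.items() if name not in history}
print(max(candidates, key=candidates.get))  # -> "Movie B", the closest remaining item
```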
There are several instances in which artificial intelligence has been alleged to be biased when making a suggestion or recommendation. For instance, a few years ago, Reuters reported that the e-commerce giant Amazon.com Inc's secret AI recruiting tool showed bias against women. The software penalized applicants who attended all-women's colleges, as well as any resumes that contained the word "women's". Bias has also been observed in facial recognition algorithms that tend to misidentify people on the basis of their gender or race. These biases may have existed because of bias in the training dataset or faulty programming. This also brings us to another major concern regarding artificial intelligence, i.e. the black box problem.
Though it is argued that AI-based decisions tend to be logical and adhere to a specified set of rules, we often cannot see how those decisions are made. Fortunately, to counter this, researchers have proposed Explainable AI (XAI), fine-tuning, unmasking AI, and more. Addressing the black box issue is important to understand the cause of a mistake, bias, or decision made by artificial intelligence models, boost transparency, and tweak the models later. Recently, researchers at Duke University proposed a method that targets the reasoning process behind AI predictions and recommendations.
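To give a sense of how such inspection can work in practice, here is a generic sketch using permutation importance from scikit-learn: shuffle each input feature and see how much the model's accuracy drops, which flags the features a "black box" is actually relying on. This is not the Duke University method described above, and the data here is synthetic stand-in data.

```python
# Illustrative model-inspection sketch using permutation importance.
# Synthetic data stands in for, e.g., applicant or user features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# larger drops indicate features the model leans on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```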
AI vs Humans
When pitted against recommendations by humans, AI does not necessarily always come out on top. It is true that data-driven recommendations are generally preferred; however, the willingness to accept human versus artificial intelligence-based recommendations differs with the situation and use case. It all stems from the 'word-of-machine effect.'
Recently, an article titled "When Do We Trust AI's Recommendations More Than People's?" by University of Virginia Darden School of Business Professor Luca Cian and Boston University Questrom School of Business Professor Chiara Longoni was published in the Harvard Business Review. In the article, they explained this phenomenon as a widespread belief that AI systems are more competent than humans at dispensing advice when utilitarian qualities are desired and less competent when hedonic qualities are desired.
The article's authors clarify that this doesn't imply that artificial intelligence is less competent than humans at assessing and evaluating hedonic attributes, nor that humans are less competent in the case of utilitarian attributes. As per their experimental results, if someone is focused on utilitarian and functional qualities, then, from a marketer's perspective, the word of a machine is more effective than the word of human recommenders. For someone focused on experiential and sensory qualities, human recommenders are more effective.
One of the 10 studies by Cian and Longoni involved recruiting 144 participants from the University of Virginia campus and informing them that they would be testing chocolate-cake recipes for a local bakery. During the experiment, the participants were offered two options: one cake created with ingredients selected by an AI chocolatier and one created with ingredients selected by a human chocolatier. Both cakes were identical in appearance and ingredients. Participants were asked to eat the cakes and rate them on the basis of two experiential/sensory attributes (indulgent taste and aroma, pleasantness to the senses) and two utilitarian/functional attributes (beneficial chemical properties and healthiness). It was observed that participants found the AI-recommended cake less tasty than the human-recommended one, yet healthier.
Longoni and Cian also assert that consumers will embrace artificial intelligence recommendations if they believe a human was part of the recommendation process.
Replacing Human Intelligence?
The human brain has an edge over AI for its cognitive skills. It acquires knowledge and improves reasoning by learning from experience, abstract concepts, several cognitive processes, and its ability to manipulate its environment. Artificial intelligence models, in contrast, try to mimic human intelligence by following programmed rules and through continuous self-learning (machine learning). Regardless of their learning method, both are capable of giving good and bad recommendations. Moreover, people nowadays are slowly starting to trust AI. Independent surveys have found that people may opt for AI for greater flexibility and control. Respondents believe that trust in the relationship between humans and AI will likely improve in the future, provided AI proves itself safe and transparent.
As recommendations become an effective marketing tool, developers and marketers have to be careful when leveraging artificial intelligence algorithms. They can program the AI system to identify what the customer is actually looking for before making any suggestion. As AI becomes more tangible than ever, its ability to offer recommendations that are unique and personal in nature will increase too. It is true that, currently, AI lacks quick thinking, creativity, and other attributes associated with human intelligence, but with round-the-clock innovation, who knows what AI will be capable of in the future. At the same time, humans and AI can live in a symbiotic or collaborative relationship and benefit from each other's strengths.