Artificial intelligence (AI) plays a role in nearly every aspect of modern life. However, humans and AI systems think differently. Bots and language models are typically fed information, and their responses are based only on that stored information. Humans, by contrast, also draw on common sense, mutual beliefs, and implicit knowledge gleaned from real-life experience when they converse.
These factors shape how a conversation unfolds, but existing AI models do not account for them, so they cannot respond as accurately as humans do.
Sometimes virtual assistants misunderstand what we are trying to say or respond incorrectly to our requests, which can be frustrating. What if we never had to face such problems again?
To achieve this, we must bridge the gap between how humans interact and respond and how artificial intelligence does, which would be a major breakthrough for AI's role in society.
As a result, the machines and bots we interact with, such as Siri and Alexa, would be able to respond more accurately and communicate more naturally.
One example in the paper involves telling a conversation partner that you want to buy flowers for your wife. Humans intuitively recognize that purchasing flowers is an act of love and that roses are a type of flower associated with love.
"By suggesting purchasing roses, our research trained dialogue agents to be more human-like by generating these types of common sense thoughts and speaking more meaningfully," explained Jay Pujara, research lead at ISI and co-author of the study.
“Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation,” led by Pei Zhou, a Ph.D. candidate at ISI, was accepted at ACL 2022, the 60th Annual Meeting of the Association for Computational Linguistics.
The study tested whether using implicit knowledge improves the accuracy of AI-generated responses, comparing self-talk models, which explicitly generate such knowledge before responding, against traditional end-to-end models that do not.
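The self-talk idea can be pictured as a two-stage loop: the model first verbalizes the implicit commonsense behind the conversation ("think"), then conditions its reply on that explicit knowledge ("speak"). The sketch below is purely illustrative; the stub functions and the toy knowledge table are assumptions standing in for the paper's learned models, not the authors' code.

```python
# Illustrative sketch of a two-stage "think before you speak" turn.
# Both stages are stubs: a real system would call a trained language
# model; here a hypothetical lookup table makes the flow concrete.

def generate_knowledge(history):
    """Stage 1 ("think"): make implicit commonsense explicit.

    Stubbed with a toy table keyed on the last utterance; the paper's
    models would generate this knowledge instead of looking it up."""
    commonsense = {
        "I want to buy flowers for my wife.": [
            "Buying flowers is an act of love.",
            "Roses are flowers associated with love.",
        ],
    }
    return commonsense.get(history[-1], [])


def generate_response(history, knowledge):
    """Stage 2 ("speak"): condition the reply on the dialogue history
    plus the explicit knowledge, rather than on the history alone."""
    if any("Roses" in fact for fact in knowledge):
        return "You could get her roses; they are a classic symbol of love."
    return "Tell me more."


def self_talk_turn(history):
    knowledge = generate_knowledge(history)       # think
    return generate_response(history, knowledge)  # then speak


print(self_talk_turn(["I want to buy flowers for my wife."]))
```

The contrast with an end-to-end model is that the knowledge in stage 1 is produced as explicit text the response generator can attend to, rather than remaining buried in the model's parameters.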
The study was inspired by author Pei Zhou’s previous work on enhancing human-machine communication.
A central thread of Zhou's research was investigating the role of common sense in human-machine communication.
"Current models lack common sense knowledge and are incapable of making inferences in the same way that humans do," said Dr. Xiang Ren, ISI research team lead, assistant professor, and co-author of the paper.
"We wanted to observe if it would benefit the models as well as humans if they could mimic the same thought process that humans do," Ren added. They, in fact, do.
The study showed that when artificial intelligence models were given the tools to think more like humans, they were able to generate more common sense of their own. Ren explained that when a model is explicitly told what kind of commonsense knowledge is useful for the current conversation, it gives more engaging and accurate responses. Some may assume that the models already possess common sense, but these findings show that teaching them commonsense knowledge results in more human-like and sensible responses.
Zhou also discussed findings beyond response quality, such as how the approach improved the abilities of the self-talk models themselves. After being given knowledge generated from a commonsense database, the models began to develop their own thought process: with only implicit information provided at the source, they were able to generate new commonsense knowledge.
The more precisely artificial intelligence can be trained toward human characteristics and thought processes, the more useful it becomes as a tool for technological advancement, enabling human-like interactions in programs where we converse with an AI instead of a real person.