Common sense is not common, especially when it comes to artificial intelligence. Computers struggle to make subtle distinctions that people take for granted. This is why websites require you to authenticate your humanity before signing up or making a purchase – most bots cannot tell the difference between a zebra crossing and a zebra.
At the USC AI Futures Symposium on Common Sense AI earlier this month, more than 20 USC researchers shared the technical reasons why this is happening and the research avenues that could fix it. The payoff could range from enhanced social services that better serve society to personal assistants that better anticipate our context and needs.
Artificial intelligence systems can now converse with us to order a book, find a song or vacuum our floors, said Yannis Yortsos, dean of the USC Viterbi School of Engineering. But they don’t have the common sense to know that we read books to learn and for fun, that music relaxes us, and that tidy homes are more enjoyable. Addressing the common sense challenge for AI, he said, requires mindsets that take human interaction into account as we lay the groundwork for AI that is responsible and ethical and has a meaningful impact on society.
AI still makes ‘silly mistakes’
Today’s AI systems cannot make the assumptions about everyday situations and information that people make without thinking. For example, your phone’s camera reads the visual information in the frame and uses AI to focus on a specific subject. However, it can struggle to distinguish a white shirt from a white wall: the AI fails because it sees only the color, not the many other differences between a shirt and a wall.
To overcome this challenge, researchers draw on common sense knowledge sources, such as Wikidata, to produce an “informed” AI response. Filip Ilievski, a researcher at USC’s Information Sciences Institute (ISI) and organizer of the symposium, has developed an AI-based program that uses multiple sources of common sense knowledge to complete a human-initiated story. For example, a user could enter, “I’m home and want to warm up, but there is no blanket,” and the AI would reply, “Use a jacket.”
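The article describes that system only at a high level, but the core idea of pooling candidate answers from several common sense knowledge sources can be sketched in a few lines. The data and function below are purely illustrative stand-ins, not Ilievski's actual program or real Wikidata queries:

```python
# Toy sketch (hypothetical data and logic, not the actual ISI system):
# merge facts from multiple "knowledge sources" to suggest a plausible
# completion for a user's statement.

# Each source maps a need to objects that can satisfy it.
WIKIDATA_LIKE = {"warm up": ["blanket", "heater"]}
CROWD_SOURCED = {"warm up": ["jacket", "sweater", "hot tea"]}

def suggest(need, unavailable):
    """Collect candidate objects from all sources, skipping unavailable ones."""
    candidates = []
    for source in (WIKIDATA_LIKE, CROWD_SOURCED):
        for item in source.get(need, []):
            if item not in unavailable and item not in candidates:
                candidates.append(item)
    return candidates

# "I'm home and want to warm up, but there is no blanket."
print(suggest("warm up", unavailable={"blanket"}))
# -> ['heater', 'jacket', 'sweater', 'hot tea']
```

The point of consulting several sources is robustness: when one source lacks an answer (or the obvious answer is ruled out, like the missing blanket), another source can still supply a plausible alternative.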
“We continue to find that a lack of common sense is one of the main barriers preventing us from deploying AI capabilities,” he said. “On the one hand, we have AI that can do very impressive things, but at the same time we have AI that makes silly mistakes. Currently, we tend to build one AI agent per task. We want to identify sources of knowledge that enable AI agents to perform well on many tasks.”
Expert input, crowdsourcing, and mining of large amounts of text are some of the approaches researchers use to support common sense reasoning. These different sources of knowledge are particularly useful when faced with incomplete information. By building everyday assumptions into their logic, AI agents can make plausible inferences in both familiar and unexpected situations.
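Reasoning plausibly from incomplete information often takes the form of default reasoning: assume what is typically true unless a known exception applies. A minimal sketch, with illustrative facts that are not drawn from any actual knowledge base:

```python
# Minimal sketch of default (plausible) reasoning under incomplete
# information. DEFAULTS holds what is typically true of a kind;
# EXCEPTIONS lists known cases where the default fails.
# All facts here are illustrative examples.

DEFAULTS = {"bird": "can fly"}
EXCEPTIONS = {("penguin", "can fly"), ("ostrich", "can fly")}

def plausible(entity, kind):
    """Return the default property for `kind` unless `entity` is a known exception."""
    prop = DEFAULTS.get(kind)
    if prop is None or (entity, prop) in EXCEPTIONS:
        return None  # no plausible conclusion
    return prop

print(plausible("sparrow", "bird"))  # -> can fly
print(plausible("penguin", "bird"))  # -> None
```

The conclusion for the sparrow is only plausible, not certain, which is exactly the trade-off the symposium speakers describe: common sense lets an agent act sensibly without complete information, at the cost of occasionally being wrong.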
“In general, we see common sense as something that we expect another adult to know, or the things that allow us to interact with and interpret the world around us,” said Marjorie Freedman, an ISI research team leader. AI needs common sense to accurately interpret the world and serve in a useful collaborative capacity. Depending on what aspect of common sense you want to capture and how you plan to use that information, AI could automatically draw on collaborative sources to improve those insights.
Creativity driving innovation in AI robots and agents
With a comprehensive knowledge base, AI can develop new ideas and approaches through computational reasoning and creativity. Mayank Kejriwal, research assistant professor of industrial and systems engineering and a research lead at ISI, studies which properties a computational model needs to produce ideas effectively.
“This is a very exciting time for AI creativity,” said Kejriwal. “In one recent project, an AI system suggested an approach that seemed unintuitive at first, but it ultimately helped mathematicians prove very complicated theorems. And despite these advances, there are still very simple things humans can do that AI struggles with, such as determining whether two things are the same or different. There is still a gap between what we can do intuitively and what AI can do intuitively.”
Reading emotions is another challenge for artificial intelligence. Jonathan Gratch, professor of computer science and psychology and director of virtual human research at the USC Institute for Creative Technologies, created a model that adds situational awareness to the facial recognition techniques AI currently uses to recognize emotion. With that context, AI can begin to understand people’s goals and model an appropriate reaction to a particular emotion.
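The idea of combining a detected expression with situational context can be illustrated with a toy rule set. The names and rules below are hypothetical, chosen only to show why the same facial expression can map to different emotions depending on whether the situation helps or hinders the person's goal; they are not Gratch's actual model:

```python
# Hypothetical sketch: interpret a detected facial expression in light
# of situational context (does the situation favor the person's goal?).
# Rules are illustrative only.

def interpret(expression, situation_favors_goal):
    """Combine a detected expression with situational context to infer emotion."""
    if expression == "smile":
        # A smile in a losing situation is often social masking, not joy.
        return "joy" if situation_favors_goal else "polite masking"
    if expression == "frown":
        return "mild displeasure" if situation_favors_goal else "frustration"
    return "unknown"

# A smile after losing a negotiation likely isn't joy.
print(interpret("smile", situation_favors_goal=False))  # -> polite masking
```

Expression recognition alone would report "smile" and stop there; adding even this crude notion of goals changes the downstream interpretation, which is the gap the model aims to close.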
Until recently, AI tended not to deal with emotions, but they are unavoidable when dealing with human behavior, Gratch said. It would be great if machines could recognize and understand what people or groups are feeling, and then also predict and model the downstream consequences of those feelings. The difficult part is that much of what determines a person’s emotional response is hidden.
Understanding human motivation remains a major challenge for common sense AI, and USC’s work integrating AI research with social sciences such as cognitive science and psychology is leading to better approaches, according to Yolanda Gil, research professor of computer science and senior director of strategic initiatives in artificial intelligence and data science at ISI. This crucial area of research will drive innovation in AI, and USC researchers will lead the way, she said.
“USC and ISI are doing extraordinary research on artificial intelligence,” said Bart Selman, president of the Association for the Advancement of Artificial Intelligence and professor of computer science at Cornell University. Their current research is at the heart of AI’s open challenges in common sense, knowledge and reasoning.