Social interaction as ‘the dark matter of AI’
Two researchers from the University of Montreal today published a preprint of a research paper proposing a path toward “smarter artificial agents” that mimic the human brain. We’ve heard it before, but this time it’s a little different.
The big idea is to give the artificial intelligence agent room to operate: a body, a sense of time, and a social world to interact with.

According to the researchers:

Despite the progress made in social neuroscience and in developmental psychology, only in the last decade have serious efforts started focusing on the neural mechanisms of social interaction, which were seen as the “dark matter” of social neuroscience.

Basically, something beyond algorithms and architecture powers our brains. According to the researchers, this “dark matter” is social interaction. They argue that AI must be capable of subjective awareness in order to develop the connections required for advanced cognition.

Per the paper:

The study of consciousness in artificial intelligence is not a mere pursuit of metaphysical mystery; from an engineering perspective, without understanding subjective awareness, it might not be possible to build artificial agents that intelligently control and deploy their limited processing resources.

Making AI as intelligent as humans isn’t as simple as building larger supercomputers that execute faster algorithms.

Today’s AI systems come nowhere near human cognitive abilities. According to the researchers, agents need three things to close this gap:
  • Biological plausibility
  • Temporal dynamics
  • Social embodiment

The “biological plausibility” aspect involves creating an AI architecture that mimics the human brain. That means building a subconscious layer that is distinct from, but connected to, a dynamic conscious layer.

Since our subconscious mind is closely tied to the control of our body, the researchers appear to be proposing an AI with similar connections between brain and body.
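
The paper is conceptual rather than an implementation, so the following is only a minimal Python sketch of what such a layered, body-coupled design could look like. Everything here, the class names, the reflex rule, the drift correction, is an illustrative assumption of mine, not the authors’ architecture: a fast reflexive layer tightly coupled to a body, monitored and nudged by a slower deliberative layer.

```python
import random

class Body:
    """Toy sensorimotor loop: the agent's only access to the world."""
    def sense(self) -> float:
        return random.uniform(-1.0, 1.0)  # one scalar observation

    def act(self, command: float) -> None:
        print(f"motor command: {command:+.2f}")

class SubconsciousLayer:
    """Fast, reflexive control, tightly coupled to the body."""
    def react(self, observation: float) -> float:
        return -observation  # reflex: push back toward neutral

class ConsciousLayer:
    """Slower layer that monitors and biases the reflexive one."""
    def __init__(self) -> None:
        self.history: list[float] = []

    def deliberate(self, observation: float, reflex: float) -> float:
        self.history.append(observation)
        window = self.history[-10:]
        drift = sum(window) / len(window)  # slow trend the reflex misses
        return reflex - 0.5 * drift  # nudge the reflex, don't override it

body = Body()
fast, slow = SubconsciousLayer(), ConsciousLayer()
for _ in range(5):
    observation = body.sense()
    body.act(slow.deliberate(observation, fast.react(observation)))
```

The asymmetry is the point of the sketch: the reflexive layer never waits on the deliberative one, which only biases it, much as the conscious layer described above is associated with, rather than in command of, the subconscious one.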

According to the researchers:

Specifically, the proposal is that the brain constructs not only a model of the physical body but also a coherent, rich, and descriptive model of attention.

The body schema contains layers of valuable information that help control and predict stable and dynamic properties of the body; in a similar fashion, the attention schema helps control and predict attention.

One cannot understand how the brain controls the body without understanding the body schema, and in a similar way one cannot understand how the brain controls its limited resources without understanding the attention schema.
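
To make the attention-schema idea concrete, here is a toy version, again my own construction rather than anything from the paper: a simplified internal model that predicts which input channel the agent will attend to next and is corrected when it guesses wrong. The channel names, the salience-driven attention rule, and the learning rate are all assumed for illustration.

```python
import random

CHANNELS = ["vision", "audio", "touch"]

class AttentionSchema:
    """Simplified internal model of the agent's own attention:
    it predicts where attention will go and updates when wrong."""
    def __init__(self) -> None:
        self.belief = {c: 1.0 / len(CHANNELS) for c in CHANNELS}

    def predict(self) -> str:
        return max(self.belief, key=self.belief.get)

    def update(self, actual: str, rate: float = 0.3) -> None:
        for c in CHANNELS:
            target = 1.0 if c == actual else 0.0
            self.belief[c] += rate * (target - self.belief[c])

def attend(salience: dict[str, float]) -> str:
    """The 'real' attention process: focus follows the most salient input."""
    return max(salience, key=salience.get)

schema = AttentionSchema()
for step in range(5):
    salience = {c: random.random() for c in CHANNELS}
    focus = attend(salience)
    print(f"step {step}: predicted={schema.predict()}, actual={focus}")
    schema.update(focus)
```

The schema here is deliberately cruder than the attention process it models, which mirrors the theory’s claim: the model of attention only needs to be good enough to help predict and control it.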

Regarding “temporal dynamics,” the researchers suggest that artificial agents must exist in time the way humans do. This mirrors how the brain works: it doesn’t just interpret information, it processes that information in relation to a changing environment.

As the researchers put it:

In nature, complex systems are composed of simple components that self-organize in time, producing ultimately emergent behaviors that depend on the dynamical interactions between the components.

This makes understanding how time affects both an agent and its environment a necessary component of the proposed models.
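
The paper stays at this conceptual level, but a classic way to see “simple components self-organizing in time” is the Kuramoto model of coupled oscillators, which I use here purely as an illustration; it is not from the paper, and the parameters are arbitrary. Each oscillator follows only its own frequency plus a pull toward the others, yet global synchrony emerges from the dynamics alone.

```python
import math
import random

# Kuramoto model: N oscillators with different natural frequencies.
# With enough coupling they spontaneously synchronize -- emergent
# order arising purely from dynamical interaction over time.
N, K, DT = 20, 1.5, 0.05
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases: list[float]) -> float:
    """Coherence r in [0, 1]: 0 = incoherent, 1 = fully synchronized."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

for step in range(400):
    coupling = [
        (K / N) * sum(math.sin(pj - pi) for pj in phases) for pi in phases
    ]
    phases = [p + DT * (w + c) for p, w, c in zip(phases, freqs, coupling)]
    if step % 100 == 0:
        print(f"t={step * DT:5.1f}  coherence={order_parameter(phases):.2f}")
```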

And that brings us to “social embodiment,” which essentially means giving the agent a literal body. The researchers argue the AI would need to be capable of social interaction with humans on a level playing field.

According to the paper:

For instance, in human-robot interaction, a gripper is not limited to its role in the manipulation of objects. Rather, it opens a broad array of movements that can enhance the communicative skills of the robot and, consequently, the quality of its possible interactions.
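
As a rough sketch of that gripper example, assuming some hypothetical motion primitives (the paper specifies none), the same actuator that manipulates objects can double as a communication channel:

```python
from enum import Enum, auto

class Intent(Enum):
    GREET = auto()
    POINT = auto()
    YIELD_TURN = auto()

# A gripper's action space reused for communication: the actuator
# that grasps objects can also wave, point, or open flat to signal
# "your turn" in an interaction. Primitives here are invented.
GESTURES = {
    Intent.GREET: ["open", "rotate_left", "rotate_right", "open"],
    Intent.POINT: ["close", "extend_forward", "hold"],
    Intent.YIELD_TURN: ["open", "retract", "hold"],
}

def communicate(intent: Intent) -> None:
    for primitive in GESTURES[intent]:
        print(f"gripper: {primitive}")

communicate(Intent.GREET)
```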

Ultimately, there is no actual roadmap to human-level AI. The researchers are trying to connect the worlds of cognitive science and computer science with engineering and robotics in unprecedented ways.

But this may be another attempt to wring a miracle out of deep learning technology. Absent new classes of mathematics or algorithms, we may already be as close to human-level AI agents as traditional reinforcement learning can take us.
