Why is achieving AGI not simple?

It is not surprising that AI-enabled devices are rapidly infiltrating our lives, or that we fear human-like machines, equipped with human-like brains, will one day take over. Though those fears feel genuine and valid, at least to ordinary people, they are unlikely to materialize any time soon. AGI (Artificial General Intelligence) is still under development, and its methods of operation are expected to be far superior to those of today's AI. The open question is whether it is even possible to train an AGI to have human consciousness, an abstract and complex mechanism that scientists themselves have not yet deciphered.

What is Artificial General Intelligence?

Anyone who has used intelligent voice assistants such as Alexa or Cortana knows what artificial intelligence applications feel like. Neural networks are a significant advancement in artificial intelligence: they enable machine learning, paving the way for memory-based tasks and allowing systems to improve each time a task is performed. Though capable of self-correction, this type of AI is considered weak AI because its capabilities are limited to what it was trained for. With AGI, machines would not only behave like humans but also think like them. To build AGI models, researchers use cognitive modeling, which approximates one or more cognitive processes in humans or animals for the purposes of comprehension and prediction.
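To make the "improves each time the task is performed" idea concrete, here is a minimal, illustrative sketch (not from the article) of the learning loop behind weak AI: a tiny model fit by gradient descent, whose error shrinks with every repetition of the task. The data and parameters are invented for illustration.

```python
# Illustrative sketch: a tiny one-parameter-per-feature model trained by
# gradient descent. Each loop iteration is one more "performance" of the
# task, and the loss (error) decreases as the system self-corrects.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 examples, 3 features
true_w = np.array([1.5, -2.0, 0.5])                # hidden target weights
y = X @ true_w + rng.normal(scale=0.1, size=100)   # noisy task outputs

w = np.zeros(3)        # the model's learnable weights, initially untrained
lr = 0.1
for step in range(50):
    pred = X @ w
    err = pred - y
    loss = np.mean(err ** 2)
    grad = 2 * X.T @ err / len(y)   # gradient of mean squared error
    w -= lr * grad                  # self-correction step
    if step % 10 == 0:
        print(f"repetition {step:2d}: loss = {loss:.4f}")
```

The point of the sketch is the shape of the loop: the system only ever gets better at the one task it is repeatedly scored on, which is exactly why this kind of AI is called weak or narrow.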

Why is it so difficult to construct an AGI model?

The role of consciousness in developing strong AI, or AGI, is highly debatable because human consciousness spans many dimensions: abstract reasoning, composition of concepts, creativity, empathy, and perception, to name a few. Though developing AGI appears impossible given the breadth of variability in the human mind, Igor Aleksander, emeritus professor of Neural Systems Engineering in the Department of Electrical and Electronic Engineering at Imperial College London, believes that the principles for creating a conscious machine already exist, and that applying them in further research is critical to the development of AGI. Even so, such machine consciousness may take decades to become a reality.

AGI is still far from broad generalization:

The human mind can think beyond the circumstances in which it has been trained; it can make decisions when confronted with situations it has never encountered. MIT's well-known 'ObjectNet' test set illustrates the gap. When object-recognition models trained on standard datasets are evaluated on ObjectNet's photographs of familiar objects in unusual poses and cluttered settings, the accuracy of state-of-the-art models drops by 40 to 45 percentage points. This implies that the models can only generalize within their training distribution, which explains their lack of imagination.
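The effect ObjectNet exposed can be reproduced in miniature. Below is a hedged, toy illustration (synthetic data, not ObjectNet itself): a classifier is trained on one "environment," then tested both on data from the same distribution and on data whose distribution has shifted, mimicking new viewpoints and backgrounds.

```python
# Toy demonstration of distribution shift: accuracy is high on data drawn
# from the training distribution and drops sharply on shifted data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_data(n, shift=0.0, scale=1.0):
    # Two classes separated along the first feature; `shift` and `scale`
    # stand in for changed poses/backgrounds at test time.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 2)) * scale
    X[:, 0] += (2 * y - 1) + shift
    return X, y

X_train, y_train = make_data(1000)
clf = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = make_data(1000)                         # same distribution
X_ood, y_ood = make_data(1000, shift=1.5, scale=3.0)   # shifted distribution

print("in-distribution accuracy:     ", clf.score(X_iid, y_iid))
print("shifted-distribution accuracy:", clf.score(X_ood, y_ood))
```

The model has not "failed to learn"; it has learned the training distribution perfectly well. It simply has no mechanism for reasoning about inputs outside that distribution, which is the generalization gap the paragraph above describes.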

The ability to adapt and improvise is limited:

Humans are natural meta-learners: we learn how to learn. Certain traits inherited from our ancestors are ingrained in our neurons through our genes, which is nature's way of pre-training the human mind. As a result, when presented with an entirely unexpected circumstance, our performance improves with experience, across many tasks, without any external supervision. Meta-learning algorithms, by contrast, are limited by their inability to act outside the boundaries of their datasets, and this feeds into the problem of compositionality: combining familiar concepts into novel ones. Humans excel at this, which makes them excellent communicators capable of context-specific reasoning. Achieving the same quality with algorithms will be difficult, because they respond reliably only to high-probability situations, leaving them inadequate for improbable ones.
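For readers unfamiliar with "learning to learn," here is a minimal sketch of a meta-learning loop in the spirit of the Reptile algorithm (Nichol et al., 2018). The task family, learning rates, and function names are illustrative assumptions, not a reference implementation; the point is the two nested loops, an inner one that learns a task and an outer one that learns a good starting point for learning.

```python
# Minimal Reptile-style meta-learning sketch (illustrative assumptions):
# the outer loop nudges a shared initialization toward weights that adapt
# quickly to each sampled task.
import numpy as np

rng = np.random.default_rng(2)

def sample_task():
    # Each task is a 1-D linear regression y = a*x with a random slope,
    # standing in for the varied problems a meta-learner must face.
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x

def inner_train(w, x, y, lr=0.05, steps=5):
    # Ordinary gradient descent on one task ("learning").
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

meta_w = 0.0          # the shared initialization ("learning to learn")
meta_lr = 0.1
for _ in range(1000):
    x, y = sample_task()
    adapted = inner_train(meta_w, x, y)
    meta_w += meta_lr * (adapted - meta_w)   # Reptile meta-update

# After meta-training, the initialization adapts to a brand-new task
# in just a few gradient steps.
x_new, y_new = sample_task()
w_fast = inner_train(meta_w, x_new, y_new)
print("meta-learned initialization:", round(meta_w, 3))
print("error after 5 adaptation steps:",
      round(float(np.mean((w_fast * x_new - y_new) ** 2)), 4))
```

Notice the limitation the paragraph describes: the meta-learner only ever gets better at tasks drawn from the same family it was meta-trained on. A task outside that family, the analogue of a truly improbable situation, gains nothing from the learned initialization.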

AGI may be possible, but it will not be similar to human intelligence:

According to current research, there is no reason to believe that AGI can replace the human mind. At best, machines can reach peak performance in situational awareness, because a path from today's AI to human consciousness has yet to be found.
