Uncertainty over AI is a long-standing issue
Artificial Intelligence (AI) is the future. But can we call the future dumb? Some possibilities could drive us to that conclusion. AI is designed to make human jobs easier, and the technology is trying its best to retain that position.
However, where it falls short is in the basic human activities that we find very easy. Everyone has seen AI beat world champions at the board game Go, the quiz game Jeopardy, the card game Poker and the video game Dota 2. AI has come a long way to reach what it is today.
When we look at the history of AI, Charles Babbage began creating a prototype machine he called the ‘Analytical Engine’ in 1837. He thought it would only be used for calculations and algorithms. His friend Ada Lovelace created the first-ever computer program, but she did not foresee a future in which AI rules the world either. The first physical robot, ELEKTRO, was put on display at the World’s Fair in 1939, marking a key beginning for automation. Soon after automation and robotics started evolving, AI gave them a purpose, enriching skills across various sectors.
However, this doesn’t stop anyone from having questions about AI technology. The world has always been curious about what AI might bring, and AI has never disappointed the excited.
Uncertainty is not a new thing when talking about AI. Artificial intelligence technology has undergone many stages where humans were suspicious of its moves. It began in the 1980s, when Hans Moravec wondered why AI has such an easy time doing things that humans find hard, while having a hard time doing things that humans find easy. Moravec discussed this with other scientists, including Rodney Brooks and Marvin Minsky, who articulated the same AI paradox. They never arrived at a definitive answer, but the explanation behind Moravec’s paradox revolves around three major things:
• Evolution of AI
• Understanding of AI
• Perception of AI
AI is capable of doing what a 30-year-old can do. The technology can even go further and accomplish what is humanly impossible. However, it is a different story when comparing AI to a one-year-old child. The technology cannot match the child’s skills when it comes to perception and mobility.
AI needs to evolve like humans did
A vital reason AI lacks these basic skills is that humans have built AI only for high-profile work. We have forgotten, or do not yet know how, to configure general intelligence. In fact, AI is brilliant at very narrow competencies, whereas humans are good at pretty much everything.
If we compare humanity’s path to the tech era with AI’s, the two contrast sharply. The reason is that AI didn’t evolve. Humans find things easy to do because we have been practicing these tasks for a very long time, going back to prehistoric ancestors. But the timeline of AI is different.
Remarkably, the only way humans can teach AI is by giving it a set of instructions to do certain jobs. AI doesn’t reason about what steps need to be taken to complete a task; it simply follows human guidelines, leaving no room for the slow evolution that shaped human abilities.
Artificial General Intelligence (AGI) might undo the paradox
Artificial General Intelligence (AGI) is a form of AI capable of understanding the world as well as any human, with the same capacity to learn how to carry out a huge range of tasks. AGI does not exist so far. Just because AI can adapt to anything that humans teach it does not mean AGI is around the corner.
However, new technological breakthroughs, such as AI with vision, listening and learning capabilities, are slowly walking towards AGI. If AGI becomes real, then Moravec’s paradox will no longer exist. Computer vision that identifies objects and performs facial recognition, and Natural Language Processing (NLP) devices like Alexa and Google Duplex, are some of the first steps towards a wider future.
Moving past the paradox
High-level reasoning in humans requires very little computational power. On the other hand, sensorimotor skills, which are comparatively low-level in humans, require enormous computational power. With this in mind, as computational power increases, machines could eventually match and exceed human capabilities.
But once AGI is programmed and AI learns to do everything like humans, the next question arises. Humans will start to mistrust the technology, constantly wondering whether AI could replace them. This threat has loomed over people for a long time, and it will only intensify with the unveiling of AGI.
This article has been published from a wire agency feed without modifications to the text. Only the headline has been changed.