Though artificial intelligence has advanced rapidly in recent years and appears to be a new phenomenon in modern society, it is much older than you might imagine. Being actively involved in the global AI community, I've noticed that many people still associate AI with sci-fi Hollywood movies depicting a distant future powered by intelligent robots and machines. However, this perception is waning as AI becomes more commonplace in our daily lives.
The earliest conceptions of intelligent machines appear in ancient Greek mythology, in tales of mechanical servants forged by the god Hephaestus. The modern milestones came much later. In 1950, Alan Turing proposed the famous Turing test, the so-called "imitation game" for judging whether a machine's behavior is indistinguishable from a human's; it was reportedly first passed in 2014 by Eugene Goostman, a computer program simulating a 13-year-old boy. John McCarthy coined the term "artificial intelligence" in 1955, and AI took shape as a field of research in the late 1950s with the development of the first algorithms for solving complex mathematical problems. The field continued to make progress, experiencing a real breakthrough in the first decades of the 21st century, when the biggest companies, such as IBM, took rapid strides to integrate AI.
Some of the AI algorithms we currently use can be traced back to the early 1700s. At that time, it was challenging to collect and manage large amounts of data manually. With rapid technological growth, however, the technical capabilities of data processing have improved, enabling faster collection and storage of data in one place. In today's digital age, data collection has been automated, and AI can extract and use the most interesting hidden patterns in the resulting datasets.
An AI model is trained on large amounts of data using learning algorithms, building up its capability much as a child learns from experience. We feed and nurture it with data so it can find relationships, develop an understanding and make decisions based on the training data it is given. In most cases, the dataset used to teach an AI model consists of annotated text, images, audio or video; the labels are what allow the model to learn to perform accurately. Big data is like AI's library: the more data it gets, the smarter it becomes.
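To make the idea of learning from labeled data concrete, here is a minimal, hypothetical sketch in Python (not from the article): a handful of annotated sentences train a simple classifier, which can then label text it has never seen. The library, example texts and labels are illustrative assumptions only, not a description of any particular production system.

```python
# Minimal illustrative sketch: supervised learning from labeled (annotated) data.
# Assumes scikit-learn is installed; the tiny dataset below is purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Annotated examples: each text comes with a human-assigned label.
texts = [
    "The movie was wonderful and moving",
    "A fantastic, uplifting story",
    "The plot was dull and predictable",
    "A boring film with flat characters",
]
labels = ["positive", "positive", "negative", "negative"]

# The model "learns from experience": it finds relationships between
# word patterns in the texts and the labels attached to them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Once trained, it can make decisions about data it has never seen before.
print(model.predict(["What a wonderful, uplifting movie"]))  # likely: ['positive']
```

In real systems the same principle holds at vastly larger scale: more labeled data generally means more patterns for the model to draw on.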
With such enormous datasets and the ability to learn and perform calculations that most humans can't, it might seem that AI is not far from reaching human-level intelligence. Indeed, AI can exceed human performance in many tasks, but unlike the broad range of experiences humans accumulate over a lifetime, AI learns from a single kind of experience only. Usually, AI is trained to solve one specific problem at a time and cannot move beyond that task on its own.
So far, AI has shown impressive results only in narrow application areas, such as chess-playing computers beating world chess champions and supercomputers beating human Jeopardy champions. However, these are computers programmed to solve one specific problem; they cannot interpret more complex, multilayered challenges beyond the given task. This is exactly what Moravec's paradox states: it may be easy to get computers to beat human chess champions, but difficult to give them the perception and mobility skills of a toddler.
While AI has not reached human performance, it brings valuable solutions to many real-world problems quickly and effectively. From enhanced healthcare, innovations in banking and improved environmental protection to self-driving vehicles, automated transportation, smart homes and chatbots, AI can offer simpler and more intelligent ways of accomplishing many of our daily tasks.
But how far can AI go? Will it ever be able to function autonomously and mimic cognitive human actions? We cannot envision how AI will end up evolving in the far-off future, but at this point, humans remain smarter than any type of AI. As noted in a 2017 Mother Jones article, AI systems could have “about one-tenth the power of a human brain” by 2035 and “will be capable of performing any task currently done by humans” by around 2060. Even so, I think AI is still far from reaching the level where it can adapt to changing conditions and apply the correct algorithms to the given task without humans programming it to do so.
The real concern about AI is not that people will lose control over it; after all, AI systems rely on the criteria humans apply to their development and on the historical data they are given. The real risk comes either from biased data or from the so-called knowledge acquisition bottleneck, both of which can lead to unintended consequences. For example, if an AI specialist analyzes medical data and the results reach a patient without a doctor's review, the outcome could be risky. To avoid and address these problems, we need not only quality data structured by AI specialists but also domain experts' final evaluation; only in this way can we load AI with enough knowledge before it starts learning.
For now, AI is not able to operate effectively in all domains by itself, as it needs context and feedback from humans. AI requires not only data and algorithms but also the knowledge and experience our brains accumulate throughout our lives. Everything we have encountered so far is so-called narrow AI: models that can learn and optimize independently but are limited to an extremely narrow area, with capabilities that do not extend beyond the specific predefined task.
Scientists have not yet succeeded in developing true strong AI; what we have right now is a combination of many narrow AI systems and cannot be considered genuinely strong AI. No matter how smart AI becomes in the far-off future, and whether or not self-aware AI will ever be achievable, it's important to ensure that these developments serve the needs of humanity rather than harm it.