The United States and almost every other country today identify AI as a critical strategic area for the future of computing. More than ever, companies want to know how AI can give them an advantage in their competitive markets. According to a report released earlier this year by Appen Limited, AI budgets increased 55% year-over-year, from $500,000 to $5 million, with a greater focus on internal processes, better data understanding, and efficiency gains.
That interest is being fueled by sharply accelerated digital transformations driven by a “digital or die” imperative to ease the constraints imposed by the Covid-19 pandemic. With digitization, the volume, variety, and velocity of data have been increasing exponentially for many years. AI gained importance with the promise not only to cope with the flood of data, but to exploit it.
Although AI is commonly associated with the future, the field’s roots go back decades. In 1956, John McCarthy coined the term “artificial intelligence” during a summer workshop at Dartmouth College, where he introduced the idea that machines could simulate human intelligence if it were described precisely enough.
Those roots reach back even further, to mathematicians like William Rowan Hamilton, Kurt Gödel, Alfred Tarski, and Alan Turing, whose theories of computability, completeness, recursive sets, and logical hierarchies helped lay the foundations of mathematics; each of them tried to capture and recreate the capacity of the human brain in an abstract way. Yet despite more than 70 years of trying to make computers “smart,” natural language, computer vision, and commonsense reasoning remain open problems to this day. In fact, AI has generally been a disappointment on almost all fronts, except one.
Machine learning as the survival tool for the future of AI.
In the early 1970s, and again in the late 1980s, AI saw a sharp decline in interest and funding. Those AI winters were fueled by hype and overheated imagination about what AI could achieve, coupled with exaggerated promises to solve grand-challenge problems, and capped by an inability to actually address those problems.
Machine learning is the only part of AI that survived the two AI winters, and it is likely to survive the next, not because we developed great algorithms for learning from data, but because we have so much more data available: machine learning compensates for its weaknesses by exploiting the abundance of data sets now available and growing exponentially in the digital world.
Within AI, machine learning is used as a shortcut around the problem of understanding how human intelligence works: instead of modeling intelligence, we supply many examples of the desired results for a range of inputs. It is essentially a more flexible form of nonlinear regression that produces results, decisions, classifications, or inferences from new, similar inputs. This process can produce powerful results, and it often outperforms humans in speed, consistency, and precision.
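To make the “flexible nonlinear regression” framing concrete, here is a minimal sketch (not from the original article) in which a small neural network is fit purely from input/output examples; the synthetic data and the choice of model are illustrative assumptions.

```python
# A minimal sketch of "learning from examples as nonlinear regression":
# we never describe how to compute the target function; we only show the
# model input/output examples and let it fit a flexible curve.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: inputs and the "desired results" for each input.
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)  # noisy nonlinear relationship

# A small neural network serves as the flexible nonlinear regressor.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# For new, similar inputs the model produces outputs it was never
# explicitly programmed to compute.
print(model.predict([[0.5], [2.0]]))
```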
But that’s not enough to produce general or autonomous AI. The best way to make AI robust and resilient is to accept the notion that human intervention is required to offset the limitations of algorithms and data. As data or the world around it changes, human intervention (performed efficiently and correctly) can save us from harmful results. Human intelligence has the unique ability to identify, understand, and adapt to uncertainties and changes. Machines are best suited to processing large amounts of data, performing repetitive tasks, and relentlessly looking for combinations and contexts. A human-centered approach that brings together the best of human and machine intelligence can create robust, resilient, and intelligently adaptable solutions.
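As one hypothetical illustration of this human-centered division of labor, the sketch below routes low-confidence model decisions to a human reviewer; the threshold and the `request_human_review` hook are assumptions for illustration, not a prescribed design.

```python
# A simplified human-in-the-loop pattern: the model handles high-volume,
# repetitive decisions, while low-confidence cases are escalated to a person.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per application

def request_human_review(item) -> str:
    """Placeholder for a real review queue (ticket, annotation tool, etc.)."""
    return f"human_label_for_{item}"

def decide(item, model_predict):
    """model_predict(item) -> (label, confidence); any model can plug in here."""
    label, confidence = model_predict(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    # Low confidence or data drift: defer to human judgment instead of risking harm.
    return Decision(request_human_review(item), confidence, decided_by="human")

# Example usage with a stand-in model that is unsure about one item.
fake_model = lambda item: ("spam", 0.95) if item == "offer!!!" else ("spam", 0.40)
print(decide("offer!!!", fake_model))
print(decide("quarterly report", fake_model))
```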
A good example of this approach is machine-learned ranking (MLR) for search engines like Google. MLR uses data and human feedback to refine and optimize the search engine’s relevance-ranking algorithm. With this information, a search engine gradually improves its accuracy until it almost magically understands what we’re looking for when we type a few words into a search box.
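A greatly simplified sketch of the MLR idea follows: human feedback (here, clicks) on query-result pairs becomes training data for a model that re-ranks candidate results. The feature names and click-log format are illustrative assumptions, not a description of any real search engine’s system.

```python
# Toy learning-to-rank sketch: train on click feedback, then re-rank candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds features of a (query, document) pair, e.g.
# [term_match_score, document_popularity, freshness]; label = 1 if clicked.
X_train = np.array([
    [0.9, 0.2, 0.5],
    [0.1, 0.8, 0.3],
    [0.7, 0.6, 0.9],
    [0.2, 0.1, 0.4],
])
clicked = np.array([1, 0, 1, 0])

ranker = LogisticRegression().fit(X_train, clicked)

# At query time, score candidate results and sort by predicted relevance.
candidates = np.array([
    [0.8, 0.3, 0.6],
    [0.3, 0.9, 0.2],
    [0.5, 0.5, 0.5],
])
scores = ranker.predict_proba(candidates)[:, 1]
ranking = np.argsort(-scores)  # indices of candidates, best first
print(ranking, scores[ranking])
```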
Solving real problems in real settings is key to advancing the science and practice of AI.
Almost every advance in AI over the past few decades has been achieved by figuring out how to make machine learning work in real-world applications. For the past 15 years, China has had a maniacal focus on making machine learning technology work in real-world applications, which may explain how it is closing the gap with the US and Europe, which have held AI leadership for 70 years.
Many hard lessons from my years of experience have convinced me that the key to advancing the science of AI is figuring out how AI works on real-world problems. The philosophy we followed when we founded Yahoo Research Labs in 2005 carries forward today in the focus of the Institute for Experiential AI at Northeastern University on research in the areas that matter most to AI science. If we study why a particular AI technology will or will not work in certain applications, and determine systematically what it takes to make it work, we will solve crucial and fundamental problems.
The workshop of the AI Solutions Factory is littered with Nobel-class problems waiting to be solved. By making AI solutions practical and relevant to most businesses, we can advance the science and practice of AI and avoid another AI winter, which I believe is near.