Introducing AI for Beginners

Something amazing has happened. We didn’t quite see it coming, but it was, in retrospect, inevitable — the reemergence of artificial intelligence (AI). Nearly everywhere we look today, we see intelligent systems talking to us (Siri), offering recommendations (Netflix and Amazon), providing financial advice (Schwab Intelligent Portfolios), and winning game shows (IBM’s Watson). And we see systems emerging to improve voice recognition, image interpretation, face recognition, and even driving, based on techniques such as the deep learning efforts at Google and Facebook. Other work aims to advance natural language understanding and generation (Narrative Science’s Quill) so that machines can communicate with us on our own terms.

Defining Artificial Intelligence

The resurgence of AI has caused some confusion, especially since so many companies and capabilities have exploded onto the scene. Where do we start making sense of it all?


Let’s start with a definition. Artificial intelligence (AI) is a subfield of computer science aimed at the development of computers capable of doing things that are normally done by people — in particular, things associated with people acting intelligently.


The Dartmouth Conference

John McCarthy, then at Dartmouth College (and later a longtime Stanford researcher), coined the term in his 1955 proposal for what is now called the Dartmouth Conference, the 1956 summer workshop at which the core mission of AI was defined.

In the original proposal for the conference, McCarthy and his co-authors framed the effort as follows:

An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.


Starting from the framing quoted in the sidebar “The Dartmouth Conference,” any program is an AI system simply by virtue of doing something that we would normally think of as intelligent in humans. How it does so is not the issue; that it is able to do it at all is the key point.

One summer turned into 60 years. Building intelligence just turned out to be a little harder than they thought.

Welcoming Back AI

AI has had a few excellent runs. In the Sixties, it was the great promise of what we would be able to do with the machine. In the Eighties, it was going to revolutionize business. But in both of these eras, the promise far outstripped our ability to deliver.

What makes the latest round of AI different? What makes today’s systems different from the expert systems, diagnostic programs, and neural nets of the past?

There are many reasons behind AI’s rebirth, but they can be summarized into five core drivers:

We have the raw horsepower of increased computational resources (our computers can think harder and faster). Techniques that worked well in the past but couldn’t scale up now run well on our expanded computational grid.

We have an explosive growth of data available to our machines (our systems have more to think about). Learning systems in particular, which get better with more data, can now look at billions of examples rather than a few hundred.

We have seen a shift away from broad AI to a deeper focus on specific problems (our AI applications are now thinking about something rather than daydreaming without focus). Systems like Siri and Cortana work within limited domains of action that can be well modeled, and they focus on pulling very specific words out of what you have said rather than performing language understanding in general.

We have transformed the problem of knowledge engineering, or putting rules into a system, into learning. (Our systems are using rules that they learn on their own.) The bottleneck in AI systems of the past was our ability to put in all the rules needed to reason in an area (keying in five years of a medical education is hard to do). Many modern approaches instead focus on learning these rules automatically, as the sketch after this list illustrates.

We have adopted alternative reasoning models based on the understanding that systems don’t have to reason like people in order to be smart. (Let the machine think like a machine.)
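To make that shift from hand-coded rules to learned rules concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the toy temperature readings, their labels, and the single-threshold “fever” rule are not from any real system.

```python
# Knowledge engineering: a human expert keys the rule in directly.
def diagnose_by_hand(temp_c):
    return "fever" if temp_c >= 38.0 else "normal"

# Learning: the system derives a comparable rule from labeled examples
# by trying candidate thresholds and keeping the most accurate one.
examples = [(36.5, "normal"), (37.0, "normal"), (38.2, "fever"), (39.1, "fever")]

def learn_threshold(examples):
    best_t, best_acc = None, -1.0
    for t in sorted(temp for temp, _ in examples):
        correct = sum(
            ("fever" if temp >= t else "normal") == label
            for temp, label in examples
        )
        accuracy = correct / len(examples)
        if accuracy > best_acc:
            best_t, best_acc = t, accuracy
    return best_t

print("learned rule: fever if temperature >=", learn_threshold(examples))
```

The point of the toy example is the division of labor: in the first function, a person supplies the rule; in the second, the person supplies only examples, and the rule falls out of the data.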

All these factors together have given us the first real renaissance of intelligent machines: systems that are part of our lives today and are being adopted as necessary tools in the workplace of tomorrow.

In the workplace, understanding how these new technologies work is absolutely vital. Black boxes that give us answers without explanations, or systems that fail to communicate with us, cannot become our trusted partners. We need to understand the basics of how these systems reason, and the systems need to be able to explain how they come up with their answers.

Exploring AI: Weak and Strong

For some researchers and developers, the goal is to build systems that can act (or just think) intelligently in the same way that people do. Others simply don’t care if the systems they build have humanlike functionality, just so long as those systems do the right thing. Alongside these two schools of thought are others somewhere in between, using human reasoning as a model to help inform how we can get computers to do similar things.

Strong AI

The work aimed at genuinely simulating human reasoning tends to be called strong AI, in that any result can be used not only to build systems that think but also to explain how humans think. Genuine models of strong AI, systems that are actual simulations of human cognition, have yet to be built.

Weak AI

The work in the second school of thought, aimed simply at getting systems to work, is usually called weak AI, in that while we might be able to build systems that behave like humans, the results tell us nothing about how humans think. A prime example is IBM’s Deep Blue, a system that was a master chess player but certainly did not play the way humans do, and that told us very little about cognition in general.

Everything in between

Balanced between strong and weak AI are those systems that are informed by human reasoning but not slaves to it. This tends to be where most of the more powerful work in AI is happening today. This work uses human reasoning as a guide, but is not driven by the goal to perfectly model it. Now if we could only think of a catchy name for this school of thought! I don’t know, maybe Practical AI?

A good example is advanced natural language generation (NLG). Advanced NLG platforms transform data into language. Where basic NLG platforms simply turn data into text, advanced platforms turn that data into language indistinguishable from the way a human would write. By analyzing the context of what is being said and deciding which things are the most interesting and important to say, these platforms communicate with us through intelligent narratives.
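Here is a minimal sketch of that distinction in Python. It is not any vendor’s actual platform: the sales figures are invented, and the “interestingness” rule (furthest from the average) is a crude stand-in for the much richer contextual analysis real platforms perform.

```python
sales = {"North": 120, "South": 95, "East": 310, "West": 102}

# Basic NLG: mechanically turn every data point into a sentence.
basic = ". ".join(f"{region} sold {units} units" for region, units in sales.items())

# A step toward advanced NLG: decide what is most worth saying,
# then lead with it instead of reciting the whole table.
average = sum(sales.values()) / len(sales)
region, units = max(sales.items(), key=lambda kv: abs(kv[1] - average))
advanced = (
    f"{region} stood out this quarter, selling {units} units "
    f"against an average of {average:.0f}."
)

print(basic)     # four flat sentences, one per region
print(advanced)  # one sentence built around the most notable fact
```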

The important takeaway is that in order for a system to be AI, it doesn’t have to be smart in the same way that people are. It just needs to be smart.

Narrow AI, broad AI, and is that AI at all?

Some AI systems are designed around specific tasks (often called narrow AI) and some are designed around the ability to reason in general (referred to as broad AI or general AI). As with strong and weak AI, the most visible work tends to focus on specific problems and falls into the category of narrow AI.

The major exceptions to this are found in emerging work such as Google’s deep learning (aimed at a general model of automatically learning categories from examples) and IBM’s Watson (designed to draw conclusions from masses of textual evidence). But in both of these cases, the commercial impact of these systems has yet to play out fully.

The power of narrow AI systems is that they are focused on specific tasks. The weakness is that these systems tend to be very good at what they do and absolutely useless for things that they don’t do.

Different systems use different techniques and are aimed at different kinds of inference. So there’s a difference between systems that recommend things to you based on your past behavior, systems that learn to recognize images from examples, and systems that make decisions based on the synthesis of evidence.

Consider these differences when looking at systems. You probably don’t want a system that is really good at finding the nearest gas station to do your medical diagnostics.

Many systems fall under the definition of narrow AI even though some people don’t think of them as AI at all. When Amazon recommends a book to you, you may not realize that an AI system is behind the recommendation. The system collects information about you and your buying behavior, figures out who you are and how you are similar to other people, and uses that information to suggest products that similar people like. You don’t need to understand how the system works; Amazon’s ability to look at what you like and figure out what else you might like is pretty darn smart.
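The “people similar to you liked this” idea behind such recommenders can be sketched in a few lines of Python. This is not Amazon’s actual system: the purchase histories are invented, and real recommenders use far more signals than simple set overlap.

```python
purchases = {
    "you":   {"Dune", "Neuromancer"},
    "alice": {"Dune", "Neuromancer", "Foundation"},
    "bob":   {"Dune", "Foundation", "Hyperion"},
    "carol": {"Cookbook"},
}

def similarity(a, b):
    # Jaccard similarity: how much two purchase histories overlap.
    return len(a & b) / len(a | b)

def recommend(user, purchases):
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        # Weight each item you don't own by how similar its owner is to you.
        for item in theirs - mine:
            scores[item] = scores.get(item, 0.0) + sim
    return max(scores, key=scores.get)

print(recommend("you", purchases))  # Foundation: favored by your nearest neighbors
```

Because “alice” and “bob” overlap heavily with “you,” the book they own that you don’t scores highest, while “carol” has nothing in common with you, so her purchases carry no weight.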

Understanding What’s in a Name

As more and more AI systems appear, we are seeing a proliferation of new names for AI (check out Figure 1-1 for some examples). In an effort to brand and rebrand, marketing departments around the globe keep trying out new words for “smart.”

You can call them cognitive computing, smart machines, intelligent assistants, predictive analytics, recommendation systems, deep learning, machine learning, self-driving cars, question-answering systems, natural language generation platforms, or a host of other fancy titles. They are all names for different aspects of AI. Each in its own way is doing something that we would see as part of what it means to be an intelligent human.
