What is artificial intelligence (AI)?
Artificial intelligence (AI) is, at its core, the science of simulating human intelligence by machines. One definition is the branch of computer science that deals with the recreation of the human thought process. The focus is on making computers human-like, not making computers human. The goals of artificial intelligence usually fall under one of three categories: to build systems that think the same way humans do; to build systems that complete a job successfully without necessarily recreating human thought; or to use human reasoning as a model but not as the ultimate goal.
With the advent of the internet of things (IoT), the interconnection via the internet of computing devices embedded in everyday objects, AI is poised to play a large role. Artificial intelligence already plays a growing part in IoT, with some IoT platform software offering integrated AI capabilities.
Several sub-specialties make up the field as a whole. Although many of their names are used interchangeably with artificial intelligence, each has unique properties that contribute to the topic.
Machine Learning vs. AI
Artificial intelligence and machine learning (ML) are terms that are often used interchangeably in data science, though they aren't exactly the same thing. ML is a subset of artificial intelligence built on the idea that data scientists should give machines data and allow them to learn on their own. ML often uses neural networks, computer systems modeled after how the human brain processes information. A neural network is an algorithm designed to recognize patterns, calculate the probability of a certain outcome occurring, and "learn" from errors and successes through a feedback loop. Neural networks are a valuable tool, especially for neuroscience research. Deep learning, which stacks many layers of neural networks, can establish correlations between two things and learn to associate them with each other. Given enough data to work with, it can predict what will happen next.
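As a rough illustration of that feedback loop, here is a minimal sketch of a single artificial neuron trained on a toy dataset; the data, learning rate and number of steps are arbitrary choices for illustration, not a production design.

```python
import numpy as np

# Toy data: the neuron should predict 1 when the inputs are large.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y = np.array([0, 1, 0, 1])

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # weights
b = 0.0                 # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    pred = sigmoid(X @ w + b)          # forward pass: predicted probabilities
    error = pred - y                   # feedback: how wrong were we?
    w -= 0.5 * X.T @ error / len(X)    # adjust weights based on the errors
    b -= 0.5 * error.mean()

print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 1, 0, 1]
```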
There are two main frameworks of ML: supervised learning and unsupervised learning. In supervised learning, the learning algorithm starts with a set of training examples that have already been correctly labeled. The algorithm learns the correct relationships from these examples and applies these learned associations to new, unlabeled data it is exposed to. In unsupervised learning, the algorithm starts with unlabeled data. It is only concerned with inputs, not outputs. You can use unsupervised learning to group similar data points into clusters and discover which data points have similarities. In unsupervised learning, the computer teaches itself, whereas in supervised learning, the computer is taught by the data. With the rise of big data, neural networks are more important and useful than ever because they can learn from these large datasets. Deep learning is usually linked to artificial neural networks (ANNs), variations that stack multiple layers of neural networks to achieve a higher level of perception. Deep learning is being used in the medical field to accurately diagnose more than 50 eye diseases.
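To make the contrast concrete, the sketch below trains a supervised classifier on labeled points and then lets an unsupervised algorithm discover the same groups without any labels. It assumes scikit-learn is installed, and the data is synthetic.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Two blobs of synthetic points; `labels` records which blob each came from.
X, labels = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: the algorithm is taught with correctly labeled examples.
clf = LogisticRegression().fit(X, labels)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: the algorithm sees only inputs, no labels, and groups
# similar points into clusters on its own (cluster numbering is arbitrary).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("discovered clusters:   ", km.labels_[:5])
```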
Predictive analytics is composed of several statistical techniques, including ML, that estimate future outcomes. It helps to analyze future events based on the outcomes of similar events in the past. Predictive analytics and ML go hand in hand because the predictive models used often include an ML algorithm. Neural networks are one of the most widely used predictive models.
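A minimal sketch of the idea, assuming scikit-learn and an invented sales history: a model is fitted to past outcomes and then asked about a future one.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Past events: monthly sales for months 1 through 12 (invented numbers).
months = np.arange(1, 13).reshape(-1, 1)
sales = np.array([10, 12, 13, 15, 16, 18, 19, 21, 22, 24, 25, 27])

# Fit a predictive model to the historical outcomes...
model = LinearRegression().fit(months, sales)

# ...and estimate the outcome of a similar future event (month 13).
print("forecast for month 13:", model.predict([[13]])[0])
```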
Natural Language Processing
Natural language processing (NLP) began as a combination of artificial intelligence and linguistics. It is a field that focuses on "computer understanding and manipulation of human language." NLP is a way for computers to analyze and extract meaning from human language so that they can perform tasks like translation, sentiment analysis, and speech recognition, among others. Each of these topics deals with textual data in a different way. One such task is machine translation, where a computer automatically converts one natural language into another while preserving the meaning. It is difficult even by artificial intelligence standards, as it requires knowledge of word order, sense, pronouns, tense, and idioms, which vary widely across languages. In machine translation, the computer scans words that have already been translated by humans to look for patterns. Like machine learning, NLP has progressed in leaps and bounds by using neural network models that allow it to learn pattern recognition. Services like Google Translate were long built on statistical machine translation techniques and now rely heavily on neural models. There is still a long way to go until a computer can be considered completely fluent in a given language, though.
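In that spirit, here is a toy sketch of the pattern-finding step: it counts word co-occurrences in a tiny, invented parallel corpus and translates word by word, a crude caricature of statistical machine translation rather than a real system.

```python
from collections import Counter, defaultdict

# A tiny parallel corpus of human translations; all examples are invented.
pairs = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
]

# Count how often each source word co-occurs with each target word,
# a crude stand-in for the patterns statistical machine translation learns.
counts = defaultdict(Counter)
for src, tgt in pairs:
    for s in src.split():
        for t in tgt.split():
            counts[s][t] += 1

# Translate word by word using the most frequent co-occurrence.
sentence = "the cat"
print(" ".join(counts[w].most_common(1)[0][0] for w in sentence.split()))  # "le chat"
```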
Classification and clustering are two different ways that ML performs pattern recognition. Classification assigns things to a specific label, while clustering groups similar things together. You can apply either approach to NLP. Text classification aims to assign a document or fragment of text to one or more categories to make it easier to sort through. Text classification is used in spam detection and sentiment analysis, where an affect is assigned to the text being analyzed. Successful text classification, or document classification, occurs when an algorithm takes text input and reliably predicts which custom category that text falls into. Document clustering is a technique that clusters, or groups, similar documents into categories to give structure to a collection of documents. The algorithm can do this even without understanding or being fluent in the language of the text input, because it learns statistical associations between inputs and categories. It is thus able to perform information extraction from a chunk of text.
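As a minimal sketch of text classification, the following trains a naive Bayes spam detector on a tiny, invented corpus (assuming scikit-learn); a real system would need far more data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A tiny labeled corpus for spam detection; all texts are invented.
texts = ["win a free prize now", "meeting at noon tomorrow",
         "free money click here", "lunch with the team today"]
labels = ["spam", "ham", "spam", "ham"]

# The model learns statistical associations between words and categories,
# without any real understanding of the language itself.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
print(clf.predict(["claim your free prize"]))  # likely ['spam']
```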
Question answering works in a similar way. A question answering system answers questions posed in natural language. This practice is often used in customer service chatbots that can answer the most frequent or basic questions before escalating the query to a real human, if needed. These are different from bots, which are automated programs that crawl the internet looking for a specific type of information. The highest form of a question answering algorithm would pass the Turing test, a test of whether a machine's text-based chat capabilities can fool a human into thinking they are talking to another human. A machine using text generation could arguably pass the Turing test. Text generation is the ability of a machine to generate coherent, human-like dialogue. Ethical concerns exist around AI text generation because its output can be so similar to human-written text.
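A toy version of such a chatbot can be sketched as nearest-question lookup over a small FAQ; the questions, answers and threshold below are invented for illustration, and scikit-learn is assumed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny FAQ knowledge base; questions and answers are invented.
faq = {
    "what are your opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "where is my order": "Check the tracking link in your confirmation email.",
}

questions = list(faq)
vec = TfidfVectorizer(stop_words="english").fit(questions)

def answer(query, threshold=0.3):
    # Find the stored question most similar to the user's query.
    sims = cosine_similarity(vec.transform([query]), vec.transform(questions))[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "Escalating to a human agent."  # fall back when unsure
    return faq[questions[best]]

print(answer("what are your opening hours today"))  # matches the FAQ
print(answer("do you sell gift cards"))             # escalates
```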
Speech
A major area of speech in AI is speech to text, the process of converting audio and voice into written text. It can assist users who are visually or physically impaired and can promote safety through hands-free operation. Speech to text tasks use machine learning algorithms that learn from large data sets of human voice samples, which train the systems to meet production-quality standards. Speech to text has value for businesses because it can aid in video or phone call transcription. Text to speech converts written text into audio that sounds like natural speech. These technologies can be used to assist individuals who have speech disabilities. Amazon's Polly is an example of a technology that uses deep learning to synthesize human-sounding speech for e-learning, telephony and content creation applications.
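Since Amazon Polly is exposed as a cloud API, a minimal text-to-speech call might look like the sketch below; it assumes the boto3 library and already-configured AWS credentials, and the voice choice is arbitrary.

```python
import boto3

# Ask Polly to synthesize speech; the voice choice is arbitrary.
polly = boto3.client("polly")
response = polly.synthesize_speech(
    Text="Hello, this is synthesized speech.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The returned audio stream can be written straight to a playable file.
with open("speech.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```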
Speech recognition is a task where speech received by a system through a microphone is checked against a vocabulary bank for pattern recognition. When a word or phrase is recognized, the system responds with the associated verbal response or performs a specific task. You can see examples of speech recognition in Apple's Siri, Amazon's Alexa, Microsoft's Cortana and Google's Google Assistant. These products need to be able to recognize the speech input from a user and assign the correct speech output or action. Even more advanced are attempts to create speech from brainwaves for those who lack or have lost the ability to speak.
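A bare-bones version of that recognize-and-respond loop, assuming the third-party SpeechRecognition package (with PyAudio for microphone access) and an invented command vocabulary:

```python
import speech_recognition as sr

# Map recognized phrases to actions, mimicking a vocabulary bank;
# the commands here are invented examples.
COMMANDS = {
    "turn on the lights": lambda: print("Lights on."),
    "what time is it": lambda: print("It is noon."),
}

recognizer = sr.Recognizer()
with sr.Microphone() as mic:  # microphone access requires PyAudio
    audio = recognizer.listen(mic)

try:
    phrase = recognizer.recognize_google(audio).lower()
    # Perform the associated task if the phrase is in the bank.
    COMMANDS.get(phrase, lambda: print("Sorry, I don't know that one."))()
except sr.UnknownValueError:
    print("Could not understand the audio.")
```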
Expert Systems
An expert system uses a knowledge base about its application domain and an inference engine to solve problems that would normally require human intelligence. Expert systems have been applied to financial management, corporate planning, credit authorization, computer installation design and airline scheduling. They also have potential value in IoT applications. For example, an expert system in traffic management can aid the design of smart cities by acting as a "human operator" relaying traffic feedback to the appropriate routes.
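To show the knowledge base and inference engine in miniature, here is a toy forward-chaining rule engine for the traffic example; all rules and facts are invented.

```python
# A toy forward-chaining inference engine over a traffic knowledge base.
# Each rule maps a set of required facts to a conclusion.
rules = [
    ({"accident_on_main_st"}, "main_st_congested"),
    ({"rush_hour", "rain"}, "highway_congested"),
    ({"main_st_congested"}, "reroute_via_oak_ave"),
]

facts = {"accident_on_main_st", "rush_hour"}

# Keep applying rules until no new conclusions can be inferred.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes the inferred recommendation 'reroute_via_oak_ave'
```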
A limitation of expert systems is that they lack the common sense that humans have, such as an awareness of the limits of their own skills and of how their recommendations fit into the larger picture. They lack the self-awareness that humans have. Expert systems are not substitutes for decision makers because they do not have human capabilities, but they can drastically reduce the human work required to solve a problem.
Planning, scheduling and optimization
AI planning is the task of determining the course of action a system should take to reach its goals in the most optimal way possible. It means choosing a sequence of actions that has a high likelihood of transforming the state of the world, step by step, until the goal is achieved. When this task is successful, it allows for task automation. These solutions are often complex, and in dynamic environments with constant change they require frequent trial-and-error iteration to fine-tune. Scheduling is the creation of schedules: temporal assignments of activities to resources that take the necessary goals and constraints into account.
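Planning in this sense can be sketched as a search over world states. The toy planner below, with invented actions, preconditions and effects, finds the shortest action sequence that reaches a goal:

```python
from collections import deque

# Each action maps a name to (preconditions, effects); all are invented.
actions = {
    "pick_up_box": ({"box_on_floor"}, {"holding_box"}),
    "load_truck":  ({"holding_box"}, {"box_in_truck"}),
    "drive_truck": ({"box_in_truck"}, {"box_delivered"}),
}

def plan(start, goal):
    # Breadth-first search over world states finds the shortest plan.
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, post) in actions.items():
            if pre <= state:
                nxt = frozenset((state - pre) | post)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))

print(plan({"box_on_floor"}, {"box_delivered"}))
# -> ['pick_up_box', 'load_truck', 'drive_truck']
```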
Where planning is determining a course of action, scheduling is determining the order and timing of the actions it generates. These are typically executed by intelligent agents, autonomous robots and unmanned vehicles. Done successfully, they can solve planning and scheduling problems for organizations more cost-efficiently than hiring more staff, which increases overhead. Optimization can be achieved with one of the most popular ML and deep learning optimization strategies: gradient descent. It is used to train a machine learning model by changing its parameters iteratively to drive a given function toward a local minimum.
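Here is gradient descent in its simplest form, minimizing a one-dimensional quadratic; the function, starting point and learning rate are arbitrary illustrations.

```python
# Gradient descent on a simple loss f(x) = (x - 3)^2.
# The gradient f'(x) = 2 * (x - 3) points uphill, so we step against it.
x = 0.0                 # initial parameter value
learning_rate = 0.1

for step in range(100):
    gradient = 2 * (x - 3)
    x -= learning_rate * gradient   # iterative parameter update

print(round(x, 4))  # converges to the minimum at x = 3
```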
Robotics
Artificial intelligence is at one end of the spectrum of intelligent automation, while robotic process automation (RPA), the science of software robots that mimic human actions, is at the other. One is concerned with replicating how humans think and learn, while the other is concerned with replicating how humans do things. Robotics develops complex sensorimotor functions that give machines the ability to adapt to their environment. Robots can sense the environment using computer vision.
Robots are used in the global manufacturing sector for assembly, packaging and customer service, and some are sold as open-source platforms that users can teach custom tasks. Collaborative robots, or cobots, are robots designed to physically interact with humans in a shared workspace. They can be valuable to organizations that wish to eliminate human participation in dirty, dull and/or dangerous tasks.
The main idea of robotics is to make robots as autonomous as possible through learning. Despite not achieving human-like intelligence, there are many successful examples of robots executing autonomous tasks such as swimming, carrying boxes, and picking up objects and putting them down. Some robots can learn decision making by associating an action with a desired result. Kismet, a robot at M.I.T.'s Artificial Intelligence Lab, learns to recognize both body language and voice and to respond appropriately.
Computer vision
Computer vision is defined as computers obtaining a high-level understanding from digital images or videos; in other words, image recognition. It is a fundamental component of many IoT applications, including household monitoring systems, drones, and car cameras and sensors. When computer vision is coupled with deep learning, it combines the best of both worlds: optimized performance paired with accuracy and versatility. Deep learning gives IoT developers greater accuracy in object classification.
Machine vision takes computer vision one step further by combining computer vision algorithms with image capture systems to better guide robot reasoning. An example of computer vision is a computer being able to "see" a unique set of stripes on a UPC and scan and recognize it as a unique identifier. Optical character recognition (OCR) uses image recognition of letters to decipher paper printed records and/or handwriting despite a multitude of different fonts and handwriting variations across people. Another example is how Apple's Face ID unlocks your iPhone only when it recognizes your face. A machine can use image recognition to interpret input it receives through computer vision and categorize what that input is. With training, its computer vision can learn to recognize input in different states, much as humans do. Computer vision can also enable machine-assisted moderation of images.
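As a minimal image-recognition sketch, the following classifies a local photo with a pretrained network; it assumes PyTorch and torchvision are installed, and "photo.jpg" is a hypothetical file path.

```python
import torch
from PIL import Image
from torchvision import models

# Load a network pretrained on ImageNet and its matching preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

# "photo.jpg" is a hypothetical local image file.
image = Image.open("photo.jpg").convert("RGB")
batch = weights.transforms()(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)

# Map the highest-scoring class index to a human-readable label.
print(weights.meta["categories"][logits.argmax().item()])
```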