Past and Future of Deep Learning

Deep learning is growing in both popularity and revenue. In this article, we will shed light on the milestones that have led to the deep learning field we know today, from the first mathematical model of a neural network in 1943 to the first working deep learning networks in the mid-1960s and beyond.

We will then address more recent achievements, starting with Google’s Neural Machine Translation and moving on to lesser-known innovations such as Pix2Code, an application that generates layout code from GUI screenshots with 77% accuracy.

Towards the end of the article, we will briefly touch on automated learning-to-learn algorithms and democratized deep learning, that is, deep learning embedded directly in everyday toolkits.

The Past – An Overview of Significant Events

1943 – The Initial Mathematical Model of a Neural Network

For deep learning to develop, there first needed to be an established understanding of how the neural networks in the human brain operate.

In 1943, the logician Walter Pitts and the neuroscientist Warren McCulloch created the first mathematical model of a neural network. Their paper, ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’, put forth a combination of mathematics and algorithms aimed at mimicking the human thought process. The McCulloch-Pitts neuron remains a standard building block in the field today.
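
As a rough illustration (using modern notation rather than the paper’s own formalism), a McCulloch-Pitts unit can be sketched as a binary threshold gate: it fires only when the weighted sum of its binary inputs reaches a set threshold.

```python
# A minimal sketch of a McCulloch-Pitts threshold unit (illustrative only;
# the weights and threshold values here are our own, not from the 1943 paper).

def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

# Logical AND: both inputs must be active for the unit to fire.
assert mcculloch_pitts_unit([1, 1], weights=[1, 1], threshold=2) == 1
assert mcculloch_pitts_unit([1, 0], weights=[1, 1], threshold=2) == 0

# Logical OR: a single active input is enough.
assert mcculloch_pitts_unit([0, 1], weights=[1, 1], threshold=1) == 1
```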

1950 – The Prediction of Machine Learning

Alan Turing, a British mathematician, is widely known for his contribution to code-breaking efforts during the Second World War. As early as 1947, Turing foresaw the development of machine learning and the impact it would eventually have on the ordinary jobs of the time.

In 1950, Alan Turing put forward the idea of a learning machine in his paper ‘Computing Machinery and Intelligence.’ In the paper, he also proposed the ‘Turing Test’, a test used to determine whether a computer can convincingly imitate human thinking.

1957 – Setting the Foundation for Deep Neural Networks

In 1957, the psychologist Frank Rosenblatt submitted a report called ‘The Perceptron: A Perceiving and Recognizing Automaton.’ In it, he put forward the idea of constructing an electromechanical system that could learn to recognize identities and similarities between patterns of electrical, optical, and tonal information, in a way that imitates the workings of the human brain.

The proposal was more inclined towards hardware than software. It did, however, lay the foundations for bottom-up learning and is widely accepted as a basis of deep neural networks (DNNs).
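
A minimal sketch of the perceptron idea in modern form is shown below; the learning rule, learning rate, and toy OR dataset are illustrative choices of ours, not Rosenblatt’s original electromechanical design.

```python
import numpy as np

# A minimal sketch of the perceptron learning rule: nudge the weights toward
# the target whenever the current linear decision boundary misclassifies a point.

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0
            error = target - pred          # -1, 0, or +1
            w += lr * error * xi           # move the boundary toward the target
            b += lr * error
    return w, b

# Toy example: learn logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([int(xi @ w + b >= 0) for xi in X])  # expected: [0, 1, 1, 1]
```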

1965 – The First Working Deep Learning Networks

The mathematician Alexey Ivakhnenko and his associate V. G. Lapa proposed the first working deep learning networks in 1965.

Ivakhnenko developed the Group Method of Data Handling (GMDH), a “family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models”, and applied it to neural networks.

Many experts consider Ivakhnenko to be the father of modern deep learning.
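
The following is a highly simplified, illustrative rendition of one GMDH layer (our own toy code, not Ivakhnenko’s exact procedure): every pair of inputs is combined through a small quadratic polynomial fitted by least squares, and only the candidate units with the lowest validation error survive into the next layer.

```python
import itertools
import numpy as np

# Toy sketch of a single GMDH layer: fit a quadratic unit for every input pair,
# score each unit on held-out data, and keep only the best few as new features.

def quadratic_features(a, b):
    return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=3):
    candidates = []
    for i, j in itertools.combinations(range(X_train.shape[1]), 2):
        F_train = quadratic_features(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(F_train, y_train, rcond=None)
        F_val = quadratic_features(X_val[:, i], X_val[:, j])
        val_error = np.mean((F_val @ coef - y_val) ** 2)
        candidates.append((val_error, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    survivors = candidates[:keep]
    # Outputs of the surviving units become the next layer's inputs.
    new_inputs = np.column_stack(
        [quadratic_features(X_train[:, i], X_train[:, j]) @ coef
         for _, i, j, coef in survivors])
    return survivors, new_inputs

# Toy usage: y depends nonlinearly on two of four noisy inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
units, next_inputs = gmdh_layer(X[:100], y[:100], X[100:], y[100:])
print([(round(err, 3), i, j) for err, i, j, _ in units])
```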

1979-80 – An ANN Learns to Recognize Visual Patterns

A well-regarded neural network innovator, Kunihiko Fukushima is probably best known for creating the Neocognitron, an artificial neural network that learns to recognize visual patterns. It has been applied to handwritten character recognition and similar tasks, and the convolutional networks it inspired are now also used for natural language processing and recommender systems.

Fukushima’s work was heavily influenced by Hubel and Wiesel’s studies of the visual cortex, and it paved the way for the first convolutional neural networks. These networks are modeled on the way the visual cortex is organized in animals, and they are variations of multilayer perceptrons designed to require a minimum amount of preprocessing.
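
To illustrate the core weight-sharing idea behind convolutional layers (modern shorthand, not the Neocognitron’s own S-cell/C-cell formulation), the sketch below slides the same small kernel across every position of an image, so one local pattern detector is reused over the whole visual field.

```python
import numpy as np

# A minimal 2D convolution: the same kernel is applied at every image position.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0],   # responds to vertical intensity changes
                        [1.0, -1.0]])
print(conv2d(image, edge_kernel).shape)  # (7, 7)
```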

1982 – The Development of Hopfield Networks

In 1982, John Hopfield developed and circulated what are now called Hopfield networks: recurrent neural networks that act as content-addressable memory systems. The architecture is still studied and used today.
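
The sketch below illustrates the content-addressable memory idea in a toy Hopfield network: binary patterns are stored in a symmetric weight matrix via a Hebbian outer-product rule, and a corrupted cue is driven back toward the nearest stored pattern. The pattern sizes and update schedule are illustrative choices of ours.

```python
import numpy as np

# Store +/-1 patterns with the outer-product rule, then recall from a noisy cue.

def store(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):   # asynchronous updates
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])      # last bit of pattern 0 flipped
print(recall(W, noisy))                     # expected: [ 1 -1  1 -1  1 -1]
```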

1986 – Improvements in Shape Recognition and Word Prediction

In 1986, the paper “Learning Representations by Back-propagating Errors” by David Rumelhart, Geoffrey Hinton, and Ronald Williams described the backpropagation training procedure in detail.

The paper showed how existing neural networks could be vastly improved at tasks such as shape recognition and word prediction. Hinton is widely regarded as a godfather of deep learning.
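
For readers who want to see the mechanics, here is a compact, illustrative sketch of backpropagation on the XOR problem with one hidden layer; the layer sizes, loss, learning rate, and seed are our own choices rather than anything prescribed by the 1986 paper.

```python
import numpy as np

# Backpropagation on XOR with a single hidden layer.
rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for step in range(5000):
    # Forward pass through the two layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the output error is propagated back layer by layer.
    d_out = out - y                        # gradient at the output pre-activation
                                           # (sigmoid + cross-entropy loss)
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```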

1993 – Jurgen Schmidhuber Solves a ‘Very Deep Learning’ Task

In 1993, the German computer scientist Jurgen Schmidhuber solved a “very deep learning” task that required more than 1,000 layers in a recurrent neural network.

1997 – Long Short-Term Memory Was Suggested

In 1997, Sepp Hochreiter and Jurgen Schmidhuber put forward the long short-term memory (LSTM) recurrent neural network framework.

This improved both the efficiency and the practicality of recurrent neural networks by addressing the long-term dependency problem: LSTM networks can “remember” information over much longer durations.

Today, LSTM networks are widely used in deep learning circles. Google, for example, implemented an LSTM network in its speech-recognition software for Android smartphones.
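
To make the gating idea concrete, here is one LSTM cell step written out in plain NumPy; the weight shapes and variable names are generic illustrations, not the notation of the original paper or of any particular library.

```python
import numpy as np

# A single LSTM cell step with forget, input, and output gates.
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One time step. W: (4*hidden, inputs), U: (4*hidden, hidden), b: (4*hidden,)."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    f = sigmoid(z[0*n:1*n])      # forget gate: how much old memory to keep
    i = sigmoid(z[1*n:2*n])      # input gate: how much new information to write
    o = sigmoid(z[2*n:3*n])      # output gate: how much of the memory to expose
    g = np.tanh(z[3*n:4*n])      # candidate values for the cell state
    c = f * c_prev + i * g       # the cell state carries information across time
    h = o * np.tanh(c)
    return h, c

# Toy usage: run a random 5-step sequence through a 3-unit LSTM cell.
rng = np.random.default_rng(0)
inputs, hidden = 4, 3
W = rng.normal(size=(4 * hidden, inputs))
U = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
h, c = np.zeros(hidden), np.zeros(hidden)
for t in range(5):
    h, c = lstm_step(rng.normal(size=inputs), h, c, W, U, b)
print(h)
```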

1998 – Gradient-Based Learning

Yann LeCun and his co-authors were instrumental in the advancement of deep learning with the 1998 paper “Gradient-Based Learning Applied to Document Recognition.” The combination of stochastic gradient descent and the backpropagation algorithm that it demonstrated has since become the preferred approach to training deep learning models.
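
The core of stochastic gradient descent is making small updates from random mini-batches rather than computing the gradient over the whole dataset. The bare-bones sketch below illustrates the loop on simple linear regression; in deep learning frameworks the same loop is applied to gradients obtained by backpropagation.

```python
import numpy as np

# Mini-batch stochastic gradient descent on a linear regression problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(3)
lr, batch_size = 0.1, 32
for step in range(500):
    idx = rng.integers(0, len(X), size=batch_size)     # sample a mini-batch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size       # gradient of mean squared error
    w -= lr * grad                                     # SGD update

print(w.round(2))   # should land close to [ 2.  -1.   0.5]
```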

The Future – What to Expect?

Deep Learning Will Benefit from Native Support Within Spark

The Spark community is expected to boost the platform’s native deep learning capabilities over the next one to two years. Judging by recent Spark Summit sessions, the community seems to be heading towards better support for TensorFlow, with Caffe, BigDL, and Torch also candidates for adoption.

Deep Learning is Set to Create a Stable Niche in the Open Analytics Ecosystem

Many deep learning programs depend on Spark, Kafka, Hadoop, and various other open source platforms. It is becoming apparent that training, managing, and executing deep learning algorithms is not practical without access to the complete suite of big data analytics capabilities that these supporting platforms provide.

Spark has quickly established itself as an essential platform for scaling and accelerating deep learning algorithms that have been built using a varied set of tools.

Deep Learning Will Be Embedded in All Security Tools

Most next-generation security tools rely on deep learning and machine learning to establish behavioral baselines, identify anomalous behavior, and prioritize alerts for security analysts. Open source security tools leverage deep learning to analyze open source components and uncover unknown or hidden vulnerabilities. In the future, it is predicted that no security tool will be effective without a deep learning analytic or predictive component.

Deep Learning Tools Will Incorporate Simplified Programming Frameworks to Enable Quick and Efficient Coding

The application developer community increasingly insists on APIs and similar programming abstractions that let developers code the necessary algorithmic capabilities in a minimal number of lines. In the future, deep learning developers will be able to adopt integrated, open, cloud-based development environments.

These would grant access to a full gamut of off-the-shelf and pluggable algorithm libraries, which would allow for API-driven development of deep learning applications.

Deep Learning Toolkits to Support Visual Development of Reusable Components

It is predicted that deep learning toolkits will incorporate modular capabilities that assist with the easy configuration, visual design, and training of new models derived from existing building blocks.

A number of these reusable components can be sourced via “transfer learning” from past projects that addressed similar use cases. Reusable deep learning artifacts exposed through standard interfaces and libraries will include representations, neural-node layering, training methods, weights, learning rates, and related features from earlier models.
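
As a hedged sketch of what such reuse already looks like today, the example below uses the Keras API to take the convolutional layers and trained weights of an existing ImageNet model as a frozen building block and trains only a small new head; the specific model, input size, and layer choices are arbitrary examples of ours.

```python
# Transfer learning sketch with Keras: reuse a pretrained feature extractor,
# freeze it, and train only a new task-specific head.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False                       # reuse the learned representations as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # new task-specific head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(new_task_dataset, epochs=5)      # train only the new head on the new data
```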

Deep Learning Tools Will Be Included in Every Design Surface

In the next 5 to 10 years, deep learning tools, languages, and libraries will likely establish themselves as standard components of every software development toolkit.

Equally important will be user-friendly deep learning development capabilities built into the generative design tools used by designers, artists, architects, and professionals in similar fields, many of whom would never previously have thought of approaching a neural network.

Driving this adoption will be strong demand for tools powered by deep learning that can assist with, for example, image search, photorealistic rendering, style transfer, and auto-tagging.

Conclusion

As the market for deep learning gathers momentum and heads toward mass adoption, it will be used for business intelligence, data visualization and predictive analytics in virtually every computer system or consumer electronics product.
