
History of Artificial Neural Network (ANN)

The history of neural networks arguably began in the late 1800s with scientific endeavors to study the activity of the human brain. In 1890, William James published the first work about brain activity patterns. In 1943, McCulloch and Pitts created a model of the neuron that is still used today in artificial neural networks. This model is divided into two parts (a minimal code sketch follows the list):

  • A weighted sum of the inputs.
  • An output function applied to that sum.
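
The following is a minimal sketch of such a McCulloch-Pitts-style unit. The step activation, weights, and threshold values are illustrative assumptions, not details taken from the 1943 paper.

```python
# A McCulloch-Pitts-style neuron: a weighted sum of the inputs
# followed by a step output function. Weights and threshold here
# are illustrative choices, not values from the original paper.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: a two-input AND gate, one of the logic functions the
# original model was shown to compute.
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # 0
```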

Artificial Neural Network (ANN):

In 1949, Donald Hebb published “The Organization of Behavior,” which described a law for synaptic neuron learning. This law, later known as Hebbian learning in honor of Donald Hebb, is one of the most straightforward learning rules for artificial neural networks.
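
Hebb's law is often summarized as “neurons that fire together wire together.” Below is a minimal sketch of the rule in its common modern form, Δw = η·x·y; the learning rate and toy data are illustrative assumptions.

```python
# Hebbian learning sketch: each weight grows in proportion to the
# product of its input activity x and the output activity y.

import numpy as np

def hebbian_update(weights, x, y, eta=0.1):
    """Strengthen weights for inputs that co-occur with output firing."""
    return weights + eta * y * x

rng = np.random.default_rng(0)
w = np.zeros(3)
for _ in range(10):
    x = rng.choice([0.0, 1.0], size=3)   # presynaptic activity
    y = float(x.sum() > 1)               # toy postsynaptic response
    w = hebbian_update(w, x, y)
print(w)  # weights grow where input and output were active together
```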

In 1951, Marvin Minsky made the first Artificial Neural Network (ANN) while working at Princeton.

In 1958, “The Computer and the Brain” was published, a year after John von Neumann’s death. In that book, von Neumann proposed numerous radical changes to the way analysts had been modeling the brain.

Perceptron:

The perceptron was created in 1958 at Cornell University by Frank Rosenblatt. It was an endeavor to use neural network procedures for character recognition. The perceptron was a linear system and was valuable for solving problems where the input classes were linearly separable in the input space. In 1960, Rosenblatt published the book Principles of Neurodynamics, containing much of his research and ideas about modeling the brain.
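
Below is a minimal sketch of the perceptron learning rule in its standard textbook form. The training task (a linearly separable OR problem), learning rate, and epoch count are illustrative assumptions.

```python
# Perceptron learning rule sketch: adjust each weight by
# eta * (target - prediction) * input until the data is classified.

import numpy as np

def train_perceptron(X, t, eta=0.1, epochs=20):
    w = np.zeros(X.shape[1] + 1)               # weights plus bias
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias input
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            y = 1 if x @ w >= 0 else 0         # linear threshold unit
            w += eta * (target - y) * x        # error-driven update
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 1, 1, 1])                     # OR is linearly separable
w = train_perceptron(X, t)
Xb = np.hstack([X, np.ones((4, 1))])
print((Xb @ w >= 0).astype(int))               # [0 1 1 1]
```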

Despite the early accomplishments of perceptron and artificial neural network research, many people felt these methods held limited promise. Among them were Marvin Minsky and Seymour Papert, whose 1969 book Perceptrons was used to discredit ANN research and focus attention on its apparent limitations. One of the limitations Minsky and Papert highlighted was that the perceptron, being a linear classifier, could not distinguish patterns that are not linearly separable in the input space. The perceptron’s failure to handle non-linearly separable data was not an inherent failure of the technology, however, but a matter of scale: in 1990, Hecht-Nielsen showed that a two-layer perceptron, effectively a three-layer machine, was equipped to tackle non-linear separation problems. Perceptrons nonetheless ushered in what some call the “quiet years,” when interest in ANN research was at a minimum.
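
The classic example of a non-linearly separable problem is XOR. Below is a minimal sketch of how a single hidden layer overcomes the limitation; the threshold-unit weights are hand-picked for illustration, not learned.

```python
# XOR with one hidden layer of linear threshold units. No single
# linear unit can compute XOR, but the combination OR AND NOT AND can.
# Weights and thresholds below are illustrative, hand-picked values.

import numpy as np

def step(v):
    return (v >= 0).astype(int)

def two_layer(x):
    # Hidden layer: unit 1 computes OR(x1, x2), unit 2 computes AND(x1, x2)
    h = step(np.array([x[0] + x[1] - 0.5,    # OR
                       x[0] + x[1] - 1.5]))  # AND
    # Output unit: OR AND NOT AND  ==  XOR
    return step(np.array([h[0] - h[1] - 0.5]))[0]

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, two_layer(np.array(x)))  # 0, 1, 1, 0
```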

The backpropagation algorithm, initially discovered by Werbos in 1974, was rediscovered in 1986 with “Learning Internal Representations by Error Propagation” by Rumelhart, Hinton, and Williams. Backpropagation is a type of gradient descent algorithm used with artificial neural networks for error minimization and curve fitting.
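
Below is a minimal sketch of backpropagation as gradient descent on squared error for a one-hidden-layer sigmoid network. The XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
# Backpropagation sketch: forward pass, then propagate the error
# gradient backward through each layer and take a gradient step.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)      # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)      # output layer
eta = 0.5

for _ in range(5000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: chain rule, layer by layer
    dY = (Y - T) * Y * (1 - Y)       # gradient at output pre-activation
    dH = (dY @ W2.T) * H * (1 - H)   # gradient at hidden pre-activation
    # Gradient-descent updates
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(axis=0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(axis=0)

print(Y.round(2))  # approaches [[0], [1], [1], [0]]
```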

In 1987, the IEEE annual international ANN conference was launched for ANN scientists. The same year, the International Neural Network Society (INNS) was formed, followed by the INNS journal Neural Networks in 1988.
