By emulating the way sleep helps people retain what they learn during the day, an artificial intelligence can learn, and remember, how to perform a variety of tasks.
According to Maxim Bazhenov of the University of California, San Diego, there is currently a big push to apply ideas from neuroscience and biology to improve existing machine learning, and sleep is one of them.
Many AIs can only master a single set of well-defined tasks; they cannot learn new information later without losing everything they have already learned. The problem arises, says Pavel Sanda of the Czech Academy of Sciences, when you want to build systems capable of so-called lifelong learning. Humans acquire knowledge through lifelong learning, which lets them respond to and overcome new obstacles as they arise.
Bazhenov, Sanda and their colleagues trained a spiking neural network, a connected grid of artificial neurons that mimics the structure of the human brain, to learn two distinct tasks without overwriting the connections learned for the first one. They achieved this by interleaving periods of focused training with sleep-like periods.
The researchers imitated sleep in the neural network by stimulating its artificial neurons in a noisy pattern. They also made sure that this sleep-mimicking noise roughly matched the pattern of neuron activity seen during the training sessions, which reinforced the connections learned for both tasks.
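The idea can be illustrated with a toy sketch (my own, not the authors' code): a simple rate-based Hebbian network stands in for the actual spiking model, and the "sleep" phase feeds it random spikes drawn at the same per-input rates observed during training, so the same plasticity rule keeps reinforcing the connections the task laid down.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
W = rng.normal(0.0, 0.1, (n_out, n_in))   # random initial synaptic weights

def hebbian_step(W, x, lr=0.05, threshold=0.2):
    """Strengthen weights between co-active inputs and outputs."""
    y = (W @ x > threshold).astype(float)  # crude thresholded 'spiking' output
    W += lr * np.outer(y, x)               # Hebbian: post-activity * pre-activity
    return W

# Training phase: sparse binary patterns from one task drive plasticity.
task_patterns = (rng.random((50, n_in)) < 0.3).astype(float)
for x in task_patterns:
    W = hebbian_step(W, x)

# Per-input firing rates observed during training ...
train_rates = task_patterns.mean(axis=0)

# Sleep phase: random spikes drawn at those same rates, so the noise
# "mirrors" training activity while the same Hebbian rule runs,
# with weaker plasticity than during waking training.
for _ in range(50):
    noise = (rng.random(n_in) < train_rates).astype(float)
    W = hebbian_step(W, noise, lr=0.01)
```

The threshold, learning rates and pattern sparsity here are arbitrary choices for illustration; the published model uses spiking neurons with biologically grounded plasticity rules rather than this rate-based shorthand.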
At first, the team trained the network on one task, then on the second, and only added a period of sleep at the end. But they soon realised that with this sequence, the connections formed during the first task were still lost.
Instead, subsequent experiments by Erik Delanois at the University of California, San Diego revealed that it was crucial to “have fast alternating sessions of training and sleep” while the AI was mastering the second task. This strengthened the connections from the first task that would otherwise have been forgotten.
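The scheduling difference can be sketched as follows (again my own toy illustration, reusing a rate-based Hebbian stand-in for the real spiking model): the second task is learned in short chunks, each followed by a brief sleep phase whose noise matches the first task's activity statistics, so the older connections are refreshed before the new task can overwrite them.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 4
W = rng.normal(0.0, 0.1, (n_out, n_in))

def hebbian_block(W, patterns, lr):
    """Run one block of Hebbian updates over a batch of input patterns."""
    for x in patterns:
        y = (W @ x > 0.2).astype(float)   # thresholded 'spiking' output
        W += lr * np.outer(y, x)
    return W

task1 = (rng.random((40, n_in)) < 0.3).astype(float)
task2 = (rng.random((40, n_in)) < 0.3).astype(float)

# Task 1 is learned in a single block.
W = hebbian_block(W, task1, lr=0.05)
rates1 = task1.mean(axis=0)               # activity statistics to replay later

# Task 2 is learned in fast alternations of brief training and brief sleep,
# with the sleep noise drawn at task 1's firing rates.
for chunk in np.array_split(task2, 8):
    W = hebbian_block(W, chunk, lr=0.05)                       # brief training
    sleep_noise = (rng.random((20, n_in)) < rates1).astype(float)
    W = hebbian_block(W, sleep_noise, lr=0.01)                 # brief sleep
```

Training the whole of `task2` in one block and sleeping only at the end corresponds to the schedule the team first tried, which failed to protect the task-1 connections.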
In experiments, a spiking neural network trained in this manner allowed an AI agent to learn two distinct foraging patterns, searching for simulated food particles while avoiding dangerous ones.
According to Hava Siegelmann of the University of Massachusetts Amherst, “Such a network will be able to combine sequentially obtained knowledge in clever ways and apply this learning to unexpected contexts, just like animals and people do.”
Siegelmann says that spiking neural networks, with their intricate, biologically inspired design, haven’t yet proved useful for widespread application because they are challenging to train. The next significant step in proving the utility of this approach will be demonstrations on more challenging tasks using the popular artificial neural networks employed by technology companies.
Spiking neural networks do have the advantage of being more energy-efficient than other neural networks. Ryan Golden of the University of California, San Diego, thinks there will be a significant drive towards spiking network technology over the next decade or so. “It’s wise to work those things out in the beginning,” he says.