Helping AI Remember

Most reports claim that AI will soon be everywhere, kind of like sugar or Taylor Swift.

Experts predict that in the near future, AI systems will become the cognitive foundation of everything we use or do, transforming entire industries and society.

Imagine that you’re in the friendly skies when your aircraft’s central nervous system, now controlled by AI, suddenly shuts down, rendering the plane powerless.

Alternatively, the New York Stock Exchange might shut down suddenly and completely, crashing the economy along with your life savings.

Or your toaster won’t deliver the one serving of food your body will accept before you start your daily slog to work.

Believe it or not, the killer technology that was supposedly destined to destroy humanity has a small issue to fix before it can live up to its deadly potential, a concern shared by none other than Geoffrey Hinton, one of the field’s pioneers.

It just cannot remember what it learned before, which is a dreadful problem to have.

And what it forgets, it loses all at once, through a process known as “catastrophic forgetting” that may eerily mirror the fate of your entire high school curriculum.

In remembrance of things past

Any fundamental AI system worth its salt should be able to learn a series of tasks in sequence, a capability known as continual learning.

Humans learn in part because the brain can recall memories of earlier attempts at a task.

This is thought to depend on the REM cycle during sleep, which pushes recent memories into long-term storage to make room for new ones.

Since artificial neural networks loosely mimic how the human brain functions, there has long been an expectation that an algorithm could draw on its stored knowledge of previous tasks to learn new ones, much the way we humans do. That just doesn’t seem to be the case.

Something that happens, or fails to happen, during the training of artificial neural networks is leaving huge gaps in their cognition. While learning new things, the networks completely forget their prior knowledge, after which they begin to freeze.
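The effect shows up even in the tiniest of models. Here is a purely illustrative sketch (the single-weight model and the two conflicting toy tasks are invented for this example, not taken from any study): a model fitted to one task, then trained on a second, loses its fit to the first.

```python
# Illustrative only: a one-weight model y = w*x trained by gradient descent.
# Task A and Task B are invented toy tasks with conflicting targets.

def train(w, data, lr=0.1, epochs=200):
    """Fit the single weight w by stochastic gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

def loss(w, data):
    """Mean squared error of w on a task's data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2 * x) for x in (-2, -1, 1, 2)]   # fit requires w = 2
task_b = [(x, -3 * x) for x in (-2, -1, 1, 2)]  # fit requires w = -3

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)  # near zero: task A is learned
w = train(w, task_b)             # now learn task B...
loss_a_after = loss(w, task_a)   # large: task A is catastrophically forgotten
print(loss_a_before, loss_a_after)
```

The weight ends up wherever the most recent task pushed it, with nothing left over for the earlier one.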

Researchers came up with a creative remedy: they began feeding AI systems old data while they processed new data, a method known as interleaved training, which they believed mimicked how the brain functions during sleep.
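In code, interleaved training amounts to replaying old examples alongside the new ones. A minimal sketch follows; the two-weight model and toy tasks are assumptions made for this illustration, not any research group's actual setup.

```python
# Illustrative only: a two-weight linear model, with toy tasks chosen so that
# a joint solution exists. Interleaving (rehearsal) mixes old examples into
# the new task's training data.

def train(w, data, lr=0.1, epochs=500):
    """Stochastic gradient descent on squared error for y = w[0]*x1 + w[1]*x2."""
    for _ in range(epochs):
        for (x1, x2), y in data:
            err = w[0] * x1 + w[1] * x2 - y
            w[0] -= lr * 2 * err * x1
            w[1] -= lr * 2 * err * x2
    return w

def loss(w, data):
    return sum((w[0] * x1 + w[1] * x2 - y) ** 2 for (x1, x2), y in data) / len(data)

task_a = [((1, 0), 1)]  # satisfied when w[0] == 1
task_b = [((1, 1), 0)]  # satisfied when w[0] + w[1] == 0

# Sequential training: learning B overwrites what A taught.
w = train(train([0.0, 0.0], task_a), task_b)
sequential_loss_a = loss(w, task_a)

# Interleaved training: replay A's examples while learning B.
w = train([0.0, 0.0], task_a)
w = train(w, task_b + task_a)
interleaved_loss_a = loss(w, task_a)

print(sequential_loss_a, interleaved_loss_a)
```

Because the replayed examples keep pulling the weights back toward the old task, the interleaved run settles on a solution that satisfies both tasks at once.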

From a purely practical standpoint, though, there isn’t nearly enough time during sleep for the brain (or its machine imitator) to replay all of that previous learning material, so it turns out this process doesn’t actually occur in the brain.

The solution has to be somewhere.

Researchers from the University of California, San Diego and the Institute of Computer Science of the Czech Academy of Sciences in Prague, Czech Republic, examined sleep as well, but from a different angle.

They eschewed a traditional neural network – one that constantly adjusts its synapses (the connections between neurons) until it finds a solution – in favour of a ‘spiking’ one that they thought most closely resembled the human brain.

The researchers found that a “spiking” network consumes substantially less power and bandwidth, since a neuron transmits an output only after receiving enough signals over time. That property lets the network reactivate the neurons involved in recalling previously learned tasks. And it seems to work.
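That “fire only after enough accumulated input” behaviour can be sketched with a classic leaky integrate-and-fire neuron. The parameters and inputs below are illustrative choices, not the researchers’ model.

```python
# A minimal leaky integrate-and-fire neuron: it emits a spike only after
# enough input accumulates over time, which is why spiking networks transmit
# far fewer values than conventional ones. Threshold and leak are illustrative.

def run_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(inputs):
        potential = potential * leak + current  # leak, then integrate input
        if potential >= threshold:              # fire only past threshold
            spikes.append(t)
            potential = 0.0                     # reset after the spike
    return spikes

print(run_lif([0.05] * 10))  # weak inputs leak away: no spikes
print(run_lif([0.4] * 10))   # stronger inputs accumulate into periodic spikes
```

Between spikes the neuron is silent, so downstream connections carry occasional events rather than a continuous stream of values.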

After passing through sleep-like stages, the spiking neural network was capable of carrying out both tasks.

According to Jean Erik Delanois, one of the study’s researchers from the University of California, San Diego, their work highlights the value of developing biologically inspired solutions.

In the image of thy creator

More recently, while addressing the same issue of catastrophic forgetting in deep-learning neural nets, researchers at Ohio State University avoided sleep.

To tackle it, they adopted a clever strategy of their own.

Ness Shroff, an Ohio State professor of computer science and engineering, said that his study delves into the difficulties of continual learning in these artificial neural networks and uncovers insights that begin to close the gap between machine and human learning.

Traditional machine learning algorithms are force-fed data in one huge push, Shroff and his colleagues found, but that’s not always good for the machine. In fact, how closely related tasks are to one another, what characteristics they share, and even the order in which they are presented all affect how well the algorithm recalls them.

In what may be one of the more amusing ironies of our day, Shroff and his colleagues discovered that algorithms, like humans, were significantly better at remembering when given a succession of highly distinct tasks rather than a series of similar ones.

This is how human brains work, too. If we repeatedly go to the same place or have the same experience, the events—parties, vacations, even days of the week—blend into one another. The distinctive ones stand out.

For an AI to learn new tasks as well as tasks similar to ones it has already seen, the Ohio State researchers found, dissimilar tasks need to be presented very early in the continual learning process.
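One intuition for why similar tasks interfere can be seen in a toy linear model. This is entirely an illustration and not the Ohio State setup: a new task that overlaps the old one’s features overwrites them, while a dissimilar (here, orthogonal) task leaves them untouched.

```python
# Toy illustration (not the Ohio State setup): in a two-weight linear model,
# a new task that shares features with an old one overwrites its weights,
# while a dissimilar task uses different features and leaves them alone.

def train(w, data, lr=0.1, epochs=300):
    """Stochastic gradient descent on squared error for y = w[0]*x1 + w[1]*x2."""
    for _ in range(epochs):
        for (x1, x2), y in data:
            err = w[0] * x1 + w[1] * x2 - y
            w[0] -= lr * 2 * err * x1
            w[1] -= lr * 2 * err * x2
    return w

def loss(w, data):
    return sum((w[0] * x1 + w[1] * x2 - y) ** 2 for (x1, x2), y in data) / len(data)

task_old = [((1, 0), 1)]         # uses feature 1 only
task_similar = [((1, 0.2), 0)]   # overlaps feature 1, with a conflicting target
task_dissimilar = [((0, 1), 1)]  # uses feature 2 only

w = train([0.0, 0.0], task_old)
forgetting_similar = loss(train(list(w), task_similar), task_old)
forgetting_dissimilar = loss(train(list(w), task_dissimilar), task_old)

print(forgetting_similar, forgetting_dissimilar)
```

The similar task drags the shared weight away from its old value, while the dissimilar task trains an unused weight and causes no forgetting at all.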

Their work is especially important because, according to Shroff, identifying the parallels between computers and the human brain could pave the way for a deeper understanding of AI.

For AI to be truly successful and safe, its algorithms must be scalable, able to handle many unforeseen scenarios, and able to keep learning.

These two approaches to machine memory should go a long way toward that goal.
