AI brain to outperform humans?

A multidisciplinary team of researchers from Technische Universität Berlin recently developed a neural ‘network’ that, with a single neuron, could one day outperform human brainpower.

Our brains contain about 86 billion neurons. Combined, they form one of the most advanced organic neural networks known to exist.

Current cutting-edge artificial intelligence systems attempt to mimic the human brain by creating multi-layered neural networks that are designed to cram as many neurons into as little space as possible.

Unfortunately, such designs need enormous amounts of power and yield results that pale when compared to the robust, energy-efficient human brain.

According to Katyanna Quach’s article in The Register, scientists estimate that the energy cost of training just one neural “super network” is comparable to that of a journey to the Moon and back:

The size of neural networks, as well as the hardware required to train them using massive data sets, is increasing. Consider GPT-3: it has 175 billion parameters, more than 100 times as many as its predecessor, GPT-2.

Bigger may be better in terms of performance, but at what cost to the environment? According to Carbontracker, training GPT-3 just once requires the same amount of energy as 126 homes in Denmark use in a year, or driving to the Moon and back.

By creating a neural network with a single neuron, the Berlin team decided to challenge the notion that bigger is better.

A network typically requires more than one node. However, in this case, the single neuron can network with itself by spreading out over time rather than space.

According to the team’s research paper:

We developed a method for the complete folding-in-time of a multilayer feed-forward DNN. This Fit-DNN approach requires only a single neuron with feedback-modulated delay loops. An arbitrarily deep or wide DNN can be realized by temporal sequentialization of the nonlinear operations.

In a traditional neural network such as GPT-3, each neuron carries weights that can be tuned to refine the output. More neurons therefore mean more parameters, and more parameters generally mean finer results.

The Berlin team, on the other hand, discovered that they could perform a similar function by weighting the same neuron differently over time rather than spreading differently-weighted neurons across space.
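
To make that trade concrete, here is a minimal numerical sketch of the idea of swapping space for time. It is not the team's optical hardware: the weights, the two-layer shape, and the tanh nonlinearity are all made up for illustration. The same small feed-forward computation is evaluated twice, once with a whole layer of neurons firing in parallel and once by reusing a single nonlinearity over successive time steps with re-modulated weights.

```python
# Toy illustration (not the Berlin team's optical implementation): the same
# two-layer feed-forward computation evaluated "in space" with a layer of
# neurons at once, and "in time" by reusing one nonlinearity sequentially
# with time-varying weights, in the spirit of the folded-in-time idea.
import numpy as np

rng = np.random.default_rng(0)
f = np.tanh                      # the single neuron's nonlinear activation (assumed)

x = rng.normal(size=4)           # input vector
W1 = rng.normal(size=(3, 4))     # hidden-layer weights (3 neurons "in space")
W2 = rng.normal(size=(1, 3))     # output-layer weights

# Conventional evaluation: every hidden neuron fires in parallel.
hidden_parallel = f(W1 @ x)
out_parallel = f(W2 @ hidden_parallel)

# Folded-in-time evaluation: one neuron is driven at successive time steps,
# and its input weights are re-modulated at each step, so the time axis
# stands in for the layer's spatial extent.
hidden_sequential = np.empty(3)
for t in range(3):                       # time step t plays the role of neuron t
    hidden_sequential[t] = f(W1[t] @ x)  # same nonlinearity, different weights per step
out_sequential = f(W2 @ hidden_sequential)

assert np.allclose(out_parallel, out_sequential)  # identical output, one reused neuron
```

The point of the folded approach is that the loop index, rather than additional physical neurons, carries the network's width and depth.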

According to a press release issued by Technische Universität Berlin:

This would be analogous to a single guest simulating a large dinner table conversation by rapidly switching seats and speaking each part.

“Rapidly” is an understatement. The team claims that by driving time-based feedback loops in the neuron with lasers, performing its neural networking at or near the speed of light, the system could theoretically approach the universe's physical speed limit.

What does this mean for artificial intelligence? According to the researchers, it could help offset the rising energy costs of training powerful networks. If ever-larger networks keep doubling or tripling their energy requirements, training them will eventually become infeasible.

The real question is whether a single neuron trapped in a time loop can produce the same results as billions of neurons.

In preliminary testing, the researchers used the new system for computer vision tasks. It was able to remove artificially added noise from images of clothing and reconstruct accurate versions of the originals, a task considered fairly advanced for modern AI.
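
For readers unfamiliar with that kind of test, the sketch below sets up the task in the abstract: noise is added to a clean image, and a denoiser is scored on how closely it reconstructs the original. The 28x28 image size, the noise level, and the simple mean-filter "denoiser" are placeholders chosen for illustration; the Fit-DNN itself would instead be trained to perform this mapping.

```python
# Rough sketch of a denoising task of the kind described (assumed details:
# 28x28 grayscale clothing-style images and additive Gaussian noise).
import numpy as np

rng = np.random.default_rng(1)

# Placeholder "clothing image": random values stand in for real pixel data.
clean = rng.uniform(0.0, 1.0, size=(28, 28))
noisy = np.clip(clean + rng.normal(0.0, 0.3, clean.shape), 0.0, 1.0)  # manually added noise

def denoise(img):
    # Stand-in denoiser: a 3x3 mean filter. The Fit-DNN would instead be
    # trained to map noisy images back to their clean originals.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

restored = denoise(noisy)
mse = float(np.mean((restored - clean) ** 2))  # reconstruction error: lower is better
print(f"reconstruction MSE: {mse:.4f}")
```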

The scientists believe that with further development, the system could be expanded to create an infinite number of neuronal connections from neurons suspended in time.

It’s possible that such a system could outperform the human brain and become the world’s most powerful neural network, which AI experts call a “superintelligence.”
