
Containment algorithms cannot hold super-intelligent AI

Theoretical calculations suggest it would be impossible to build an algorithm that could control such machines

A team of computer scientists has used theoretical calculations to argue that algorithms could not control a super-intelligent AI.

Their study addresses what Oxford philosopher Nick Bostrom calls the control problem: how do we ensure super-intelligent machines act in our interests?

The researchers conceived of a theoretical containment algorithm that would resolve this problem by simulating the AI's behavior and halting the program if its actions became harmful.

But their analysis found that it would be fundamentally impossible to build an algorithm that could control such a machine, said Iyad Rahwan, Director of the Center for Humans and Machines:

If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable.
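A minimal sketch of such a containment loop makes the failure mode concrete. The function and program names below are illustrative, not code from the study, which publishes no implementation:

```python
from typing import Callable, Iterator

def is_harmful_action(action: str) -> bool:
    """Illustrative harm test applied to a single simulated action."""
    return action == "destroy the world"

def contain(ai_program: Callable[[], Iterator[str]]) -> str:
    """Hypothetical containment loop: step through the AI's simulated
    actions and halt the program the moment one looks harmful."""
    for action in ai_program():          # may loop forever if the AI never halts
        if is_harmful_action(action):
            return "HALT: harmful action detected"
    return "SAFE: program finished without acting harmfully"

def benign_nonterminating_ai() -> Iterator[str]:
    """A hypothetical AI that never halts and never does harm."""
    while True:
        yield "harmless busywork"

# contain(benign_nonterminating_ai) would never return: an outside
# observer cannot tell whether the containment algorithm is still
# analyzing the threat or has hung, the failure mode Rahwan describes.
```

The loop works only if the simulated AI eventually halts or eventually misbehaves; for a program that does neither, the containment algorithm itself never finishes.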

The study found that no single algorithm could calculate whether an AI would harm the world, due to the fundamental limits of computing:

Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.
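The quoted passage restates Turing's halting problem. The standard reduction, again with hypothetical names rather than code from the paper, shows why a perfect harm-detector cannot exist: wrap an arbitrary program so that it acts harmfully exactly when it finishes, and deciding harm for the wrapper would decide halting, which Turing proved undecidable in 1936.

```python
from typing import Any, Callable

def would_act_harmfully(program: Callable[[Any], None], data: Any) -> bool:
    """Assumed for contradiction: a total, always-correct predictor of
    whether program(data) ever takes a harmful action. Hypothetical."""
    raise NotImplementedError("no such algorithm can exist")

def do_harm() -> None:
    """Placeholder for a harmful action."""
    pass

def make_wrapper(program: Callable[[Any], None],
                 data: Any) -> Callable[[Any], None]:
    """Wrap an arbitrary program so it does harm exactly when
    program(data) halts."""
    def wrapper(_: Any) -> None:
        program(data)   # run the arbitrary program to completion...
        do_harm()       # ...then, and only then, act harmfully
    return wrapper

def halts(program: Callable[[Any], None], data: Any) -> bool:
    """If would_act_harmfully existed, it would decide the halting
    problem: the wrapper is harmful iff program(data) halts."""
    return would_act_harmfully(make_wrapper(program, data), None)

# Turing (1936) proved no algorithm can decide halting for all
# programs, so would_act_harmfully, and with it strict containment,
# is impossible in general.
```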

This type of AI remains confined to the realm of fantasy, for now. But the researchers note that the technology is making strides toward the kind of super-intelligent systems envisioned by science fiction writers.

“There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it,” said study co-author Manuel Cebrian of the Max Planck Institute for Human Development.

“The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”
