In a recent publication, researchers from the University of Oxford and Google Deepmind presented a terrifying caution. According to the article, which was published in AI Magazine last month, the threat posed by AI is more serious than previously thought. In fact, it’s so great that humanity may someday be wiped out by artificial intelligence.
AI is amazing. It has frequently helped academics find answers to complex problems, and it can even mimic Val Kilmer’s voice in movies. However, academics now assert that the threat posed by AI is more serious than we previously believed.
The researchers made their findings public at the end of August. According to the report, the threat presented by AI may fluctuate based on a variety of aspects that the researchers have discovered.
The researchers’ main presumption, according to Michael Cohen, one of the paper’s authors, is that “AIs intervening in the administration of their incentives will have repercussions that are quite unpleasant.” According to Cohen, the circumstances the researchers found demonstrate that the likelihood of a dangerous result is higher than in any prior publication.
The team asserts that the escalating threat posed by AI is the result of an existential catastrophe. Additionally, this is highly likely, not just a possibility. The fact that additional effort can always be directed toward an issue in order to raise it to the point where the AI gets rewarded for its efforts, according to Cohen and his fellow researchers, is one of the major problems. This can cause us to disagree with the machines.
The simple version,” Cohen says in his Twitter thread, “is that we require some energy to grow food, but more energy may always be employed to raise the probability that the camera sees the number 1 forever. We are now forced to compete with a far more competent agent. In light of this, it appears that the threat posed by AI may vary depending on how it is trained.
The figure Cohen is using here is based on a test that required the AI agent to achieve a specific number. And after being instructed on how to do so, it might expend more energy in order to maximize achieving that objective. It’s an intriguing idea, and it demonstrates why so many people are concerned about AI’s potential dangers.
Of course, there are other outcomes besides those illustrated in the paper. And perhaps we could learn how to operate the machinery. This new research does, however, raise some concerns about how much we want to trust AI moving ahead and how we could regulate it in the event of an apocalyptic event.