If people aren’t careful, computers with artificial intelligence enhancements “might take over,” according to computer scientist and “Godfather of AI” Geoffrey Hinton.
Hinton, 75, said in an interview that rapidly developing AI technology could eventually outsmart humans “in five years’ time.” If that happens, he added, AI may evolve beyond humans’ ability to control it.
According to Hinton, one way these systems could slip out of control is by writing their own computer code to modify themselves. He said we should be quite concerned about that.
Hinton received the 2018 Turing Award in recognition of his long career as a deep learning and AI pioneer. After ten years at Google, he left his position as vice president and engineering fellow in May in order to speak openly about the risks AI poses.
According to Hinton, humans still don’t fully comprehend how the technology operates and develops. This includes scientists like himself who worked on the development of today’s AI systems. Many AI researchers openly acknowledge this ignorance: Sundar Pichai, the CEO of Google, referred to it as the “black box” issue with AI in April.
According to Hinton, scientists create algorithms that enable AI systems to extract information from large data collections, such as the internet. He said that when such a learning algorithm interacts with data, it builds intricate neural networks that are effective at carrying out tasks, but scientists are not precisely sure how those networks carry them out.
Hinton appears far more worried about humans losing control than Pichai and other AI researchers are. Yann LeCun, another Turing Award winner who is also regarded as a “godfather of AI,” has called claims that AI might replace humans “preposterously ridiculous,” arguing that humans can always shut down any technology that becomes too risky.
‘Enormous uncertainty’ about AI’s future
Hinton emphasised that the worst-case scenario is not a given and that sectors like health care have already benefited greatly from AI.
Hinton also mentioned the online dissemination of false information, bogus images, and videos enhanced by AI. He urged deeper investigation into AI, legislative rules to control the technology, and international prohibitions on AI-driven military robots.
At a Capitol Hill meeting last month, politicians and tech executives including Pichai, Elon Musk, Sam Altman of OpenAI, and Mark Zuckerberg of Meta discussed the need to strike a balance between regulation and innovation-friendly government policies.
Hinton said AI safeguards must be put in place as quickly as possible, whether voluntarily by tech companies or under mandates from the US government.
Hinton asserted that humanity is probably at a turning point, and that tech and political leaders must decide whether to continue developing these technologies and, if so, what steps to take to protect themselves.
Hinton stated, “I think my main message is that there’s a lot of uncertainty about what’s going to happen next.”