There is considerable dispute and uncertainty about the risks posed by the potential future development of superhuman AI, but many artificial intelligence experts believe it has a non-trivial chance of triggering human extinction.
These conclusions come from the largest-ever poll of AI researchers, covering 2700 people who had recently presented work at six of the most prestigious AI conferences. Participants were asked to predict timelines for future technological milestones in artificial intelligence, as well as the positive or negative effects those developments might have on society. About 58% of researchers said they believed there was a 5% chance of AI causing human extinction or other dire outcomes.
According to Katja Grace of the Machine Intelligence Research Institute in California, one of the paper’s authors, this is a significant indication that most AI researchers do not consider it strongly implausible that advanced AI will wipe out humanity. She believes the general perception of a non-minuscule risk is far more informative than the precise percentage.
However, Émile Torres of Case Western Reserve University in Ohio thinks there is no reason to panic just yet. They argue that many AI experts “don’t have a good track record” of predicting upcoming advances in AI. Although Grace and her colleagues acknowledged that experts are not especially reliable at forecasting AI’s trajectory, they showed that a 2016 iteration of their survey did a “fairly good job of forecasting” major advances in the field.
Compared with responses to a 2022 edition of the same survey, many AI researchers now forecast that AI will reach crucial milestones sooner than previously expected. This shift coincides with ChatGPT’s November 2022 launch and Silicon Valley’s subsequent push to widely deploy comparable chatbot services based on large language models.
According to the researchers surveyed, AI systems have a 50% or greater chance of fully accomplishing most of the 39 sample tasks within the next ten years. These tasks include producing brand-new songs indistinguishable from Taylor Swift hits or building a payment processing website entirely from scratch. Some tasks, such as physically installing electrical wiring in a new home or solving long-standing mathematics problems, are expected to take longer.
Researchers gave a 50% chance that AI will surpass humans on every task by 2047, and a 50% chance that all human jobs will be fully automatable by 2116. These forecasts are 13 and 48 years earlier, respectively, than the estimates from the previous year’s poll.
However, Torres warns that high hopes for AI development may yet be disappointed. Many such breakthroughs are fairly unpredictable, and they add that it is entirely possible the field of artificial intelligence experiences another winter, alluding to the collapse of corporate interest and investment in AI in the 1970s and 1980s.
Aside from the risks of superhuman AI, there are more immediate concerns. Seventy percent or more of AI researchers described scenarios involving deepfakes, manipulation of public opinion, engineered weapons, authoritarian control of populations and worsening economic inequality as extremely or substantially concerning. Torres also emphasized the risks of artificial intelligence spreading disinformation about existential issues such as climate change, or of it degrading democratic governance.
According to Torres, we already possess technology that could gravely jeopardize [US] democracy. We will see what transpires in the 2024 election, they add.