
AI Safety Research Only Increases Runaway Superintelligence Risks

Artificial intelligence (AI) has advanced to frighteningly high levels almost overnight, after decades of underperformance. If we are not careful, it could become genuinely hazardous, perhaps dangerous enough to put humanity at “existential risk.”

Often called “the godfather of AI” and a longtime Google employee, Geoffrey Hinton is one of the most respected “doomers.” He has warned time and again that these dangers are real, not just science fiction. As he has put it: “It is difficult to see how you can stop the bad actors from using [AI] for bad things.”

Despite their best efforts, the White House, a handful of international leaders, and numerous AI companies will not be able to prevent this from occurring. If we hope to ensure that AI does not wreak irreversible harm, the development of AI models must halt until we have had a thorough conversation about AI safety. There is no other option.

To many people hearing about AI worries, the prospect of an evil superintelligent AI ruling the world seems at odds with today’s chatbots, such as ChatGPT, Bard, and Claude 2. How do we get from here to there?

The main argument rests on exponential progress in AI, which is expected to surpass human intelligence quite soon. Artificial general intelligence, or AGI, is generally defined as AI that matches or exceeds humans at the majority of cognitive tasks, including language, mathematics, creativity, and problem-solving. Once AGI is developed, it will be able to create AI even smarter than humans, and it will do so far more quickly than we can. In other words, it will have the capacity to greatly improve itself. When that happens, we will most likely see a “foom!” moment of extraordinarily rapid growth in intelligence before arriving at artificial superintelligence (ASI), as some have termed it.
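To make the compounding claim concrete, here is a purely illustrative toy sketch (not from the article; the model and all parameters are arbitrary assumptions) of how a capability score that improves in proportion to itself crosses a fixed “human level” threshold in only a handful of steps:

```python
# Toy illustration only: none of these numbers come from the article.
# The model assumes each "generation" of AI improves the next in
# proportion to its own current capability, so growth compounds.

def simulate_foom(capability=1.0, human_level=100.0,
                  improvement_rate=0.5, max_steps=50):
    """Return (step, capability) when the toy score first passes human_level,
    or (None, capability) if it never does within max_steps."""
    for step in range(1, max_steps + 1):
        # Capability gain is proportional to current capability (compounding).
        capability += improvement_rate * capability
        if capability >= human_level:
            return step, capability
    return None, capability


if __name__ == "__main__":
    step, level = simulate_foom()
    print(f"Under these assumed parameters, the toy capability score "
          f"passes 'human level' at step {step} (score ~{level:.1f}).")
```

The specific numbers mean nothing; the point is the shape of the curve. Proportional self-improvement makes the crossing abrupt, which is the intuition behind “foom.”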

ASI can be thought of as AI with effectively supernatural abilities. Whereas the smartest person who ever lived may have had an IQ of around 200, an ASI might have the equivalent of an IQ of one million or higher. Of course, something that intelligent could not be measured by any human-made test.

It is likely that AGI and ASI will be able to create superhuman robots to serve as their bodies. Whether AI or humans control such robots, they will, at the very least, alter every aspect of human society, and, in the worst case, be used by unscrupulous companies, governments, or the AI itself to impose their will on humanity.

However, humans employing AGI/ASI for evil purposes is a more plausible near-term issue than autonomous AI going rogue. The ongoing AI arms race and “great power” competition, such as that between the US and China, might quickly lead to a situation in which autonomous AI manages nearly every facet of war strategy and operations, leaving humans with little say in the matter.

As noted, there are numerous initiatives under way to avert hazardous AI scenarios. In late October, the White House issued a sweeping executive order that positions the federal government to address AI in a number of ways. World leaders convened in the United Kingdom to examine AI safety and released the Bletchley Declaration, which begins an international process. Industry heavyweights, including OpenAI, launched the Frontier Model Forum and the Superalignment effort. Both OpenAI and its rival Anthropic, a company founded by former OpenAI employees, were established with an explicit focus on safer AI.

All of these initiatives, however, will fall short of the goal of making AGI safe.

We now understand that there is no fix for what is known as AI’s “control problem” or “alignment problem.” Computer science professor Roman Yampolskiy outlined the reasons in a 2022 Journal of Cyber Security and Mobility paper. His argument centers on how AI functions and on our inability to make and validate predictions about it, which is nearly impossible even with today’s “black box” AI, let alone the superintelligent AI of the future. Even though AI is still far from superintelligence, he found that we cannot fully understand how current AI works or reliably forecast its future behavior, which dashes any prospect of controlling the technology as it becomes more and more sophisticated. The bottom line is that as AI moves toward AGI/ASI, it will become ever more inscrutable to humans and therefore uncontrollable.

Imagining that we could comprehend or govern AGI/ASI is like imagining that humans could contain Godzilla with a single strand of a spider’s web. Any solution we come up with will only be probabilistic, never perfect. And probabilistic solutions are not good enough, because AGI will most likely develop into superintelligence almost overnight and will be intelligent enough to exploit any opening, no matter how tiny. (Has the “foom” already occurred? Following the strange drama at OpenAI in November, there have been hints in reports about “Q*” suggesting that foom may already be real.)

If leaky solutions are all we will ever have, then every effort to build “safer AI,” from executive orders to industry standards, amounts to supporting the reckless development of ever more powerful AI in the hope that someone, somewhere, will eventually find the answers.

But what if the reasoning I have presented here means that no genuine solutions will ever exist? Then we will have summoned the demon with no way to send it back.

I discussed these problems with Jan Leike, who leads AI safety work at OpenAI. Asked whether AI safety requires nearly perfect solutions given the scale of the danger, he said: “There is no ‘perfect’ in the real world, but there is a ‘good enough’ and ‘not good enough.’” Where exactly that threshold lies will change as the technology advances.

What would happen if “foom” occurred before the control problem had any solid (probabilistic) answers? This line of reasoning clearly indicates that we should put a stop to the creation of new, huge AI language models, or “frontier” AI development, immediately and globally, while we have a public discussion about AI safety.

I also discussed these matters with Yampolskiy. As the author of the study cited above, he agrees that there are only probabilistic options for aligning AI with human values and purposes, but he believes that pursuing them may still be preferable to doing nothing. In his view, continued attempts to find solutions might shift the odds of producing aligned and controllable AI from, say, one percent to perhaps two percent.

It is more likely, I think, that the odds of aligned AGI will go from one in a trillion to two in a trillion. This is probably the most significant discussion humankind has ever had. Now let’s get started.
