Artificial intelligence’s potential dangers to civilization were long thought to be far-off concerns. But it’s no secret that as the technology advances, efforts to reduce its risks are falling behind. Guardrails are not in place.
Elon Musk and more than 1,000 others have signed an open letter arguing that these threats are already present and will only grow if the development of powerful AI systems is not slowed. According to a Reuters article, the signatories include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and renowned AI experts Yoshua Bengio and Stuart Russell. The letter was issued by the Future of Life Institute, which is funded primarily by the Musk Foundation, Founders Pledge, and the Silicon Valley Community Foundation.
And the situation is urgent. The organization is calling for a six-month pause on “giant AI experiments,” which the letter defines as the training of AI systems more powerful than OpenAI’s GPT-4.
Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable, the letter states. Society has paused other technologies with potentially disastrous effects, it argues, and can do the same here.
The letter’s supporters argue that while AI could bring about a profound change in the history of life on Earth, no commensurate level of planning and management is taking place. This is especially true, they say, given that AI labs are locked in an out-of-control race to build and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.
The letter poses a series of “should we” questions: as AI systems catch up to human abilities on general tasks, should we allow machines to flood information channels with propaganda, automate away jobs, develop nonhuman minds that could eventually replace us, or risk losing control of our civilization in pursuit of ever more powerful neural networks?
But, as might be expected, not everyone agrees. Sam Altman, the CEO of OpenAI, has not signed the letter, and Umeå University AI researcher Johanna Bjorklund told Reuters that the concerns are unfounded. According to Bjorklund, “These kinds of claims are intended to create hype. It’s intended to cause anxiety in people.” She does not believe it is necessary to apply the handbrake.
For its part, OpenAI has said that at some point it may be important to get independent review before starting to train future systems, and that the most advanced efforts should agree to limit the rate of growth of compute used to create new models.