According to a new book by two AI experts, mankind may face extinction as a result of the race to create superintelligent AI.
In “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All,” Eliezer Yudkowsky and Nate Soares argue that AI development is advancing too quickly and without the necessary safeguards.
Yudkowsky told ABC News, “We tried a lot of things besides writing a book, and if you’re trying to prevent the utter extinction of humanity, you really want to try all the things you can.”
According to Yudkowsky, big tech firms predict that superintelligent AI, a speculative form of AI with cognitive capacities far beyond those of humans, will arrive within two to three years. He cautions, however, that these companies may not fully understand the risks they are taking.
According to Soares, superintelligent AI may be fundamentally different from, and more harmful than, the chatbots many people use today. “Chatbots are a first step,” he said. “They [companies] are racing to build smarter and smarter AIs.”
The authors explain that contemporary AI systems are harder to control because they are “grown” rather than built in traditional ways; when these systems do something unexpected, developers cannot simply fix the code.
“When they threaten a New York Times writer or engage in extortion, that is simply the character of these AIs as they develop. It’s not intentional behavior,” Soares explained.
Soares compared AI’s capabilities to human abilities using the example of a professional NFL team playing a high school team.
“You’re not sure what the plays are. You know who will win.” He argued that AI could eventually take control of robots, develop dangerous diseases, or build infrastructure that overwhelms humanity.
While others believe AI could help address humanity’s most pressing problems, Yudkowsky remains unconvinced.
The authors call for the development of superintelligent AI to be completely stopped.
“I don’t think you want a plan to get into a fight with something that is smarter than humanity,” Yudkowsky said. “That’s a dumb plan.”