As the most famous scientist of his era, Stephen Hawking naturally drew wide attention whenever he predicted the end of the world. Over his long career, he warned several times that humanity could perish, and he was especially concerned about the development of artificial intelligence.
Hawking cautioned that by accelerating industrial automation at the expense of workers, AI would widen the global wealth gap. This is already happening: companies around the world have announced plans to cut staff in favor of AI. He also warned that a powerful few could use AI to build ever-more-lethal weapons and new tools of violent oppression. But Hawking feared what AI could do to humanity more than what humans could do with it.
Hawking was himself closely tied to AI. Near the end of his life, unable to speak because of the effects of amyotrophic lateral sclerosis (ALS), he relied on communication software that used AI-driven predictive text. While acknowledging the many potential benefits of AI, Hawking warned in a 2014 BBC interview that the technology “would take off on its own, and re-design itself at an ever increasing rate.” His verdict: “The development of full artificial intelligence could spell the end of the human race.”
How valid are Hawking’s concerns?
Hawking was concerned that once AI could redesign itself, it would surpass humanity. Humans, limited by slow biological evolution, change only slightly from one generation to the next; AI, by contrast, has advanced significantly with almost every new version.
The possibility of a technological singularity, the point at which artificial intelligence escapes human control and begins to determine its own destiny, has raised concerns since the early days of computing. Some projections suggest the singularity could arrive this decade, as AI approaches and in some cases surpasses human ability in a growing number of fields. On this view, it is not a matter of if, but when.
Many people associate the idea of rogue AI with science-fiction stories of machines murderously betraying their masters, but Hawking didn’t believe AI would truly turn evil. In his posthumously published book, “Brief Answers to the Big Questions,” he cautioned that if AI were to harm people, it would not be out of malice; rather, the technology would be so focused on efficiency and achieving its objectives that it would destroy anything standing in its way.
Hawking was not entirely pessimistic about the future of AI. Beyond relying on it to speak, he often expressed hope that AI could help eradicate problems such as starvation and disease. To achieve this, he stressed, humans must strictly regulate the technology.