
Is AI as formidable as nuclear weapons?

Of all the potentially revolutionary new technologies, Artificial Intelligence (AI) is possibly the most disruptive of them all. Put simply, AI refers to systems capable of performing tasks that would normally require human intelligence, such as visual and speech recognition, decision-making, and perhaps, one day, thinking.

Thinking? AI has already defeated the world's best Go and chess players. Suppose AI exceeds human intelligence. So what?

Could AI super-intelligence cure cancer, improve wellbeing, mitigate climate change, and defeat many of the worst diseases on the planet? Or could super-intelligent AI turn against humanity, as depicted in the Terminator films? Or, in the end, is the potential of AI simply being overestimated?

Albert Einstein described the universe as “finite but unbounded”. This definition could fit future AI applications, but how do we know?

Perhaps the only comparably disruptive technology was nuclear and thermonuclear weapons. These weapons disrupted and irrevocably changed the nature, conduct, and character of war and its politics. The reason: a thermonuclear holocaust that eradicated the belligerents would leave no winners, only victims and losers.

What then are the common links?

Nuclear weapons have long provoked heated debate over their moral and legal implications and over when or how they could or should be used: in a first counter-force strike against military targets to blunt retaliation; against population and industrial centers; or "tactically" to limit escalation or redress imbalances in conventional forces. AI has rekindled debate on equally critical issues and questions about its place in society.

Nuclear weapons ultimately led to doctrine and "rules of the game" to discourage and prevent their spread and use, in part through arms control. Will AI lead to a regulatory regime, or is the technology too universal for a governing code?

Nuclear weapons are existential. Are there any conditions under which artificial intelligence could become just as dangerous? The proliferation of these weapons has led to international agreements to prevent their spread. Will this apply to artificial intelligence?

It has been argued that if one side gained nuclear superiority over the other, conflict or more aggressive behavior would follow. Does AI raise similar concerns?

There are some important differences. First, nuclear weapons affected national security above all. Artificial intelligence will certainly have an impact on a far broader swath of society, much as the industrial and information revolutions did, with both positive and negative consequences.

Second, the destructive power of nuclear weapons is what made them so important. At this stage, AI needs interconnectivity to develop its full destructive potential. Ironically, however, as societies advanced, these two revolutions also had the unintended consequence of creating greater vulnerabilities, weaknesses, and dependencies that are open to great and even catastrophic disruption.

COVID-19, massive storms, fires, droughts, and cyber attacks are unmistakable symptoms of the power of the new MAD: Massive Attacks of Disruption. AI is a potential multiplier in its ability to interact with these and other disruptors, to exploit inherent societal weaknesses and vulnerabilities, to create new ones, and, conversely, to help prevent their harmful effects.

Finally, unlike nuclear weapons, AI, if used correctly, will have enormous and even revolutionary benefits for humankind.

The critical question is what mechanisms can identify what former Defense Secretary Donald Rumsfeld called “the known knowns; known unknowns; and unknown unknowns” regarding AI.

A national AI commission has just completed its work. Commissions can often bury a difficult subject. The 9/11 Commission did an excellent job, but only some of its most important recommendations have been implemented. The creation of the Director of National Intelligence failed to bring about the necessary reform, as the new agencies ultimately added layers to an already bloated government bureaucracy.

These criticisms aside, instead of a new AI commission, a permanent oversight board for artificial intelligence with substantial research funding should be created to examine the societal effects of AI. Its membership must come from the public as well as from the legislative and executive branches of government.

Funding must go to the best research institutions, another parallel to nuclear weapons. During the Cold War, the Pentagon commissioned countless studies on every aspect of the nuclear balance. The same should be done for AI, but with a broader scope.

This board should also coordinate, liaise, and consult with the international community, including China, Russia, allies, friends, and others, in order to broaden intellectual exchange and pursue confidence-building measures.

By applying the lessons learned from studying the nuclear balance, not only can the potentially destructive effects of AI be mitigated; more importantly, as Einstein observed of the universe, AI, when used properly, offers nearly limitless opportunities to advance the public good.
