The future of warfare may involve advanced artificial intelligence (AI) algorithms with the ability, and the authority, to assess situations and engage enemies without a human controlling every robot or drone involved in the operation.
It might sound like a scenario from sci-fi movies such as the Terminator and Matrix series, in which technology has advanced to the point where a computer takes matters into its own hands during an armed conflict. In the movies, AI usually ends up attacking humans. In real life, AI might help the military conduct operations in which independent human control over each drone would slow down the mission. One obvious downside is that the enemy might employ similarly sophisticated technology.
The Pentagon is already studying combat scenarios in which AI would be allowed to act of its own accord based on orders a human issued. Wired describes one such drill, which took place near Seattle last August.
Several dozen military drones and tank-like robots were deployed with a simple mission: find terrorists suspected of hiding among several buildings. The number of robots involved made it impossible for a human operator to keep an eye on all of them, so they were instructed to find, and when necessary eliminate, enemy combatants on their own.
Run by the Defense Advanced Research Projects Agency (DARPA), the drill equipped the robots with radio transmitters rather than actual weapons, which they used to simulate interactions with hostile entities.
The drones and robots, each about the size of a large backpack, shared an overall objective and used AI algorithms to devise plans of attack. Some surrounded buildings while others carried out surveillance; some identified beacons designating enemy combatants, and others were destroyed by simulated explosives.
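To make the division of labor concrete, here is a minimal, hypothetical sketch of how a swarm might split a single human-issued objective into per-robot tasks. This is a toy greedy allocator, not DARPA's actual software; all names, coordinates, roles, and numbers are illustrative assumptions.

```python
# Toy sketch: greedy task allocation for a drone swarm.
# All identifiers and values below are hypothetical illustrations,
# not a description of any real military system.
import math
import random

BUILDINGS = [(0, 0), (40, 10), (15, 35)]        # assumed target sites
ROLES = ["surround", "surveil", "tag_beacon"]   # roles like those in the drill

def distance(a, b):
    # Euclidean distance between two (x, y) points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def allocate(drones, buildings):
    """Greedily assign each drone the nearest uncovered (building, role) task.

    A human issues the high-level objective (cover every building in every
    role); the per-drone assignment then runs without human supervision.
    """
    tasks = [(b, r) for b in buildings for r in ROLES]
    assignments = {}
    for drone_id, pos in drones.items():
        if not tasks:
            break  # more drones than tasks; remaining drones stay idle
        best = min(tasks, key=lambda t: distance(pos, t[0]))
        tasks.remove(best)
        assignments[drone_id] = best
    return assignments

if __name__ == "__main__":
    random.seed(1)
    # A swarm too large for one operator to supervise drone-by-drone.
    swarm = {f"drone-{i}": (random.uniform(0, 50), random.uniform(0, 50))
             for i in range(12)}
    for drone, (building, role) in allocate(swarm, BUILDINGS).items():
        print(f"{drone}: {role} at building {building}")
```

The point of the sketch is the control structure, not the math: the operator specifies the objective once, and the allocation scales with swarm size, which is exactly the property that makes per-drone human control impractical in drills like this one.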
This was just one of several AI drills conducted last summer to simulate adding automation to military systems for situations too complex and fast-moving for humans to make every critical decision along the way.
The Wired report explains there’s increasing interest at the Pentagon in giving autonomous weapons a degree of latitude in executing orders. A human would still make the high-level decisions, but AI could adapt to the situation on the ground better and faster than humans can. Wired also points out that a report from the National Security Commission on Artificial Intelligence (NSCAI) recommended this May that the US resist calls for an international ban on the development of autonomous weapons.
Even so, the debate over using AI weapons in military operations isn’t settled, with some arguing that the same algorithms the US might employ to power swarms of drones and robot tanks could also fall into the hands of adversaries.
“Lethal autonomous weapons cheap enough that every terrorist can afford them are not in America’s national security interest,” MIT professor Max Tegmark told Wired. Tegmark, co-founder of the Future of Life Institute, a non-profit that opposes autonomous weapons, added, “I think we’ll one day regret it even more than we regret having armed the Taliban.” He said that AI weapons should be “stigmatized and banned like biological weapons.”