Drones with AI controls that can decide for themselves whether to kill human targets are getting closer to becoming a reality.
Countries like the US, China, and Israel are developing lethal autonomous weapons that can use AI to choose their targets.
Critics say deploying these so-called “killer robots” would be a troubling development, leaving life-and-death military decisions to machines without human input.
At the UN, the US is one of several countries, including Russia, Australia, and Israel, opposing moves toward a binding resolution that would restrict the use of AI killer drones; these countries are reportedly pushing for a non-binding resolution instead.
Alexander Kmentt, Austria’s lead negotiator on the matter, said he believed this to be one of humanity’s most important turning points: “What role do humans play in the use of force? This is a legal, ethical, and fundamental security concern.”
According to a notice released earlier this year, the Pentagon is aiming to deploy tens of thousands of AI-enabled drones.
During an August speech, US Deputy Secretary of Defense Kathleen Hicks stated that the US would be able to counter China’s People’s Liberation Army’s (PLA) numerical advantage in both people and weapons through the use of technology such as AI-controlled drone swarms.
They will match the PLA’s mass with mass of their own, she said, and theirs will be harder to anticipate, hit, and defeat.
Air Force Secretary Frank Kendall said AI drones must be able to make lethal decisions under human oversight. The difference between winning and losing, he said, is whether you make individual decisions, and you won’t lose.
In October, New Scientist reported that Ukraine had already begun using AI-controlled drones on the battlefield against the Russian invasion, though it is unclear whether any of these drones have caused human casualties.