Is there room for ethics and the law in military AI?

As AI development continues to ramp up, researchers are figuring out if ethics and law can be embedded into AI itself.

The use of artificial intelligence (AI) has been a talking point for militaries, especially in recent years, as they ponder how much of warfare can be conducted without human involvement. This comes as little surprise given how AI capabilities have continued to expand: computers can trade stocks, translate speech, and in some cases make medical diagnoses more accurately than doctors. The idea of autonomous warfare is no longer a hypothetical casually discussed around the dinner table.

In February, the US Army launched a new initiative, the Advanced Targeting and Lethality Automated System (ATLAS), to design vehicles with AI capabilities for increased lethality, accuracy, and ground combat capability. The ATLAS plan drew widespread attention following reports that the US Department of Defense intended to upgrade the system used on ground combat vehicles to not only help human gunners aim, but also have the capacity to shoot autonomously.

While the Department of Defense has policies stating that humans will always make the final decision on whether armed robots can fire at targets, campaign groups like Stop Killer Robots have expressed fears that, without a ban on lethal autonomous weapons, the world could slide into a destabilising robotics arms race.

“Delegating life-and-death decisions to machines crosses a moral ‘red line’ and a stigma is already becoming attached to the prospect of removing meaningful human control from weapons systems and the use of force,” Stop Killer Robots said.

The state of military AI regulation

There are currently no conventions or laws at the global level that explicitly define how autonomous weapons may be used. The closest legal guidance on restricting their use is Article 36 of Additional Protocol I to the Geneva Conventions, which only requires militaries to review whether the use of new weapons, autonomous or not, would be prohibited under international law.

Factors considered when determining whether a weapon can be deployed include whether its operator knows its characteristics; is assured the weapon is appropriate to the environment in which it is deployed; and has sufficient and reliable information about the weapon to make conscious decisions and ensure legal compliance. But unless military necessity is clearly and disproportionately prioritised over humanity, states will often have leeway under Article 36 to deploy a weapons system.

The question on the table for diplomats has been whether autonomous weapons should be prohibited. Since 2014, the UN has organised forums, now called the meetings of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWS), to discuss the use of lethal autonomous weapons. In these meetings, participating states have explored topics ranging from human responsibility in the use of autonomous weapons to the reviews required before weapons can be deployed, with the aim of ensuring AI does not overturn the international legal framework for armed conflict.

At first glance, the creation of a forum focused on preserving human control in warfare is a step in the right direction. But Australia’s Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC) chief scientist and engineer Jason Scholz, who spoke with TechRepublic, said that while the forums have good intentions, they have not been comprehensive enough in discussing the systems of control that surround the use of a weapon.

“It’s not only about what can happen in terms of selecting a target and engaging it, autonomously or not, but the reliability of the weapon … the context in which it’s used, training, certification of the people and technology, authorisation, no one from a confident military sticks a weapon directly into a target without having an entire system of control,” Scholz said.

The most recent GGE on LAWS meeting took place in Geneva in late March, with over 90 countries in attendance. Much like the previous meetings, it made little progress on the regulation of lethal autonomous weapons.

The United States, United Kingdom, Australia, and Israel all put forward the view that a better understanding of the technology was needed before any restrictions were imposed.

With countries unable to reach a consensus on lethal autonomous weapons, the discussions have arguably produced more questions than answers. There is still little clarity on how to distinguish autonomous weapons from other AI-enabled military technology, or on the extent to which autonomous weapons may be used. In this way, the framework for military AI shares many parallels with the internet, where there are no clear rules of engagement for handling online attacks.

Nor are there any signs of military AI spending slowing down, with the United States reportedly spending at least $2 billion on military AI research and development, and the United Kingdom a further £160 million.

The AI conversation in society

With the AI conversation at a standstill in the military domain, little regulation has emerged on the civilian front either. Advocacy groups like the American Civil Liberties Union have been vocal about their concerns over how companies and civilians use AI, warning against the bias and deception that can come with the technology.

Where action has occurred, it has been primarily driven by employee protests at tech companies, like the ones at Google and Amazon. Amazon employees last year protested the sale of its facial recognition services to police departments. Meanwhile at Google, thousands of its employees signed a letter protesting against the company’s involvement in a Pentagon program that uses AI to improve the targeting of drone strikes.

Following the protests, Amazon and Google moved to limit the damage to their reputations. Amazon announced in a blog post that it would support government regulation of AI technologies such as facial recognition, while Google created a set of AI principles intended to safeguard against building AI systems that lead to bias. As part of the principles, Google said it would not build AI weapons, and the company later announced it would not renew its contract for the Pentagon drone program it had been criticised for taking part in.

Google also created an external AI ethics council in late March, but scrapped it just over a week later in response to another round of employee complaints. Thousands of Google employees signed a petition to remove one of the board members, Kay Coles James, over her past comments on trans people and climate change, which set off a domino effect of other board members resigning from the council. Among the issues the ethics council was set to explore was whether to work on military applications of AI.

“It’s become clear that in the current environment, [the AI ethics council] can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics,” Google said.

While Scholz did not comment on Google’s AI principles or ethics council, he said the reactionary positions taken by tech companies have skewed perceptions of military AI. By inflating fear around its use, he said, they have made it more difficult to have a public discussion about how military technology can evolve.

But even in situations where companies have taken a firm position, it is unclear whether ethical principles created by companies have any real, tangible impact. Microsoft, Facebook, and Axon—which makes stun guns for US police departments—have all created their own sets of AI principles.

Microsoft’s AI principles do not explain the company’s approach to AI in military and weapons contexts, relying instead on broad umbrella terms such as “fairness, reliability, inclusivity”. Yet according to a New York Times report, Microsoft announced in October that it would sell the United States government technologies to build more accurate drones or to compete with China for next-generation weapons.

In a 2018 report [PDF], the AI Now Institute, a research group at New York University, questioned whether such ethics initiatives are simply vehicles companies use to deflect criticism, given the lack of accountability mechanisms currently in place.

“These [ethical] codes and guidelines are rarely backed by enforcement, oversight, or consequences for deviation. Ethical codes can only help close the AI accountability gap if they are truly built into the processes of AI development and are backed by enforceable mechanisms of responsibility that are accountable to the public interest,” the report said.

Is embedding ethics into AI the solution?

Rather than focusing solely on the development of regulations and principles around the use of AI in military contexts, University of Queensland Law School associate professor Rain Liivoja told TechRepublic that an alternative solution could be embedding existing ethical and legal frameworks into the military AI itself.

Beyond treaties such as the CCW and the Geneva Conventions, there are few prescriptive laws or ethical codes for militaries to follow. For most decisions, militaries rely on the armed conflict principles of proportionality, distinction, and military necessity. Is the firepower used proportional to the military objective? Does the action sufficiently distinguish between combatants and civilians? And is the military objective necessary in the first place?

Liivoja is part of a research team from the University of Queensland and the University of New South Wales, alongside Scholz, that is undertaking an AU$9 million study into applying ethics and the law to autonomous defence systems. The five-year project, billed as the world’s biggest investment in understanding the social dimensions of military robotics and AI, will attempt to clarify the legal and ethical constraints placed on these systems, as well as the ways in which autonomy can enhance compliance with the law and with social values.

Acknowledging the tightrope that exists between violating human rights and improving a nation’s security, both Scholz and Liivoja told TechRepublic in separate conversations that there are various use cases for AI in the military that do not relate to targeting.

Among them is the use of AI to prevent weapons systems from firing at targets bearing protected symbols such as the Red Cross, Red Crescent, and Red Crystal. Firing at individuals who wear these symbols is a violation of international humanitarian law. While this type of technology is still in development, Scholz said, such preventative AI could be applied to any weapon, regardless of whether it is autonomous.

“A conventional weapon with AI that can recognise symbols may be used to direct it away or self destruct to avoid unlawful harm on something that has that protected object,” Scholz said.

“That’s an example of something that is clear-cut and potentially doable because unlike AI for ImageNet, which has to tell the difference between various objects, this would be a lot simpler as it may just need to determine whether it’s a Red Cross or not.”
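To make the idea concrete, below is a minimal, hypothetical Python sketch of the kind of protected-emblem check Scholz describes: before an engagement is authorised, target imagery is screened for a protected symbol and the engagement is blocked if one may be present. The names used here (Emblem, detect_emblem, engagement_permitted) are illustrative assumptions, and the detector is a stub standing in for a trained vision model; nothing in the sketch reflects an actual TASDCRC or military implementation.

```python
# Illustrative sketch only: a "protected-emblem gate" that blocks an
# engagement if a protected symbol (Red Cross, Red Crescent, Red Crystal)
# may be present in the target imagery. All names are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto


class Emblem(Enum):
    NONE = auto()
    RED_CROSS = auto()
    RED_CRESCENT = auto()
    RED_CRYSTAL = auto()


PROTECTED_EMBLEMS = {Emblem.RED_CROSS, Emblem.RED_CRESCENT, Emblem.RED_CRYSTAL}


@dataclass
class Detection:
    emblem: Emblem
    confidence: float  # 0.0 to 1.0, as reported by the (stubbed) detector


def detect_emblem(image_bytes: bytes) -> Detection:
    """Stand-in for a trained emblem classifier.

    A real implementation would run a vision model over the target imagery;
    here it always reports no emblem so the sketch remains runnable.
    """
    return Detection(emblem=Emblem.NONE, confidence=0.0)


def engagement_permitted(image_bytes: bytes, threshold: float = 0.2) -> bool:
    """Return False (abort or redirect) if a protected emblem may be present.

    The threshold is deliberately low: striking a protected object is far
    costlier than aborting an engagement, so the gate errs toward aborting.
    """
    detection = detect_emblem(image_bytes)
    if detection.emblem in PROTECTED_EMBLEMS and detection.confidence >= threshold:
        return False
    return True


if __name__ == "__main__":
    # With the stubbed detector this prints True; a real model reporting,
    # say, Detection(Emblem.RED_CROSS, 0.9) would cause the gate to block.
    print(engagement_permitted(b"fake image bytes"))
```

The low default threshold reflects the asymmetry implied in Scholz’s comments: a weapon that aborts unnecessarily causes far less harm than one that fires on a protected object.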

The use of AI in military contexts could also guard against human error, Scholz added, explaining that incidents like the MH17 disaster, in which a passenger plane was allegedly shot down by Russian troops, could be averted in the future through such technology.

So while an autonomous machine gun triggered by heat sensors is clearly something to be banned, the question around banning all military AI becomes much more difficult to answer if the tech is merely there to help soldiers navigate the questions around proportionality, distinction, and military necessity, rather than to make the decision for them.

While there is no quick fix to the ethical dilemmas surrounding AI, particularly in determining whether lethal autonomous weapons have a place in warfare, Liivoja said that now is the time for technology specialists to use their influence to drive the regulatory conversation, while AI development is still in its infancy.

“It’s a bit too late to start thinking about the rules once the technology has been widely adopted—the responsible thing to do is to consider the impact while the tech is being contemplated and developed and that is what we are trying to do,” Liivoja said.

“Increased focus on law and ethics around new technologies, irrespective of the walk of life the new technologies are being implemented, is required.”
