This week the U.S. State Department will hold the first gathering of parties to an international accord on artificial intelligence (AI), with an initial focus on military uses of the technology.
Mark Montgomery, senior director of the Center on Cyber and Technology Innovation at the Foundation for Defense of Democracies, said it is admirable that the State Department is leading conversations about the ethical use of AI in military applications.
“Given that this is essentially a voluntary alliance of disparate nations, I try not to read too much into it,” he said. “Sharing knowledge is the focus here, not creating policies. Most obviously, the nations whose military applications of AI should concern us most are absent.”
Several prominent countries, including China, Russia, Saudi Arabia, Brazil, Israel, and India, did not sign the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which 53 governments endorsed last year.
Forty-two of the signatories will attend this week’s conference, where more than 100 participants with military and diplomatic backgrounds will discuss the military uses of AI that have emerged over the past few years.
A State Department official told Breaking Defense that the department genuinely wants a framework that keeps states focused on responsible AI and on developing practical capabilities.
This week’s conference is meant to set the stage for future ones, with participants returning annually to discuss the latest developments for as long as necessary.
Between those sessions, the department encourages signatories to meet and debate new ideas or run war games with cutting-edge AI technology, “anything to build awareness of the issue and take concrete steps” toward carrying out the declaration’s objectives.
The official said the list of nations endorsing the declaration reflects how much the department values a variety of viewpoints and experiences, and that it is grateful for the depth and breadth of support the Political Declaration has received.
Compared with the use of AI in military and international security contexts, concerns about disinformation and job displacement are the lesser worry. Bonnie Jenkins, the undersecretary of state for arms control and international security affairs, addressed the subject in a recent speech at the Georgia Institute of Technology.
The Biden administration has made the safe, secure, and reliable development and use of AI a top priority, and for good reason, according to Jenkins.
AI has the potential to bring both great good and great harm. It can help transform modern medicine, improve agricultural practices, alleviate global food insecurity, and mitigate the effects of climate change.
Without the right controls, and even in the hands of well-intentioned actors, AI has the potential to exacerbate dangers, escalate hostilities, and upset the balance of international security, Jenkins cautioned. No one can forecast how AI will develop or what it will be capable of in five years, she said.
Nevertheless, she said, regardless of how the technology advances, there are steps that can be taken in the meantime to put rules in place and build the technical capabilities needed for responsible development and use.