By some assessments, artificial intelligence is having a significant and unsettling impact on Israel’s assault on Gaza.
Recent investigative reports allege that, by letting an AI program take the lead in targeting thousands of Hamas operatives, the Israeli military may have contributed to reckless and inaccurate killings, widespread destruction, and thousands of civilian deaths in the early stages of the conflict. The IDF categorically denies this.
Experts say the reports offer a chilling glimpse of the future of warfare, and show how badly things can go wrong when humans cede life-and-death decisions to emerging technologies such as artificial intelligence (AI).
Mick Ryan, a retired Australian major general and strategist who focuses on the evolution of warfare, said the decision to take a human life is an immensely serious one, and it has been the central point of contention in every discussion of autonomous systems, artificial intelligence, and lethality in combat.
According to a joint investigation published earlier this month by +972 Magazine and Local Call, which drew on interviews with six unnamed Israeli intelligence officers, the Israel Defense Forces has been using an AI algorithm called “Lavender” to generate suspected Hamas targets in the Gaza Strip.
According to the investigation, the IDF leaned heavily on Lavender and treated its output on whom to kill “as if it were a human decision,” sources said. Once a Palestinian was linked to Hamas and their residence located, sources said, the IDF took only seconds to verify the machine’s conclusion.
The joint investigation found that Israel’s rapid targeting showed little effort to minimize harm to nearby civilians.
Details of another Israeli AI program, known as the Gospel, came to light last fall. That system reportedly increased Israel’s target-generation capacity from roughly 50 targets a year to more than 100 a day.
Asked about the Lavender report, the IDF pointed to a statement posted on X by Lt. Col. Nadav Shoshani, an IDF spokesperson, who wrote last week that the IDF does not use AI algorithms to choose targets for attack, and that any claim to the contrary reflects a lack of understanding of its procedures.
Shoshani described the system as a cross-referencing database meant to support human analysis, not replace it. Even so, risks remain.
Israel is far from the only nation exploring AI’s potential in war, and the use of unmanned systems is growing, as the fighting in Ukraine and elsewhere regularly shows. In this domain, fears of “killer robots” are no longer science fiction.
According to Peter Singer, a future-of-warfare expert at the New America think tank, we are living through a new industrial revolution, and like the last one, driven by mechanization, it is transforming our world for better or worse. Just as AI is becoming more commonplace in our work and personal lives, so too is it in our wars.
AI is growing more quickly than the controls to keep it in check.
Experts say Israel’s reported use of Lavender raises a number of issues that have long been at the center of the debate over artificial intelligence in warfare.
Several nations, notably the US, China, and Russia, have made integrating AI into their armed forces a priority. The US Project Maven is one example: since 2017 it has made significant progress in helping troops on the ground sort through massive volumes of incoming data. But governments frequently cannot keep up with the technology’s rapid advance.
Ryan says there is a general tendency for technology and battlefield necessity to take precedence over the ethical and legal concerns surrounding the use of AI in war. Put simply, things are moving too fast.
Ryan said that bureaucracies and current approaches to government oversight simply cannot keep up with these developments, and may never be able to.
At a United Nations meeting last November, several nations voiced concern about the need for new laws to govern lethal autonomous weapons, AI-driven systems that make the decision to kill.
But several countries, particularly those at the forefront of developing and deploying these technologies, were hesitant to accept such limits. The US, Russia, and Israel in particular appeared reluctant to back new international rules on the subject.
As Paul Scharre, an autonomous weapons specialist at the Center for a New American Security, put it, many militaries have said, “Trust us, we’ll be responsible with this technology.” But an absence of outside controls is unlikely to win over many people, and the way some nations, Israel among them, have used AI raises doubts about how responsibly their armed forces will apply this cutting-edge technology.
According to Scharre, a program like Lavender is not science fiction; it is very much in line with how militaries worldwide are trying to employ artificial intelligence.
He explained that a military works through a process of gathering information, evaluating it, making sense of it, and deciding which targets to attack. Those targets could be military hardware such as artillery pieces or tanks, or individuals who are part of an insurgent network or organization.
The final stage is combining all of that data into a targeting plan, matching targets to specific platforms or weapons, and executing the plan.
That process takes time, and Scharre speculated that Israel may have felt pressure to generate a large number of targets quickly.
Experts have also raised concerns about the accuracy of these AI targeting systems. According to the reports, the Lavender program draws on a range of data sources, including social media and phone activity, to identify its targets.
According to sources cited by +972 Magazine and Local Call, the program’s roughly 90% accuracy rate was considered acceptable. The remaining 10% is the obvious problem: given the scope of Israel’s air campaign and the sharp increase in targets generated by AI, that error rate adds up to a sizable number of mistakes.
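To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python. The flagged-target counts are hypothetical, chosen purely for illustration; the only figure taken from the reporting is the roughly 90% accuracy rate.

    # Back-of-the-envelope sketch: how a ~10% error rate scales with target volume.
    # The flagged-target counts are hypothetical illustrations, not reported figures.
    ACCURACY = 0.90  # accuracy rate attributed to Lavender in the reporting

    for flagged in (1_000, 10_000, 30_000):
        expected_errors = flagged * (1 - ACCURACY)
        print(f"{flagged:>6,} people flagged -> ~{expected_errors:,.0f} likely misidentified")

Even at these illustrative volumes, a 10% error rate implies hundreds to thousands of people wrongly marked as targets.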
Furthermore, AI is constantly evolving, for better or worse. These programs learn from each use and apply that experience to future decisions. Ryan noted that Lavender’s machine learning may be reinforcing both its correct and its incorrect identifications, given the 90% accuracy rate cited in the reports. “We just don’t know,” he said.
Letting AI make military decisions
In the midst of combat, AI and humans may work together to assess enormous volumes of data and recommend possible courses of action. But several things can undermine that collaboration.
There may simply be too much information for people to absorb. An AI program churning through massive amounts of data to generate lists of potential targets can quickly overwhelm the humans involved and squeeze them out of any meaningful role in decision-making.
There is also the risk of moving too fast and drawing hasty conclusions from the data, which raises the probability of errors.
Ruben Stewart, a military and armed group adviser at the International Committee of the Red Cross, and Georgia Hinds, a legal adviser there, wrote about this issue back in October 2023.
One of the most frequently cited military benefits of AI, they wrote, is the increased speed with which a user can make decisions compared with an adversary. But increased tempo often poses a greater threat to civilians, which is why techniques that slow the tempo, such as “tactical patience,” are used to reduce civilian casualties.
In the push for speed, humans may take their hands off the wheel and rely on AI with little or no oversight.
The +972 Magazine and Local Call reports state that AI-selected targets were reviewed for only about 20 seconds before an attack was approved, usually just to confirm that the prospective target was male.
The reporting raises major concerns about the extent to which a human being was actually “in the loop” in the decision-making process. Singer notes that it may be an illustration of what is sometimes called “automation bias,” in which a human convinces themselves that because a machine gave an answer, it must be true.
Singer continued, “So even though a human is ‘in the loop,’ they aren’t doing the job that is assumed of them.”
In October of last year, International Committee of the Red Cross President Mirjana Spoljaric and UN Secretary-General António Guterres jointly called on militaries to act urgently to preserve human control over the use of force in conflict.
Decisions over life and death must remain under human control, they said; allowing machines to target people on their own is a moral line that must not be crossed, and international law should prohibit machines with the ability and discretion to end human life without human involvement.
Despite those concerns, AI has the potential to be very useful to militaries. It can, for example, help humans process vast amounts of data from many sources so they can make well-informed decisions and weigh a range of options for handling a given situation.
A meaningful “human in the loop” partnership can help, but in the end it comes down to the human holding up their end of the bargain: maintaining authority and control over the AI.
Ryan, the retired major general, noted that humans have always used tools and machines. Whether operating an airplane, a ship, or a tank, we have been the masters of the machinery.
But he said many of these new autonomous systems and algorithms will have militaries “partnering with” machines rather than simply using them.
Many militaries are unprepared for such a change. Within the next decade, military organizations may come to find that they have more unmanned systems than humans, Ryan and Clint Hinote noted in a commentary published by War on the Rocks earlier this year.
“Currently, military institutions’ tactics, training, and leadership models are intended for combat organizations that are primarily human, with those humans exercising close control over the machines,” they wrote.
Adapting education and training so that people learn to work with machines, rather than merely use them, is a necessary but difficult cultural shift, they argued, and one that many militaries have yet to make.