Drones Are The Real AI Weapons

One terrifying catalyst for weapons races is war. Before Russia’s full-scale invasion of Ukraine two years ago, land mines and cluster munitions were the subject of intense controversy, and numerous states had pledged not to deploy them. But when the urge to win becomes overwhelming, governments can shed their reservations and embrace once-controversial technologies wholeheartedly. The war between Russia and Ukraine has swept away whatever qualms either nation may have had about the military application of AI. Both sides are sending millions of unmanned aerial vehicles, or UAVs, to spy on and strike enemy positions, and AI is being used extensively to guide these drones. Some drones are built from modest, straightforward kits available from private manufacturers; others are more sophisticated attack weapons, such as the Iranian-made Shaheds that Russia has used widely in its assault on Ukraine this winter. As these fleets grow, it will become increasingly difficult for human operators to keep an eye on all of the drones a country’s military uses.

Many people find the idea of letting computer algorithms operate deadly weapons unsettling. Allowing machines to choose which targets to fire at, and when, could have terrifying consequences for civilians, and it ought to spark a searching moral debate. In practice, however, war cuts those conversations short. Both Russia and Ukraine are racing to harness AI to their advantage. Other nations will likely make the same calculation, which is why the current conflict offers an early preview of many battles to come, including a potential one between the United States and China.

Even before the Russian invasion, the Pentagon made a point of keeping humans in the decision-making loop before lethal force is used. But with AI-guided drones proliferating above and behind Ukrainian and Russian lines, and with these weapons rapidly gaining precision and potency, military planners everywhere may eventually confront scenarios that were previously unimaginable.

The possibility of AI in combat was unsettling long before it reached actual battlefields. In the popular 1983 movie WarGames, Matthew Broderick and Ally Sheedy averted an AI-driven nuclear war. In the film, the U.S. military, worried that people, with their unstable emotions and nagging consciences, might lack the nerve to launch nuclear weapons if such an order ever came, had handed control of the country’s strategic nuclear arsenal to an artificially intelligent supercomputer known as WOPR, for War Operation Plan Response. Broderick’s character, a teenage computer hacker, unintentionally tricked the system into believing the United States was under attack when it wasn’t, and only human intervention got around the machine before it launched a reprisal that would have wiped out life on the planet.

For the next four decades, the debate over AI-controlled weapons ran along similar lines. In February 2022, the same month Russia began its full-scale invasion, the Bulletin of the Atomic Scientists published an essay titled “Giving an AI Control of Nuclear Weapons: What Could Possibly Go Wrong?” The essay’s answer, in effect, was: everything. The author, Zachary Kallenborn, opened with the statement, “All of us could be dead if artificial intelligences controlled nuclear weapons.” The primary concern was that AI might err because of flaws in its programming or in the data it was meant to act on.

For all the attention paid to nukes controlled by a single, godlike system such as WOPR, the Russo-Ukrainian war demonstrates that AI’s real impact lies in enabling thousands of small, conventionally armed systems, each with its own programming that allows it to carry out missions without a human guiding its path. One of the deadliest Russian drones, from the Ukrainians’ perspective, is the Lancet-3, a “kamikaze” loitering munition that is small, extremely nimble, and difficult to detect, let alone shoot down. A Lancet costs only about $35,000, yet it can cripple battle tanks and other armored vehicles that cost millions of dollars apiece. One November article on Russia’s use of Lancets noted that “drone technology often depends on the skills of the operator,” but that Russia is believed to be incorporating more artificial intelligence into these drones so that they can operate autonomously.

The AI in question reportedly functions only thanks to Western components, which Russia obtains through intermediaries to circumvent sanctions. A drone equipped with target-detection technology is believed to be able to distinguish among the shapes it encounters in flight, including vehicles and similar objects. Once the AI recognizes a shape as typical of a Ukrainian weapon system, such as the distinctive German-made Leopard battle tank, the drone’s onboard computer can direct the Lancet to attack it, even adjusting the angle of attack to inflict maximum damage.

To put it another way, each Lancet carries its own miniature WOPR.
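
To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of detect-classify-engage loop described above. Nothing about the actual Lancet software is public; the confidence threshold, target library, and attack angles below are invented purely for illustration.

```python
# A minimal, hypothetical sketch of a detect-classify-engage loop.
# Nothing about the actual Lancet software is public; the threshold,
# target library, and attack angles here are invented.

from dataclasses import dataclass
from typing import Optional

# Hypothetical confidence the classifier must reach before the drone
# commits to an attack rather than continuing to loiter.
ATTACK_CONFIDENCE = 0.85

# Hypothetical library of silhouettes the onboard model recognizes,
# each paired with a dive angle chosen to maximize damage.
KNOWN_TARGETS = {
    "leopard_2": {"dive_angle_deg": 70},  # steep dive onto thin top armor
    "howitzer": {"dive_angle_deg": 45},
    "supply_truck": {"dive_angle_deg": 30},
}

@dataclass
class Detection:
    label: str         # shape class reported by the vision model
    confidence: float  # classifier confidence, 0..1

def classify_frame(frame) -> Detection:
    """Stand-in for an onboard vision model that matches shapes in the
    camera feed against known vehicle silhouettes. A real system would
    run a neural network here; this stub returns a fixed detection so
    the sketch is runnable."""
    return Detection(label="leopard_2", confidence=0.92)

def decide(detection: Detection) -> Optional[dict]:
    """Return an engagement plan, or None to keep loitering."""
    target = KNOWN_TARGETS.get(detection.label)
    if target is None or detection.confidence < ATTACK_CONFIDENCE:
        return None  # unrecognized or uncertain shape: keep searching
    return {"target": detection.label,
            "dive_angle_deg": target["dive_angle_deg"]}

if __name__ == "__main__":
    plan = decide(classify_frame(frame=None))
    print(plan or "no engagement")
```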

The Ukrainians are competing hard in the AI race as well. Lieutenant General Ivan Gavrylyuk, Ukraine’s deputy defense minister, recently briefed a French legislative delegation on his country’s efforts to integrate artificial intelligence into its French-made Caesar self-propelled howitzers. The AI, he explained, would speed up the process of locating targets and selecting the most effective type of munition to use against them. If Ukrainian artillery crews can pinpoint a Russian battery before the Russians can do the reverse, those extra moments can be the difference between life and death. This kind of AI-driven optimization can also conserve a great deal of firepower: according to Gavrylyuk, it could cut the country’s ammunition consumption by up to 30 percent, an enormous benefit at a time when the U.S. Congress is starving Ukraine of ammunition.
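
As a rough, hypothetical illustration of what “selecting the most effective munition” might mean in software, consider the toy Python sketch below. The round types, kill probabilities, and stock levels are invented; the actual Ukrainian targeting software is not public.

```python
# A toy, hypothetical illustration of munition selection: pick the
# in-stock round type expected to neutralize a located target with the
# fewest shells. All round types, kill probabilities, and stock levels
# are invented for illustration.

import math

# Hypothetical single-round kill probabilities, keyed by
# (round type, target type).
KILL_PROB = {
    ("precision_guided", "artillery_battery"): 0.70,
    ("standard_he", "artillery_battery"): 0.15,
    ("precision_guided", "dug_in_infantry"): 0.50,
    ("standard_he", "dug_in_infantry"): 0.25,
}

# Hypothetical rounds on hand at the battery.
STOCK = {"precision_guided": 40, "standard_he": 500}

def rounds_needed(p_kill: float, desired: float = 0.9) -> int:
    """Shells required for the cumulative kill probability to reach
    `desired`, assuming independent shots: 1 - (1 - p)^n >= desired."""
    return math.ceil(math.log(1 - desired) / math.log(1 - p_kill))

def choose_round(target_type: str):
    """Return (shell count, round type) minimizing shells expended."""
    options = []
    for (round_type, tgt), p in KILL_PROB.items():
        if tgt != target_type:
            continue
        n = rounds_needed(p)
        if STOCK.get(round_type, 0) >= n:  # only consider rounds in stock
            options.append((n, round_type))
    return min(options) if options else None

if __name__ == "__main__":
    # Two precision rounds beat roughly fifteen standard rounds against
    # the same target: the kind of saving behind an up-to-30-percent
    # reduction in ammunition use.
    print(choose_round("artillery_battery"))
```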

The AI weapons Russia and Ukraine are fielding today are only the beginning of what will soon appear on battlefields everywhere. The world’s two most powerful militaries, the United States and China, are undoubtedly trying to draw lessons from the ongoing conflict. One of the United States’ most ambitious AI-driven efforts, the Replicator project, has been widely discussed over the past two years. At a September press conference, Deputy Defense Secretary Kathleen Hicks explained that Replicator aims to use self-guiding technology to “help overcome China’s advantage in mass.” She described armies of aerial drones and self-driving vehicles that would go into combat alongside American forces and take over many tasks now performed by human soldiers.

These AI-driven machines might carry supplies, patrol ahead of troops to guard American forces, and run on solar power to eliminate the need for refueling. Hicks also implied, though less directly, that similar drone forces could be used to attack adversaries. In September she set an aggressive timetable, saying she intended to have Replicator operating in some capacity within two years.

Initiatives like Replicator will inevitably raise the question of how drastically the human role in combat should shrink. If China and the United States can each field thousands, or even millions, of AI-driven units that can attack, defend, scout, and resupply, what place does human decision-making have in such a conflict? How many people would die in clashes between opposing drone swarms? The ethical dilemmas are many, but when war comes, the pursuit of military advantage typically overrides them.

Over the longer term, AI’s rapid advance may force even the strongest militaries to change significantly how they staff and equip themselves. If combat drones can operate fully autonomously, or be flown by human operators from far away, what future is there for human-piloted fixed-wing aircraft? A human operator limits how long an aircraft can stay aloft, requires the airframe to be large enough to accommodate one or more people, and demands sophisticated systems to keep those people alive and well. In 2021, a British company was awarded an $8.7 million contract just to supply the explosive charges for the pilot-ejection seats on various aircraft; that deal did not include the seats themselves, and developing, installing, and maintaining the full seat systems presumably costs far more. And a very expensive warplane consists of much more than its ejection seats.

By comparison, a highly effective $35,000 AI-guided drone is a steal. The fictitious WOPR nearly sparked a nuclear war; real-life AI systems keep getting cheaper and more capable. AI warfare is not going away.
