
AI Generated Kills

The use of AI on the battlefield is a double-edged sword, driving innovation while putting human lives at risk

April 26, 2024 | By Lt. General P.C. Katoch (Retd)
Illustration(s): By SP Guide Pubns
Photo(s): By Wafa (in contract with a local company / APAimages) / Wikimedia Commons
The author is a former Director General of Information Systems and a Special Forces veteran, Indian Army

 

Israel has been using AI to identify targets in Gaza for its bombing campaign

Artificial intelligence (AI) has taken the world by storm, including in the military field. In 1991, the US military used the DARPA-funded Dynamic Analysis and Replanning Tool (DART), an AI programme, to schedule the transportation of supplies and personnel and to solve other logistical problems. Military AI systems can process more data more effectively than traditional systems. With its intrinsic computing and decision-making capabilities, AI also increases combat systems' self-control, self-regulation, and self-actuation. Experts agree that future warfare will be characterised by the use of technologies enhanced with AI, especially fully autonomous systems, although AI cannot replace soldiers.

AI technologies can enhance military capabilities, but they also introduce new risks and challenges

The Indian Army demonstrated an AI-enabled swarm of 75 aerial drones in 2021 and used AI for intelligence, surveillance and reconnaissance purposes during that year's edition of Exercise 'Dakshin Shakti'. The Indian Armed Forces are leveraging AI across various domains to revolutionise threat assessment, logistics optimisation, cyber-security and surveillance using AI-enabled platforms and robots. Similar to AI's disruptive impact on the economy, these military advancements are poised to redefine the military landscape in the near future.

At the same time, while AI technologies can enhance military capabilities, they also introduce new risks and challenges. Machines making life-and-death decisions without human intervention could lead to unpredictable and devastating consequences through intentional or accidental mismanagement. Sole reliance on machine intelligence in critical situations could lead to unintended actions or inappropriate targeting, causing collateral damage and violating international law. Moreover, military AI systems can be vulnerable to cyber-attacks.

Manipulation of the data used by AI systems remains a real possibility. Adversaries could feed false information into AI algorithms, leading to incorrect assessments and decisions and putting both military personnel and civilians at risk. An adversary who disrupts communications during critical military operations can degrade both offensive and defensive capabilities. And as more and more automated weapon systems are adopted, cyber-attacks could even turn them against the armies that field them.

AI is only as unbiased as the data and people training the programmes

Finally, AI is only as unbiased as the data and the people training the programmes. Therefore, if the data is flawed, partial or biased in any way, the resultant AI will also be biased; the two main types of AI bias are "data bias" and "societal bias". A recent example of "data bias" appears to have occurred in the ongoing Gaza War.
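The mechanics of "data bias" are easy to demonstrate. Below is a minimal sketch in Python (a hypothetical illustration, not any fielded system): a simple threshold classifier is trained on data in which one population is outnumbered 100 to 1, so the learned threshold is tuned almost entirely to the majority group and fails badly on the minority group.

```python
# Minimal "data bias" sketch (hypothetical, illustrative only): a threshold
# classifier trained on data where Group B is under-represented ends up with
# a far higher error rate for Group B than for Group A.
import random

random.seed(42)

def make_samples(n, baseline):
    """Generate (score, label) pairs; a true positive shifts the score up by 2."""
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        score = random.gauss(baseline + (2.0 if label else 0.0), 1.0)
        data.append((score, label))
    return data

# Group A (baseline 0.0) dominates the training set;
# Group B (baseline 3.0) is barely present.
train = make_samples(10_000, baseline=0.0) + make_samples(100, baseline=3.0)

# "Training": pick the threshold that minimises errors on the pooled data.
candidates = [t / 10 for t in range(-20, 60)]
threshold = min(candidates,
                key=lambda t: sum((s > t) != lab for s, lab in train))

def error_rate(samples):
    return sum((s > threshold) != lab for s, lab in samples) / len(samples)

print(f"learned threshold: {threshold:.1f}")
print(f"error rate, Group A: {error_rate(make_samples(5_000, 0.0)):.1%}")
print(f"error rate, Group B: {error_rate(make_samples(5_000, 3.0)):.1%}")
```

On a run of this sketch, Group A's error rate sits near the achievable minimum while Group B's approaches 50 per cent, because nearly every Group B non-target scores above the majority-tuned threshold. The groups, baselines and thresholds here are invented for illustration; the point is only that a system calibrated on skewed data is systematically wrong about whoever the data under-represents.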

It has now emerged that the IDF has been using AI to identify targets for its bombing spree in Gaza. According to an investigation report, the 'Lavender' programme was developed by Unit 8200, an elite intelligence division of the IDF. The Lavender system is said to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential targets, including low-ranking individuals. The software analyses data collected through mass surveillance on most of Gaza's 2.3 million residents, assessing and ranking the likelihood of each person's involvement in the military wing of Hamas or PIJ. Individuals are given a rating from 1 to 100, indicating their likelihood of being a militant.

Palestinians inspect the ruins of Watan Tower, destroyed in Israeli airstrikes in Gaza City, on October 8, 2023.

As per the report, the AI machine has an error rate of 10 per cent, yet its outputs were treated "as if it were a human decision". Citing six Israeli intelligence officials involved in the alleged programme, the investigation revealed that during the initial stages of the conflict in Gaza, the military relied heavily on Lavender, which identified as many as 37,000 (mostly junior) Palestinians as "suspected militants" for potential airstrikes and assassination. The officials alleged that human review of the suggested targets was cursory at best, and that approval "to automatically adopt Lavender's kill lists" was reportedly given about two weeks into the war.
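To put the reported figures in perspective, here is a back-of-the-envelope calculation; the numbers come from the investigation, but the arithmetic is ours, not the report's:

```python
# What a 10 per cent error rate implies for a list of 37,000 names
# (figures as reported by the investigation; the calculation is illustrative).
flagged = 37_000      # people marked by Lavender, per the investigation
error_rate = 0.10     # error rate attributed to the system in the report

wrongly_flagged = flagged * error_rate
print(f"Expected wrongly flagged: {wrongly_flagged:,.0f}")
# -> Expected wrongly flagged: 3,700
```

In other words, an error rate that sounds modest in the abstract translates into thousands of people potentially misidentified once the system operates at scale.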

The investigation report reads, "From that moment, sources said that if Lavender decided an individual was a militant in Hamas, they were essentially asked to treat that as an order, with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based." A senior official identified as 'B' confirmed that officers were not required to review the AI system's assessments "in order to save time and enable the mass production of human targets without hindrances".

The Lavender system being used by Israel is said to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential targets

As per the report, two sources alleged that in an unprecedented move during the first weeks of the war, the Israeli army decided that for every junior Hamas operative Lavender marked, it was permissible to kill up to 15 or 20 civilians; previously, the military had not authorised any "collateral damage" when targeting low-ranking militants. The army also reportedly authorised, on several occasions, the killing of more than 100 civilians per assassination.

Call it the employment of AI machines with an error rate of 10 per cent, or wilful "data bias"; this is how an Israeli airstrike in Gaza ended up killing seven aid workers (six foreign nationals and one Palestinian), resulting in an international outcry and calls for an investigation. Faced with the international outrage, Herzi Halevi, Chief of Staff of the IDF, said it was a "grave mistake" that resulted from "misidentification" - by Lavender?