Exploring AI’s involvement in the defence industry.

Alexander Kmentt doesn’t pull his punches: “Humanity is about to cross a threshold of absolutely critical importance,” he warns.

The disarmament director at the Austrian Foreign Ministry is discussing autonomous weapons systems (AWS), emphasizing that the technology is advancing faster than the regulations overseeing it. He warns that the window for effective regulation is closing rapidly.

In the defence sector, a wide range of AI-assisted tools is either in development or already in operational use. Several companies have made claims about the degree of autonomy their systems can currently achieve.

A German arms manufacturer says that one of its vehicles, capable of autonomously locating and destroying targets, carries no technical restrictions on its autonomy. In effect, whether the machine may fire without human input is left to the customer.

An Israeli weapons system has previously shown the ability to identify individuals as threats based on the presence of a firearm, although, like humans, these systems can make errors in threat detection.

Athena AI, an Australian company, has introduced a system capable of detecting individuals wearing military attire and carrying weapons, mapping them for situational awareness. According to Stephen Bornstein, the CEO of Athena AI, populating a map for situational awareness is currently the primary use case for their technology.

“We have done it our way with AI on the loop by design to be absolutely sure that AI doesn’t make a decision to target anything without a human involved to review the information and decide if the target is a legitimate target. We are talking about an AI that helps a human decide if a target is legitimate.”
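
Athena AI has not published implementation details, but the “AI on the loop” pattern Mr Bornstein describes can be sketched in code. In the hypothetical Python sketch below (all class and field names are invented for illustration), the model may only add detections to a review queue; nothing is confirmed onto the situational map without an explicit human decision:

```python
# A minimal, hypothetical sketch of the "AI on the loop" pattern: the model
# proposes detections for a situational-awareness map, and every proposal
# waits for an explicit human decision. Names and values are illustrative.
from dataclasses import dataclass


@dataclass
class Detection:
    lat: float
    lon: float
    label: str            # e.g. "person, military uniform, armed"
    confidence: float
    human_reviewed: bool = False
    confirmed: bool = False


class SituationMap:
    def __init__(self):
        self.pending: list[Detection] = []
        self.confirmed: list[Detection] = []

    def ingest(self, det: Detection) -> None:
        # The AI may only *add to the review queue*; it cannot confirm targets.
        self.pending.append(det)

    def human_review(self, det: Detection, is_legitimate: bool) -> None:
        # Only this call, made by a human operator, moves a detection onward.
        det.human_reviewed = True
        det.confirmed = is_legitimate
        self.pending.remove(det)
        if is_legitimate:
            self.confirmed.append(det)


smap = SituationMap()
smap.ingest(Detection(48.2, 16.4, "person, military uniform, armed", 0.91))
for det in list(smap.pending):
    smap.human_review(det, is_legitimate=False)  # the operator rejects this one
print(len(smap.confirmed), "confirmed detections")
```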

However, many current military applications of AI are far more mundane.

They include military logistics and the gathering and processing of data for intelligence, surveillance, and reconnaissance.

C3 AI is one of several companies working on military logistics. Although chiefly a civilian company, it counts the US military among the users of its systems.

Its predictive maintenance system for the US Air Force, for example, draws on inventories, service histories, and the roughly ten thousand sensors that a single bomber may carry.

According to Tom Siebel, chief executive of C3 AI, “we can look at those data and identify device failures in advance, repair them before they fail, and eliminate unscheduled downtime”.

The company claims that its AI analysis has reduced unscheduled maintenance by about 40% for the systems it monitors.
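
C3 AI has not disclosed its models, but the general idea behind such predictive maintenance can be illustrated. The following Python sketch, using synthetic data and invented features, trains a classifier on historical sensor snapshots and flags aircraft whose predicted failure risk crosses a threshold, so they can be serviced before an unscheduled breakdown:

```python
# A minimal, hypothetical sketch of predictive maintenance: train a classifier
# on historical sensor readings labelled with whether a component later failed.
# All features and data here are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic history: rows are maintenance snapshots, columns stand in for
# sensor features (e.g. vibration, temperature, hours since last service).
n = 5000
X = rng.normal(size=(n, 3))
# Invented ground truth: failures grow more likely when the first two
# readings run high together.
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Flag snapshots whose predicted failure probability crosses a maintenance
# threshold, turning unscheduled breakdowns into scheduled servicing.
risk = model.predict_proba(X_test)[:, 1]
flagged = np.flatnonzero(risk > 0.8)
print(f"{len(flagged)} of {len(X_test)} snapshots flagged for preemptive maintenance")
```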

Mr Siebel maintains that the technology has become sophisticated enough to generate such forecasts reliably, even allowing for human error in the underlying records.

He argues that AI is indispensable given the complexity of modern warfare, pointing to drone swarms as an example. “There’s no way you can coordinate swarm behavior without using AI,” Mr Siebel says.
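
Mr Siebel does not say which techniques he has in mind, and real drone swarms involve far more than this, but the classic “boids” flocking rules give a toy sense of why swarm coordination is an algorithmic problem at all. The Python sketch below is purely illustrative:

```python
# A toy illustration of decentralized swarm coordination using the classic
# "boids" rules (cohesion, separation, alignment). Real drone swarms are far
# more complex; this only shows coordination emerging from simple local rules.
import numpy as np

rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, size=(20, 2))   # 20 agents on a 100x100 field
vel = rng.normal(size=(20, 2))

def step(pos, vel, dt=0.1):
    centre = pos.mean(axis=0)
    cohesion = centre - pos                      # steer toward the group's centre
    # Separation: push away from neighbours, weighted by inverse-square distance.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pos))
    separation = (diff / dist[..., None] ** 2).sum(axis=1)
    alignment = vel.mean(axis=0) - vel           # match the average heading
    vel = vel + dt * (0.01 * cohesion + 0.5 * separation + 0.05 * alignment)
    return pos + dt * vel, vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("swarm spread after 100 steps:", pos.std(axis=0))
```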

Anzhelika Solovyeva, a specialist in the Department of Security Studies at Charles University in Czechia, adds that such machines can “boost situational awareness of human pilots in manned aircraft” and open the way to autonomous aerial vehicles.

Yet it is the use of AI in weapons themselves that causes the most concern.

Catherine Connolly, automated decision research manager at the campaign network Stop Killer Robots, warns that the capability for such weapons to make their own decisions already exists.

“Nothing more than a software change could mean that the system tracks down a target automatically, without having to be manually directed by someone,” says Ms Connolly, who has a PhD in international law and security studies. The technology, she suggests, is closer than many imagine.

“Fears about weaponized AI are justified,” concedes Ms Solovyeva, whose PhD is in international relations and national security studies. But she argues that NGOs and the media have oversimplified and overhyped an extremely complex category of weapons.

In her view, AI in weapons systems will chiefly assist decision-making, integrate systems, and improve human-machine interaction. She expects autonomy in firing decisions to appear first in non-lethal applications, such as missile defence and electronic warfare systems, before extending to lethal ones.

Ms Solovyeva and her co-author, Prof Nik Hynek, call this the “switchable mode”, which they see as the likely future of autonomous weapons: a fully autonomous mode that a human operator can switch on and off at will.
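
Ms Solovyeva and Prof Hynek describe a concept rather than an implementation, but the idea reduces to a simple gate: autonomy is off by default and can only be changed by a logged human action. The Python sketch below is hypothetical; every name in it is invented:

```python
# A hypothetical sketch of the "switchable mode" concept: a system may act
# autonomously only while a human operator has explicitly enabled that mode,
# and reverts to human-gated operation the moment it is switched off.
class SwitchableSystem:
    def __init__(self):
        self.autonomous = False   # default: every action requires a human decision

    def set_autonomous(self, enabled: bool, operator_id: str) -> None:
        # Only a logged human action may change the mode.
        print(f"operator {operator_id} set autonomous={enabled}")
        self.autonomous = enabled

    def handle_event(self, event: str) -> str:
        if self.autonomous:
            return f"acting on '{event}' autonomously"
        return f"'{event}' queued for human decision"


system = SwitchableSystem()
print(system.handle_event("radar contact"))   # queued for a human
system.set_autonomous(True, operator_id="op-7")
print(system.handle_event("radar contact"))   # handled autonomously
system.set_autonomous(False, operator_id="op-7")
```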


Advocates for AI-enabled weapons argue that they could be more accurate, but Rose McDermott, a political scientist at Brown University, doubts that AI can eliminate human errors. She believes algorithms should include safeguards that require human oversight and evaluation, acknowledging that humans make mistakes but of a different nature than machines.

Ms. Connolly emphasizes the inadequacy of self-regulation by companies. Although many industry statements claim human involvement in decision-making on the use of force, she points out that companies can easily change their stance.

Some companies are seeking clarity on permissible technologies. To prevent AI’s speed and processing power from overriding human decision-making, Ms. Connolly states that the Stop Killer Robots campaign is advocating for an international legal treaty. This treaty aims to ensure meaningful human control over systems detecting and applying force based on sensor inputs, rather than immediate human commands.

According to her, regulations are urgently needed not only for conflict situations but also for everyday security concerns. Ms. Connolly highlights the risk of military technologies being used domestically by police and border security forces, making it crucial to address the use of autonomous weapon systems beyond armed conflicts.

The campaign against autonomous weapons has yet to produce an international treaty, but Ms Connolly remains hopeful that international humanitarian law will one day catch up with the technology.

She points to existing international agreements on cluster munitions and anti-personnel mines as evidence that international humanitarian law, though it moves at a snail’s pace, can create norms prohibiting categories of weapons, especially those that endanger civilians.

Others believe that autonomous weapons sit within a much broader category of systems that is very hard to define clearly, and that a ban treaty, even if adopted, might have little practical effect.

At the Austrian Foreign Ministry, Alexander Kmentt argues that the purpose of any regulation must be to preserve human control over decisions about life and death.

The human element, he insists, must not be allowed to disappear.

