"Artificial Intelligence": Automated Warfare and the Geneva Conventions

Warfare is becoming digitalized and automated. This is shifting the role of citizens within the Geneva Conventions. We urgently require new solutions and international agreements.

A wall of glowing red rectangles, with the shadow of a person in front of it.
Who makes the decisions in automated warfare? And who bears the responsibility? – Released under a public-domain-like license via unsplash.com, rishi

Technologies commonly referred to as AI have seen rapid uptake and growth in the commercial space over the last 18 months, with the release of solutions such as ChatGPT. Recently, several news outlets reported that Israel was using AI in the Gaza conflict. The IDF vehemently denies this; however, there is little doubt that advanced technologies will soon be on the battlefield.

Within warfare, computer automation can refer to significantly more sophisticated systems – for example, Automated Weapons (AW) and Unmanned Vehicles (UV) that could make their own decisions in the field. Other examples include technologies that facilitate faster decision-making by providing input to battle planning or enabling the speedy interception of enemy transmissions.

Much attention has been focused on the bias, discrimination, and possible job losses introduced by automated technologies in the commercial space. However, the use of these solutions in warfare should be at the top of such discussions: automated decision-making and automated weapons do not just automate war; they can shift the role of citizens within the Geneva Conventions.

The digitization of warfare

It is not uncommon to see a strong link between industry and defense forces; one prominent example is Eric Schmidt, ex-CEO of Google, who has also chaired the Department of Defense's Defense Innovation Board and the National Security Commission on Artificial Intelligence. In his latest piece for Foreign Affairs, he has called for “warfare at the speed of computers, not the speed of people.” He compares AI to the conquistadors who defeated the Inca Empire and wants to ensure the USA is capable of fully automated warfare, where “autonomous weaponized drones – not just unmanned aerial vehicles, but also ground-based ones – will replace soldiers and manned artillery altogether.”

The autonomous land and air vehicles that Mr. Schmidt wants the US to develop remain largely experimental, but work has already started to enable military vehicles – including fighter jets and submarines – to operate alongside swarms of autonomous drones while AI coordinates their actions. So, while much of this may seem far-fetched, automated decision-making has a long head start in the military: the USA funded Project Maven as early as 2017, and alliances such as AUKUS have been actively co-developing automated and robotic weapons systems for several years. Systems are already aiding in detecting and classifying signals of interest, enabling them to be jammed or intercepted as deemed appropriate. AI can anticipate the trajectories of ballistic missiles to allow for pre-emptive interception or redirection. It can also help to decode – and automatically translate – encrypted communications of enemy forces.
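To give one concrete sense of what "anticipating the trajectories of ballistic missiles" means computationally, here is a minimal, purely illustrative Python sketch: it fits a toy ballistic arc to a few invented radar fixes and extrapolates the impact point. All numbers, and the drag-free parabolic model, are assumptions made for this illustration; real systems are incomparably more sophisticated.

```python
# Illustrative sketch only: extrapolating a ballistic arc from a few
# early radar fixes. All numbers are invented, and the drag-free
# parabolic model is a deliberate oversimplification.
import numpy as np

# Observations shortly after launch: (time s, downrange km, altitude km)
t = np.array([0.0, 10.0, 20.0, 30.0])
x = np.array([0.0, 25.0, 50.0, 75.0])   # near-constant horizontal speed
y = np.array([0.0, 18.0, 32.0, 42.0])   # decelerating climb

vx = np.polyfit(t, x, 1)[0]             # horizontal velocity (km/s)
a, b, c = np.polyfit(t, y, 2)           # quadratic altitude model

# The positive root of a*t^2 + b*t + c = 0 is the predicted impact time.
t_impact = max(np.roots([a, b, c]).real)
print(f"predicted impact: t = {t_impact:.0f} s, downrange = {vx * t_impact:.0f} km")
# -> predicted impact: t = 100 s, downrange = 250 km
```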

Russia and China are both actively engaged in developing such automated systems, which is one reason the USA and the EU, through NATO, have placed such an emphasis on developing similar solutions. Other nations are keeping pace as best they can, but automated intelligence identification and analysis has already begun to play a crucial role in warfare.

Implications for the Geneva Conventions

Automating systems and weapons within war can initially seem like a good idea – in principle, it makes it possible to wage war while putting fewer young people in the line of fire. Hidden within the IDF statement that it was not using AI to target people, however, was one small sentence that should make us reconsider this idea:

“…a person who directly participates in hostilities is considered a lawful target.”

A critical question for the world as it enters the new era of AI-enabled and AI-driven warfare is what its impacts will be on the Geneva Conventions and the role of citizens in war. AI has the potential to shift this dramatically.

The Geneva Conventions and their Additional Protocols are the core of international humanitarian law and regulate the conduct of armed conflict. They seek to limit the impacts of war by protecting people who are not participating in hostilities and those who are no longer part of them. Each new generation of technology strains the applicability of the Geneva Conventions, which were developed shortly after and in response to the horrors of WWII. However, most previous generations of technology have still been captured within the traditional realm of warfare. Automated data gathering and intelligence analysis threaten to change the notion of “who is directly participating in hostilities.”

The basis of this comes down to how such automated systems are built. Regardless of how they are used, so-called AI applications require significant amounts of data, and that data must be processed quickly enough to be useful in battle: military AI needs to parse millions of inputs to make sensible recommendations.

Who counts as a civilian?

The risks associated with using AI in warfare, however, are less publicly discussed. The digitalization of warfare has created a challenge for both militaries and international humanitarian law, as the role of citizens has become increasingly blurred in some cases.

Examples include the use of cryptocurrencies to raise over $225 million for the Ukrainian war effort, e.g. “for weapons, ammunition, medical equipment and other crucial war supplies”. Similarly, a Czech crowdfunding campaign raised $1.3 million to purchase a tank for the Ukrainian forces.

In other spheres of war, civilians have either lent their computers to distributed denial-of-service attacks coordinated via AI or had them commandeered for such attacks through computer viruses.

Digital technologies therefore challenge some of our assumptions about who is and who is not a civilian during war, raising difficult questions about who counts as an active participant in hostilities. Through smartphone apps, civilians can become a significant data input for war efforts – automated systems rely on up-to-date information to provide recommendations to military decision-makers.

The threat of growing numbers of victims

If this data comes from civilian smartphones, these people could be viewed as active participants in the war.

When someone’s laptop or computer is used in a distributed denial-of-service attack, it is harder to prove that they are a willing participant in hostilities. The active raising of funds, however – whether via cryptocurrencies or through crowdfunding platforms – is more straightforwardly argued to be active participation in the war effort. Digital technologies have enabled individuals around the globe to collect money and take on a role previously reserved for national governments: the contribution of arms to a war effort.

Furthermore, when datasets are drawn from various sources, there is a broader risk that they may be poisoned – that is, have incorrect data injected into them – to disrupt the war effort. AI could thus recommend actions that escalate a war, or produce decisions taken far too quickly, without waiting to see the enemy’s response. Far from reducing the number of casualties, automated decision-making can therefore unintentionally increase them dramatically.
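To make the poisoning risk concrete, consider a minimal, purely illustrative Python sketch. The “classifier”, the sensor readings, and the labels below are all invented for this example; the only point it demonstrates is that a handful of falsely labelled records can drag a learned decision boundary down toward ordinary civilian activity:

```python
# Illustrative sketch only: a toy threshold "classifier" learned from
# labelled sensor readings, and how a few falsely labelled (poisoned)
# records drag its decision boundary toward civilian activity.
# Every name and number here is invented for the example.

def fit_threshold(samples):
    """Midpoint between the mean 'benign' and mean 'hostile' reading."""
    benign = [x for x, label in samples if label == "benign"]
    hostile = [x for x, label in samples if label == "hostile"]
    return (sum(benign) / len(benign) + sum(hostile) / len(hostile)) / 2

def classify(reading, threshold):
    return "hostile" if reading >= threshold else "benign"

# Clean training data: benign readings cluster low, hostile readings high.
clean = [(r, "benign") for r in (1.0, 1.2, 0.9, 1.1)] \
      + [(r, "hostile") for r in (9.0, 8.8, 9.3, 9.1)]

# An adversary injects low readings falsely labelled "hostile".
poisoned = clean + [(r, "hostile") for r in (1.5, 1.4, 1.6, 1.3)]

civilian_reading = 4.0  # e.g. an ordinary smartphone signal

for name, data in (("clean", clean), ("poisoned", poisoned)):
    t = fit_threshold(data)
    verdict = classify(civilian_reading, t)
    print(f"{name:8s} threshold={t:.2f} -> reading {civilian_reading} is {verdict}")
# clean    threshold=5.05 -> reading 4.0 is benign
# poisoned threshold=3.15 -> reading 4.0 is hostile
```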

How AI is different from nuclear bombs

During the Cold War, there were numerous near misses, thanks in part to the slower nature of communication but mainly due to humans’ role in deploying missiles. Perhaps the most famous came in 1983, when a Soviet early-warning system mistakenly reported that five nuclear missiles had been launched from the United States toward the Soviet Union. Stanislav Petrov, who was on duty, decided that the system must be faulty and, by deliberately disobeying orders, is credited with ‘saving the world’ from the all-out nuclear war that would have cascaded rapidly through retaliation from the USA and NATO if he had followed protocol.

Many people have likened AI to nuclear bombs. These technologies are, however, fundamentally different. With nuclear weapons, human autonomy was preserved: from start to finish, humans were involved in every step of analyzing, interpreting, and acting upon the data presented to them by the computing systems built for nuclear war.

Many supporters and developers of automation in warfare promote the “human in the loop” approach, in which a trained human operator is included at specific points in AI processes to ensure that a human, rather than the algorithms themselves, makes the decisions. The idea is that this will ensure the inclusion of human ethics and morals in decision-making processes and, therefore, compliance with international humanitarian law.
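As a minimal sketch of that pattern – with every name, score, and data point invented for illustration – the approach amounts to a gate between the model’s recommendations and any action:

```python
# Illustrative sketch only of the "human in the loop" pattern:
# an automated model proposes, but no action is taken until a trained
# operator explicitly approves. All names and data are invented.

from dataclasses import dataclass

@dataclass
class Recommendation:
    target_id: str
    confidence: float  # the model's own score, not ground truth

def model_recommendations():
    """Stand-in for an automated targeting model's output queue."""
    return [Recommendation("T-101", 0.97), Recommendation("T-102", 0.61)]

def operator_approves(rec):
    """The decisive step: a human must say yes to every single action."""
    answer = input(f"Approve action on {rec.target_id} "
                   f"(model confidence {rec.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

for rec in model_recommendations():
    if operator_approves(rec):
        print(f"{rec.target_id}: approved by operator")
    else:
        print(f"{rec.target_id}: rejected - no action taken")
```

The difficulty, as the next section argues, is that the operator at this gate cannot realistically inspect the data and model runs behind each recommendation, so the approval step risks becoming a formality.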

Autonomy and control

The critical question here, however, comes down to autonomy. With automated systems, humans progressively lose the autonomy to make decisions over the underlying datasets. The more data is used to create and refine models, and the more often those models run to improve the algorithms, the less insight a human can have into that process. The extent to which a human can claim autonomy or control over the decisions presented by AI is therefore questionable. The sheer volume of data sources being combined and crunched makes this a fundamentally different beast from previous generations of digitally enabled warfare. The human-in-the-loop solution therefore does not genuinely resolve the shift by which civilians may come to be viewed as active participants in war efforts.

New solutions and new international agreements are needed that focus not just on the application of these new weapons on the battlefield, but also on how the data that feeds them can be sourced. What constitutes a participant in digitally enabled warfare must be clearly defined so that governments, militaries, and civilians in war zones can make informed decisions. Failure to take appropriate action now will mean that, yet again, it may come down to people brave enough to disobey orders outright in order to avoid horrific, fully automated consequences.

Cathy Mulligan is an expert in digital technology and the digital economy. She is currently a Richard von Weizsäcker Fellow at the Robert Bosch Foundation in Berlin.

