Autonomous Weapon Systems Based on Artificial Intelligence
Humankind continues to make remarkable technological advances in the 21st century. With the growing prevalence of Artificial Intelligence (AI) and its increasing use in armed conflict, International Humanitarian Law (IHL) must urgently determine whether the existing rules of war are adequate for Autonomous Weapon Systems (AWS) based on AI.
Before we get into the details, it is essential to clarify some terms. AI refers to machine systems that can collect and analyze data and make decisions without human intervention. AI aims to imitate human intelligence through complex processes that include machine learning and deep learning.
There are many definitions of AWS, but no universal consensus yet exists. Broadly, AWS are machines capable of operating on their own using AI.
Land mines, defensive missile systems, and certain other weapons also display some autonomy. They should not be labeled AWS, however, because they operate only under very precise trigger conditions: they have no genuine autonomy in choosing a target and do not depend on AI. Nor should the term apply to drones and other machines that rely on AI to collect and relay information but cannot deploy weapons themselves.
Globally, many terms are used for AWS, including killer robots, fully autonomous weapons, lethal autonomous weapon systems, and automated robots. Consistent, simplified terminology would benefit international advocacy, and it is hoped that academics and other relevant stakeholders can reach a consensus.
Problems with AWS use in armed conflicts
AWS deployment raises serious concerns. Algorithm-driven machines can make mistakes, and humans may be better equipped to handle the nuances of combat situations.
For example, suppose a drone carrying explosive munitions is deployed in a conflict zone to destroy an enemy military facility adjacent to a civilian-occupied residential building. Dropping a bomb on the installation could cause civilian casualties. How will the AI determine whether the bomb should be dropped? Can an algorithm be created to balance the principles of proportionality and military necessity against the anticipated military advantage? Are there limits to what AWS should be permitted to do? Would a mathematical threshold, based on basic consequentialist moral reasoning, determine whether an incidental loss of life is excessive?
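To make the difficulty concrete, a purely consequentialist "proportionality check" would have to be reduced to something like the following minimal sketch in Python. Every name and the `weight` parameter are invented for illustration; IHL defines no such numeric threshold, which is precisely the point.

```python
# Hypothetical sketch of a purely consequentialist "proportionality check".
# All names and the `weight` parameter are invented for illustration;
# IHL defines no such numeric threshold.

def strike_is_proportionate(expected_civilian_harm: float,
                            military_advantage_score: float,
                            weight: float = 1.0) -> bool:
    # IHL asks whether incidental civilian harm would be "excessive" in
    # relation to the concrete and direct military advantage anticipated.
    # Collapsing that judgment into a single comparison, as here, discards
    # the contextual, good-faith evaluation the law actually requires.
    return expected_civilian_harm <= weight * military_advantage_score
```

Even granting the dubious premise that "military advantage" can be scored numerically, choosing `weight` is a normative decision that no algorithm can make on its own.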
AWS cannot be trusted to distinguish between civilians and combatants. Combatants sometimes disguise themselves as civilians, and children growing up in conflict zones sometimes play with toy guns. In such situations, a machine may be unable to uphold the principle of distinction.
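In machine-learning terms, distinction is a classification problem whose hardest cases, such as a disguised combatant or a child with a toy gun, fall exactly where a model is least certain. The following minimal sketch, with an invented score and a hypothetical abstention threshold, illustrates why a confidence cutoff does not resolve the legal question.

```python
# Hypothetical sketch: a classifier with an abstention band. The score and
# the 0.9 threshold are invented for illustration; no real system or
# dataset is implied.

def classify_person(combatant_score: float, threshold: float = 0.9) -> str:
    # Abstaining below the threshold avoids some errors, but the hard cases
    # (e.g., a child holding a toy gun might score 0.55) land precisely in
    # the ambiguous band, where IHL requires presuming civilian status.
    if combatant_score >= threshold:
        return "combatant"
    if combatant_score <= 1.0 - threshold:
        return "civilian"
    return "uncertain: presume civilian"

print(classify_person(0.55))  # -> "uncertain: presume civilian"
```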
A Just Security article acknowledged that "the mere existence of guidance, processes, or technologies to prevent civilian harm […] doesn't solve the tragedy. These must be used in good faith […]." Likely, no mathematical algorithm can ever ensure good faith.
Suppose an AWS makes a 'mistake' and there is no way for a human operator to intervene and stop the attack. In that case, the machine cannot be held responsible, as individual criminal liability under International Criminal Law attaches only to humans.
These are just a few of the compelling issues surrounding AWS that remain unresolved.
AWS will inevitably be deployed
The ICRC's June 2019 report on AI stated that the organization is "not opposed" to new technologies of war, provided they meet minimum requirements. It also stressed that human control and judgment must be preserved in decisions that could seriously affect civilians' lives in armed conflict.
However, the ICRC expressed serious concerns about AWS. In March 2019, the UN Secretary-General likewise described AWS as "politically unacceptable and morally repugnant" and called for them to be banned under international law.
These concerns and opposition aside, arguments in favor of AWS and AI are being made. AWS are claimed to increase attack precision, reduce collateral damage, improve efficiency, and lower costs. In September 2019, the Director of the US Joint Artificial Intelligence Center stated that AI would give the United States advantages on the battlefield and that it would begin deploying the technology in war zones. Other countries, including Russia, China, and Israel, are also developing their AI and AWS capabilities.
The ICRC's 2019 report on AI stated that military applications of emerging technologies are not "inevitable" but are choices made by States. Nevertheless, AWS deployment is likely to continue as advanced military States develop the technology at an ever faster pace.
Regulating the use of AWS is an immediate necessity
Without international safeguards, advanced militaries around the globe are rapidly developing their AI and AWS capabilities, yet research into how IHL could be updated to address the challenges of AWS has not kept pace. Empirical evidence of how AWS affect actual armed conflicts is also scarce. In 2021, the UN reported that the Turkish-manufactured STM Kargu-2 was deployed in Libya, but the report lacks detailed information. Azerbaijan has also reportedly used AWS during its conflict with Armenia in Nagorno-Karabakh, again without specific details emerging. The international community cannot benefit from a reactive approach that waits for evidence of the consequences of AWS before attempting to regulate them; a proactive approach is necessary.
Militaries around the globe do not generally deny that AWS must be used in accordance with IHL. Yet AWS that can learn and adapt over time may, despite the best efforts and intentions of their operators, act in ways that violate IHL rules on the battlefield. AWS could also be misused by non-state actors. Without adequate safeguards, the risks are significant.
It is essential that the UN, global civil society, and other concerned organizations act immediately to ensure that AWS and AI are developed and deployed within a framework of international rules. Law is notoriously slow to catch up with technology, and once a potentially dangerous weapon becomes available, halting its spread may prove impossible. The world did not stop the proliferation of nuclear weapons when they were first created; we must not repeat that error.
Human control over AWS deployment
The decision whether a person should be attacked in an armed conflict must be made by a human being, not a machine. As the ICRC stated in its May 2021 position paper on AWS, it is a grave violation of human dignity for a machine to kill a human being, and the ICRC asserts that human control is necessary for AWS to comply with IHL. The US Department of Defense Directive on AWS (Directive 3000.09) similarly requires that AWS be designed to allow "appropriate levels of human judgment over the use of force." Human control is also required to guarantee accountability for potential violations of IHL. It is therefore imperative that AWS remain only partially autonomous: human control must be maintained.
There are strong arguments that AI cannot be designed with an algorithm able to account for cultural sensitivities and on-ground realities and make context-based decisions, all of which would be needed for it to distinguish civilians from combatants reliably. Article 51(4) of Additional Protocol I can be read to mean that an AWS is an inherently indiscriminate weapon, because it has the potential to strike targets without human control. On this reading, IHL can prohibit the use of such a method or means of combat.
Additionally, the Convention on Certain Conventional Weapons already prohibits certain weapons. As AWS become more prevalent, the Convention could be updated, or an additional Protocol negotiated, to include explicit provisions that AWS must be under human supervision before attacking a person.
The challenge to humanity and the need for a global consensus
At this point in human civilization, human-made machines could soon reach a level at which humans cannot control them. AI is a potent tool: it can process vast amounts of data and perform complex analyses in milliseconds with very high accuracy. OpenAI's ChatGPT and DALL·E 2 show the reader how advanced, and how unsettling, AI technology has become. As explained above, global standards must be established for the development and use of AWS.
Although a total ban on AWS would be ideal, the reality is that they will continue to exist. We must therefore ensure that AWS are developed with human oversight and control, and that any AWS has only limited autonomy. A fully autonomous weapon capable of operating without human control should be made illegal under international law. If the term "AWS" then seems too broad, such systems could be called 'Controlled AWS' (CAWS), because humans must control them. In practice, a CAWS could identify targets and make suggestions, with a human making the final decision on whether to launch an attack.
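As a rough illustration of this division of labor, a CAWS control loop might be structured as in the Python sketch below. All names (TargetSuggestion, detect_targets, request_human_confirmation, engage) are hypothetical; the point is the structure, in which the machine only suggests and engagement requires an explicit, affirmative human decision.

```python
# Minimal sketch of a human-in-the-loop "CAWS" control loop. All names are
# hypothetical; the machine only *suggests*, and any engagement requires an
# explicit, affirmative human decision.

from dataclasses import dataclass

@dataclass
class TargetSuggestion:
    target_id: str
    location: tuple[float, float]  # (latitude, longitude)
    rationale: str                 # why the system flagged this target

def detect_targets(sensor_data) -> list[TargetSuggestion]:
    # Hypothetical perception step: returns suggestions, never actions.
    # A canned result stands in for a real sensor/AI pipeline.
    return [TargetSuggestion("T-001", (34.5, 69.2),
                             "vehicle matching hostile signature")]

def request_human_confirmation(s: TargetSuggestion) -> bool:
    # Present the suggestion to a human operator and wait for a decision.
    # Anything other than an explicit "y" is treated as a refusal.
    answer = input(f"Engage {s.target_id} at {s.location}? "
                   f"Rationale: {s.rationale} [y/N] ")
    return answer.strip().lower() == "y"

def engage(s: TargetSuggestion) -> None:
    print(f"Engagement of {s.target_id} authorized by human operator.")

def control_loop(sensor_data) -> None:
    for suggestion in detect_targets(sensor_data):
        # The final decision to attack always rests with a human.
        if request_human_confirmation(suggestion):
            engage(suggestion)

if __name__ == "__main__":
    control_loop(sensor_data=None)
```

The design choice that matters is that `engage` is reachable only through `request_human_confirmation`: autonomy ends at the suggestion stage, which is what distinguishes a CAWS from a fully autonomous weapon.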