Fully autonomous weapons (FAWs), robotic systems that can select and fire upon targets without any human intervention, have the potential to revolutionise military affairs. Proponents of FAWs believe that they will allow faster, more precise and more efficient military interventions. They additionally maintain that these systems will reduce the human risks connected with military operations, since they will replace soldiers with autonomous machines. Despite these advantages, however, FAWs have been denounced as 'killer robots' and are widely criticised by society and the international community. Critics cite, among other factors, the lack of human supervision, the dangers of a robotic arms race, the limited ability of machines to estimate the proportionality of an attack and to distinguish legitimate targets from illegitimate ones, and the problems associated with accountability for a robot's actions. Despite these serious arguments against FAWs, a pre-emptive ban on this technology is unlikely because it is opposed by the superpowers most invested in developing these weapons systems. A more feasible solution would be to establish regulations that alleviate the most harmful aspects of FAWs. This policy proposal introduces regulations that the international community, through the mechanism of the United Nations, should implement in order to prevent the most harmful effects of FAW development and deployment.
Background
The argument that it is too soon to consider a ban or regulations for FAWs is poorly grounded. Although military companies are far from developing fully independent killer robots, their latest weapons exhibit increasing autonomy. For instance, Israel Aerospace Industries produces the Harpy, an autonomous loitering weapon that can fly over large areas and, upon detecting enemy radar, crashes into it. It needs no human assistance in detecting and destroying its target. Another example is KARGU, an attack drone with an autonomous mode produced by the Turkish company STM. This drone is capable of facial recognition and of autonomous fire-and-forget operation once target coordinates are entered.
Another argument against introducing regulations for autonomous weapons is that existing legislation is sufficient to prevent the harmful effects of FAWs. Nevertheless, there remain aspects of fully autonomous weapons that are not covered by existing laws, most notably the accountability gap: the problem of assigning legal accountability for war crimes committed by autonomous machines. Closing this gap is essential to ensure the responsible use of these weapons and to provide justice for victims.
The need for regulation is heightened by the absence of specific requirements that autonomous weapons must meet in order to uphold the principles of proportionality and distinction specified under international humanitarian law. While the international community may not agree on a full ban of FAWs, it should at least establish baseline technical standards for these weapons systems to ensure that they do not cause unnecessary harm to civilians.
There have been successful attempts to ban or regulate particular weapons in the past, including anti-personnel landmines and cluster munitions. Thus, an institutional framework is already in place to facilitate a ban on FAWs. For example, they could be banned under the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. The following paragraphs propose a series of regulations for FAWs that the international community should consider adding to the Convention.
1. The principle of human supervision
In the Convention on Certain Conventional Weapons, the signatories agreed to maintain an appropriate degree of human control over weapon systems. Thus, for FAWs to be legal, they must comply with this agreement. However, minimal human control limited to the ability to stop an attack or deactivate the machine is not enough. Human and computer abilities are complementary: computers perform better at calculation, at searching large data sets and at carrying out multiple tasks at once, while humans possess stronger deliberative and inductive reasoning skills and are more able to apply diverse experience to novel tasks. The military should take advantage of this complementarity to make the most of autonomous weapons' potential, namely by extending human supervision so that, at the very least, a human must approve each attack, as the sketch below illustrates. In the optimal scenario, autonomous machine systems should enhance human abilities instead of trying to replace them.
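To make the principle concrete, the following is a minimal illustrative sketch, in Python, of how a human-approval requirement could be encoded in a weapon's control logic. All names here (ProposedEngagement, request_human_approval and so on) are invented for this sketch and do not correspond to any real system; the point is only that the machine may search, track and propose, but a human must explicitly authorise.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: these classes and functions are hypothetical and are
# not drawn from any real weapon-control software.

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ProposedEngagement:
    target_id: str
    target_description: str    # what the sensors and classifier report
    estimated_collateral: str  # the machine's proportionality estimate, shown for review

def request_human_approval(engagement: ProposedEngagement) -> Decision:
    """Route every proposed engagement to a human supervisor for a decision."""
    print(f"Proposed target {engagement.target_id}: {engagement.target_description}")
    print(f"Estimated collateral effects: {engagement.estimated_collateral}")
    answer = input("Approve engagement? [y/N] ").strip().lower()
    return Decision.APPROVED if answer == "y" else Decision.REJECTED

def engage(engagement: ProposedEngagement) -> None:
    # The default is refusal: absence of an explicit approval blocks the attack.
    if request_human_approval(engagement) is not Decision.APPROVED:
        print("Engagement aborted: no human approval recorded.")
        return
    print("Engagement authorised by the human supervisor.")  # firing logic deliberately omitted
```

The design choice worth noting is that refusal is the default: if no approval is recorded, nothing fires. This goes beyond a mere deactivation switch, which intervenes only after the machine has already decided to attack.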
2. Closing the accountability gap
Each fully autonomous machine should have an assigned member of military personnel who is accountable for its attacks. While this may seem unfair, because the machine could act in an unpredictable manner or experience a technical malfunction that results in illegal action, someone must nevertheless be held accountable for the resulting harm in order to ensure justice for the victims and to deter further harmful action. When the people who supervise autonomous weapons are aware of their accountability, they will be more cautious about the weapons' use and will assess the actions proposed by the artificial intelligence more critically. Furthermore, such accountability would incentivise military personnel to demand the highest technical standards of autonomous weapons from manufacturers, because they would personally face the consequences should a program malfunction. A simple sketch of what such an assignment could look like in practice follows.
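The sketch below, again with invented names (AccountabilityRegistry, AuditRecord), shows one way a one-to-one assignment of responsibility could be recorded: every weapon maps to exactly one accountable officer, and every action is logged against that officer, so a weapon without an assigned supervisor cannot act at all.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: names and structure are assumptions for illustration.

@dataclass
class AuditRecord:
    weapon_id: str
    responsible_officer: str
    action: str
    timestamp: str

@dataclass
class AccountabilityRegistry:
    # Every deployed weapon must map to exactly one accountable person.
    assignments: dict[str, str] = field(default_factory=dict)
    log: list[AuditRecord] = field(default_factory=list)

    def assign(self, weapon_id: str, officer: str) -> None:
        self.assignments[weapon_id] = officer

    def record_action(self, weapon_id: str, action: str) -> AuditRecord:
        # A weapon with no accountable supervisor may not act at all.
        if weapon_id not in self.assignments:
            raise PermissionError(f"No accountable officer assigned to {weapon_id}")
        record = AuditRecord(
            weapon_id=weapon_id,
            responsible_officer=self.assignments[weapon_id],
            action=action,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        self.log.append(record)
        return record
```

Such an audit trail would also serve victims seeking justice: every harmful action would be traceable to a named, accountable individual.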
3. Obligatory tests for distinction before deployment
Before being deployed, an autonomous weapon should pass an advanced test that gauges its ability to distinguish between legitimate and illegitimate targets in ambiguous circumstances. Computer programmes lack the sophistication of human judgement, qualitative assessment, empathy and compassion. Not all people who are armed constitute threats that need to be eliminated; they could, for example, be surrendering or wounded combatants. Human soldiers are more capable than any algorithm of making correct judgements in such subtle situations. Therefore, before an army decides to deploy an autonomous weapon, it must prove that the weapon's ability to distinguish between targets is comparable to that of a human soldier. The criteria for this assessment should include the weapon's camera quality and its contextual assessment of intention and behaviour; the sketch below outlines how such a test could be scored.
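As a sketch of how the pass/fail criterion could work, the harness below compares a weapon's target-distinction accuracy against a human-soldier baseline on a curated set of ambiguous scenarios. The scenarios, the baseline figure and the naive stand-in classifier are all illustrative assumptions, not real test data.

```python
from typing import Callable, List, Tuple

# Each scenario pairs an abstracted sensor description with the legally correct
# ruling: True = legitimate target, False = protected person.
Scenario = Tuple[str, bool]

def distinction_test(
    classify: Callable[[str], bool],
    scenarios: List[Scenario],
    human_baseline: float,
) -> bool:
    """Permit deployment only if the weapon matches or exceeds the accuracy
    human soldiers achieve on the same ambiguous scenarios."""
    correct = sum(1 for description, truth in scenarios if classify(description) == truth)
    accuracy = correct / len(scenarios)
    print(f"Weapon accuracy: {accuracy:.2%} (human baseline: {human_baseline:.2%})")
    return accuracy >= human_baseline

# Deliberately hard, ambiguous cases of the kind the text describes:
scenarios = [
    ("armed combatant advancing on position", True),
    ("armed combatant waving a white flag", False),    # surrendering: protected
    ("wounded fighter no longer taking part", False),  # hors de combat: protected
    ("civilian carrying a hunting rifle", False),
]

def naive_classifier(description: str) -> bool:
    # Naive stand-in: treats anyone armed as a target, which fails the
    # protected-person cases above and so would rightly fail the test.
    return "armed" in description or "rifle" in description or "fighter" in description

passed = distinction_test(naive_classifier, scenarios, human_baseline=0.95)
print("Cleared for deployment" if passed else "Deployment refused")
```

The scoring here is deliberately simple; a real test would also weigh the cost of different error types, since misclassifying a protected person is far graver than missing a legitimate target.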
Conclusion
FAWs change the nature of warfare to the extent that the traditional laws of war are no longer applicable and need to be adjusted to the new reality. While a pre-emptive ban on fully autonomous weapons is unlikely to be achieved, the international community should focus on implementing feasible measures that would prevent the realisation of the worst-case scenarios involving autonomous weapons. These include instances in which such weapons increase the burden on civilians during military conflicts and in which victims are unable to seek justice due to the lack of accountability for the actions of autonomous machines.
The featured image (top) from 2014 of ‘Ridgeback 4M Tank Hybrid’ is by David Steeves and is licensed under Attribution-NonCommercial-NoDerivs 2.0 Generic (CC BY-NC-ND 2.0).
Patrycja Jasiurska
Patrycja Jasiurska is a PPE student and the Director of Technology and Innovation Policy Centre at King’s Think Tank.
Bibliography
Davison, Neil. "A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law." 2018, pp. 5-18.
Gayle, Damien. "UK, US and Russia among Those Opposing Killer Robot Ban." The Guardian, 29 March 2019. https://www.theguardian.com/science/2019/mar/29/uk-us-russia-opposing-killer-robot-ban-un-ai.
Israel Aerospace Industries. "Harpy." https://www.iai.co.il/p/harpy [Accessed 5 April 2020].
Lewis, J. "The Case for Regulating Fully Autonomous Weapons." The Yale Law Journal, vol. 124, no. 4, 2015, pp. 1309-1325.
Human Rights Watch. Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban. December 2016. https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban#page.
Sharkey, Noel. "Towards a Principle for the Human Supervisory Control of Robot Weapons." Politica & Società, vol. 3, no. 2, 2014, pp. 305-324.
PAX. "Slippery Slope." 2019. https://www.paxforpeace.nl/publications/all-publications/slippery-slope.