The Rise of Killer Robots: Should machines be allowed to kill us?

[Illustration: © 2015 Russell Christian for Human Rights Watch]

For the past four years, diplomats, academic experts, and NGO representatives have come together in Geneva for a series of meetings to discuss regulating so-called Lethal Autonomous Weapon Systems (LAWS) under the Convention on Certain Conventional Weapons. While drones have become a normal part of military operations, LAWS – or, as their critics like to call them, killer robots – are still at an early stage of development. What makes them special is that they are capable of navigating through airspace in search of potential targets and, once they have found them, of selecting and firing on them with their weapons, all on their own. Put bluntly, these are machines that – once deployed – can kill humans without human intervention. While the use of drones – especially in so-called targeted killing operations – already raises a myriad of legal, ethical, and technical questions (which I discuss in some more detail here), LAWS add a further layer of complexity, leading to three problems when it comes to granting them the agency to kill: the laws of war and the role of emotions, responsibility, and de-humanization.

The first question we need to answer is whether machines would be better at fighting in accordance with the laws of war. There are arguments that machines, if properly programmed, would be more capable than humans of following the laws of war when deciding whether or not to kill. Opponents hold that the laws of war, such as the principle of proportionality, require an element of human judgement that machines will never possess. Related to these questions is the issue of emotions, or the lack thereof. Supporters of LAWS argue that machines' lack of emotions protects them from acting out of, say, rage or revenge, as happened in the Mỹ Lai massacre in Vietnam. Critics counter that the lack of emotions is precisely the problem: machines are not capable of pity, and they may fail to recognize parties trying to surrender or individuals so badly wounded that they must be considered hors de combat. Compliance with the laws of war and the role of emotions thus constitute the first problem with granting machines the agency to kill.

A second problem arises with questions of responsibility. Within International Humanitarian Law, the body of law regulating the conduct of war, the notion of command responsibility clearly outlines who is to be held responsible for violations of the law. Yet this is explicitly limited to humans. There is no system in place to punish a non-sentient being, and these machines would be indifferent to punishment (this may not be the case for future machines running on artificial intelligence, as both Star Trek’s Data and the robots in I, Robot indicate). Who should be held responsible then? Suggestions include the military commander who deployed the weapon system, the programmer who wrote the algorithm, and the manufacturer. Debates over autonomous cars may serve as a foundation, but the ethical dilemmas posed by a weapon carry much higher stakes than those posed by a car. Hence, the inability to assign responsibility for these weapon systems' actions constitutes the second problem with granting them agency.

A third problem lies in the de-humanizing effect of these weapons. It is an effect we can already observe with drones, owing to the distance between operators and targets. Drones are not like fire-and-forget weapons such as Tomahawk missiles. What makes them different is that operators can observe potential targets for days and sometimes weeks before killing them via the drone, a practice John Williams has termed “distant intimacy”. LAWS may remove the human operator to varying degrees, depending on the mode of operation, but the de-humanizing effect remains, because the use of drones and LAWS alike erodes the interpersonal relationship that fighting a war entails, as the philosopher Thomas Nagel argued over 25 years ago. This de-humanizing effect constitutes the third problem with granting agency to weapon systems.

Let us assume for a moment that we could solve these three problems. Even then, the ethical question remains whether we would want to grant machines the agency to kill humans. This would require a conception of agency that goes beyond humans. Scholars such as Anna Leander have started to push our notion of agency to include drones, but we are only at the beginning of thinking about post-human agency. Agency within international relations is usually confined either to human actors or to collective actors such as states, IOs, IGOs, or NGOs. By definition, everything else – including weapon systems – counts as a structural component. LAWS challenge this notion through their autonomy. In all fairness, debates so far have always included discussions of how much meaningful human control should be part of using these weapons: should a human make the final decision to fire (human in the loop), oversee the weapon system's actions and intervene only when they think there is a mistake (human on the loop), or not be involved at all (human out of the loop)? Yet we can imagine a world in which humans are no longer on the loop, where machines decide autonomously whether someone is a valid target in a theatre of war. Human oversight of autonomous weapon systems is never perfect anyway, since operators may be inclined to trust the results of a computer analysis based on vast amounts of sensor data, and may in addition have only limited time to decide whether to continue with an attack or abort it.

Solving the problems I have outlined here may not be impossible, but it should not be attempted without definitive answers to the deeper and more complex questions of ethics and agency that the use of LAWS would raise.


3 Replies to “The Rise of Killer Robots: Should machines be allowed to kill us?”

  1. Very scary – the question of giving machines agency to kill humans just brings back the senselessness of war in the first place. War is being fought because humans have some sort of conflict amongst themselves, mostly about resources. For humans on one side of the struggle to remove themselves and let machines take their place is just wrong. I know it is already happening, but your article has just brought it home again and made me think about how absurd this whole concept is. Thanks Sassan!

  2. Great post, and an excellent overview of the ethical questions raised by autonomous weapons! The idea of post-human agency is definitely mind-boggling and might very well become one of the major theoretical challenges for International Law in the decades to come. One aspect I find fascinating is the extent to which those driving the debate have to rely on imaginaries and future scenarios. Other norm debates on the use of certain warfare technologies (landmines, nuclear weapons etc.) usually took off after their disastrous effects had become visible on the battlefields. By contrast, the LAWS debate is about the ethical implications of a technology that isn’t even fully developed yet, and the uncertainties that come along with that certainly influence the discourse. Strategically, I wonder whether that is an advantage for those arguing for more restrictive rules or those who want to be more permissive. But more generally, it is also fascinating how popular culture plays into these questions. Your mention of Star Trek and I, Robot is telling as it shows that science fiction helps us wrap our minds around these problems. I honestly wonder whether we would have the current debate if it wasn’t for The Terminator & co…

    1. Dear Killian,
      thanks for your kind words. I agree with your assessment of how LAWS are different because we do not yet have that technology deployed in combat. My understanding has been that this has made it more difficult for those pushing for a ban or regulations, as the counter-argument is that the weapons should be developed first and only then assessed under International Humanitarian Law (pursuant to Article 36 of Additional Protocol I to the Geneva Conventions). Another issue is that of dual use, i.e. the technology for autonomous robots is not exclusively usable for weapon systems; robots cleaning up after a nuclear accident are an example of autonomous robots in civilian use. One last note on the influence of sci-fi, with which I largely agree: the Terminator is a cyborg, not a robot 🙂
