[Illustration © 2015 Russell Christian for Human Rights Watch]
For the past four years, diplomats, academic experts, and NGO representatives have come together for a series of meetings in Geneva to discuss regulating so-called Lethal Autonomous Weapon Systems (LAWS) under the Convention on Certain Conventional Weapons. While drones have become a normal part of military operations, LAWS – or, as their critics like to call them, killer robots – are still at an early stage of development. What makes them special is that they can navigate through airspace searching for potential targets and, once they have found them, select and fire on those targets entirely on their own. Put bluntly, these are machines that – once deployed – can kill humans without human interference. While the use of drones – especially in so-called targeted killing operations – already raises a myriad of legal, ethical, and technical questions (which I discuss in more detail here), LAWS add an additional layer of complexity, leading to three problems when it comes to granting them the agency to kill: compliance with the laws of war and the role of emotions, responsibility, and de-humanization.
The first question we need to answer is whether machines would be better at fighting under the laws of war. There are arguments that machines, if properly programmed, would be more capable of following the laws of war in deciding whether or not to kill. Opponents hold that the laws of war, such as the principle of proportionality, require an element of human judgement that machines will never possess. Related to these questions is the issue of emotions, or the lack thereof. Supporters of LAWS argue that machines' lack of emotions protects them from acting out of, say, rage or revenge, as happened in the Mỹ Lai massacre in Vietnam. Critics counter that the lack of emotions is precisely the problem: machines are not capable of pity, and they may fail to recognize parties wishing to surrender or individuals so badly wounded that they must be considered hors de combat. Compliance with the laws of war and the role of emotions thus constitute the first problem with granting machines the agency to kill.
A second problem arises with questions of responsibility. Within International Humanitarian Law, the body of law regulating the conduct of war, the notion of command responsibility clearly outlines who is to be held responsible for violations of the law. Yet this is explicitly limited to humans. There is no system in place to punish a non-sentient being, and such machines would in any case be indifferent to punishment (this may not hold for future machines running on artificial intelligence, as both Star Trek’s Data and the robots in I, Robot suggest). Who, then, should be held responsible? Suggestions include the military commander who deployed the weapon system, the programmer who wrote the algorithm, and the manufacturer. Debates over autonomous cars may serve as a foundation, but the ethical dilemmas posed by a weapon carry much higher stakes than those posed by cars. Hence, the inability to assign responsibility for these weapon systems constitutes the second problem with granting them agency.
A third problem lies in the de-humanizing effect of these weapons. It is an effect we can already observe with drones, due to the distance between operators and targets. Drones are not like fire-and-forget weapons such as Tomahawk missiles: what makes them different is that operators can observe potential targets for days, sometimes weeks, before killing them via the drone – what John Williams has termed “distant intimacy.” LAWS may remove the human operator to varying degrees, depending on the mode of operation, but the de-humanizing effect remains, because drones and LAWS alike reduce the interpersonal relationship that fighting a war entails, as the philosopher Thomas Nagel argued over 25 years ago. The de-humanizing effect of such weapons constitutes the third problem with granting agency to weapon systems.
Let us assume for a moment that we could solve these three problems. Even then, the ethical question remains whether we would want to grant machines the agency to kill humans. This would require a conception of agency that goes beyond humans. Scholars such as Anna Leander have started to push our notion of agency to include drones, but we are only at the beginning of thinking about post-human agency. Agency within international relations is usually confined either to human actors or to collective actors such as states, IOs, IGOs, or NGOs. By definition, everything else – including weapon systems – is a structural component. LAWS challenge this notion through their autonomy. In all fairness, debates so far have always included discussion of how much meaningful human control should be part of using these weapons. Should a human make the final decision to fire (human in the loop), oversee the weapon system’s actions and interfere only when they suspect a mistake (human on the loop), or not be involved at all (human outside of the loop)? Yet we can imagine a world in which humans are no longer even on the loop, where machines decide autonomously whether someone is a valid target in a theatre of war. Human oversight of autonomous weapon systems is never perfect: human operators may be inclined to trust the results of a computer analysis based on large amounts of sensory data, and may in addition have only limited time to decide whether to continue an attack or abort it.
Solving the problems outlined here may not be impossible, but it should not be attempted without definitive answers to the deeper and more complex questions of ethics and agency that the use of LAWS would raise.