© 2015 Russell Christian for Human Rights Watch
For the past four years, diplomats, academic experts, and NGO representatives have come together for a series of meetings in Geneva to discuss regulating so-called Lethal Autonomous Weapon Systems (LAWS) under the Convention on Certain Conventional Weapons. While drones have become a normal part of military operations, LAWS – or, as their critics like to call them, killer robots – are still at an early stage of development. What makes them special is that they can navigate through airspace searching for potential targets and, once they have found them, select and fire on those targets entirely on their own. Put bluntly, these are machines that – once deployed – can kill humans without any human intervention. While the use of drones – especially in so-called targeted killing operations – already raises a myriad of legal, ethical, and technical questions (which I discuss in some more detail here), LAWS add a further layer of complexity. Granting them the agency to kill raises three problems: the laws of war and the role of emotions, responsibility, and de-humanization.