Ethical Robot Killers
April 12, 2009
News that the military is working on killer drones that will operate autonomously surfaced again recently. Y’know, like none of us saw this coming. Or like entire movie franchises haven’t been built off of this plot. Is anyone really surprised? The military, of course, is talking about how such machines will be programmed with a set of robot ethics, so that they don’t get in trouble for killing the wrong people, or say, someone trying to surrender. We all trust the military, right?
Jamais Cascio addresses the issue more concretely with a draft set of his own Laws of Robotics. These are a good start, noting both that humans are ultimately responsible for robot behaviors and actions, and that robots are going to become increasingly *like* humans. Also important is that these robots will be programmed in accordance with dominant social customs and norms:
Law #2: Politics Matters
The First Law has a couple of different manifestations. At a broad, social level, the question of consequences comes down to politics–not in the partisan sense, but in the sense of power and norms. The rules embedded into an autonomous or semi-autonomous system come from individual and institutional biases and norms, and while that can’t really be avoided, it needs to be acknowledged. We can’t pretend that technologies–particularly technologies with a level of individual agency–are completely neutral.
These are not just issues and concerns that we should be applying to those in power. Robots and drones will increasingly proliferate and become accessible to others — including anarchists. If a tech-savvy anarchist insurgency were to employ its own drones, say to assassinate capitalist leaders or sabotage corporate or military facilities, these issues would also need to be considered and addressed in a careful and principled manner.