In defense of killer robots

Adhering to international humanitarian law also means making moral judgments under chaotic conditions. A machine will never be able to weigh, say, the strategic value of bombing a bridge against the cost borne by the local population. Equally problematic, machines lack basic human empathy, so an autonomous robot would respond to a 12-year-old holding a weapon very differently than a soldier would.

These are all powerful arguments, but there is something odd about closing the door on a technology based simply on what it may or may not be able to do. Shouldn’t we be testing these suppositions first? Right now, there is far too much “Terminator” sci-fi coloring the debate. At this stage, no one is discussing an android stalking an urban landscape and reading threats from human facial expressions or anything equally subtle.

Autonomous weapons systems of the near future will be assigned the easy targets. They will pick off enemy fighter jets, warships and tanks—platforms that usually operate at a distance from civilians—or they will return fire when being shot at. None of this is a technical stretch. Combat pilots already rely on machines when they have to hit a target beyond visual range. Likewise, some ground-combat vehicles have shot-detection systems that slew guns in the direction of enemy fire (although we’d probably want a robot to rely on something more than acoustic triangulation before unloading).
