The Pentagon still requires autonomous weapons to have a “man in the loop” — the robot or drone can train its sights on a target, but a human operator must decide whether to fire. But full autonomy with no human controller would have clear advantages. A computer can process information and engage a weapon far faster than any human soldier. As other nations develop this capacity, the United States will feel compelled to stay ahead. A robotic arms race seems inevitable unless nations collectively decide to avoid one…

Some argue that these concerns can be addressed if we program war-fighting robots to apply the Geneva Conventions. Machines would prove more ethical than humans on the battlefield, this thinking goes, never acting out of panic or anger or a desire for self-preservation. But most experts believe it is unlikely that advances in artificial intelligence could ever give robots an artificial conscience, and even if that were possible, machines that can kill autonomously would almost certainly be ready before the breakthroughs needed to “humanize” them. And unscrupulous governments could opt to turn the ethical switch off…

Even Syria’s Bashar al-Assad must consider that his troops have a breaking point. But imagine an Assad who commands autonomous drones programmed to track and kill protest leaders or to fire automatically on any group of more than five people congregating below. He would have a weapon no dictator in history has had: an army that will never refuse an order, no matter how immoral.