Let’s also dismiss some other concerns of the anti-robot crowd: The idea that less hands-on killing makes society more depraved is not borne out. As we have become less personally involved in warfare, we have also continued to become less violent. (Read Steven Pinker’s masterful “The Better Angels of Our Nature: Why Violence Has Declined.”) And as to the question of who will be responsible when an innocent is killed by a robot, that’s easy: the institution or leaders who send a machine to kill. We will be no less culpable for our actions because we deploy surrogates made of metal rather than flesh and bone.

Who knows? Perhaps, in the far-flung future, robots will be able to make some ethical choices on their own — with even better results than humans. The Office of Naval Research has already awarded $7.5 million in grants to researchers at Brown, Georgetown, RPI, Tufts and Yale to study the possibility of teaching robots the difference between right and wrong and the weight of moral consequences. Can a robot ever have a simulated “soul” that allows it to feel empathy, use intuition or wrestle with ethical dilemmas? Yes. But when that happens, we will have reached the singularity and will be engaged in a far deeper philosophical discussion. If our robot overlords permit it.