Autonomous warbots could be more moral than human soldiers

A group of 116 artificial intelligence and robotics tech luminaries, including Tesla’s Elon Musk and DeepMind’s Mustafa Suleyman, sent the U.N. an open letter in August urging a ban on lethal autonomous weapons. This week a group of artificial intelligence researchers from Canada and Australia joined the chorus. “Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line,” the Canadian researchers wrote.

Don’t be so hasty.

In my 2015 article “Let Slip the Robots of War,” I cited law professors Kenneth Anderson of American University and Matthew Waxman of Columbia, who pointed out that an outright ban “trades whatever risks autonomous weapon systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that might make the use of force more precise and less harmful for civilians caught near it.”