Its vice-chairman, Robert Work, a former deputy secretary of defense, said autonomous weapons are expected to make fewer mistakes than humans do in battle, leading to fewer casualties and fewer skirmishes caused by target misidentification.
“It is a moral imperative to at least pursue this hypothesis,” he said.
The discussion waded into a controversial frontier of human rights and warfare. For about eight years, a coalition of non-governmental organisations has pushed for a treaty banning “killer robots”, saying human control is necessary to judge attacks’ proportionality and assign blame for war crimes. Thirty countries including Brazil and Pakistan want a ban, according to the coalition’s website, and a UN body has held meetings on the systems since at least 2014.
While autonomous weapon capabilities are decades old, concern has mounted with the development of AI to power such systems, along with research uncovering biases in AI and documented examples of the software's abuse.