These observations—and alarm at a possible arms race around these emerging and future weapons—lead many to argue that the solution lies in some form of multilateral treaty. A proposed treaty might be prohibitory, along the lines of the Ottawa landmines convention, or it might delineate acceptable uses of autonomous systems. Human Rights Watch recently called for negotiation of a sweeping multilateral treaty banning outright the use, production, and even development of “fully autonomous weapons” programmed to select and engage targets without human intervention.

Ambitions for multilateral treaty regulation in this context are misguided for several reasons. First, limitations on autonomous military technologies, although quite likely to win wide superficial acceptance among non-fighting states and some non-governmental groups and actors, will have little traction among the states whose actions matter most in practice. Even states and groups inclined to support treaty prohibitions or limitations will find it difficult to agree on scope or definitions, because lethal autonomy will be introduced incrementally. And, of course, there are the general challenges of compliance, including the collective action problems of failure and defection that afflict all such treaty regimes.

There are also serious humanitarian costs to prohibition, given the possibility that autonomous weapons systems could in the long run prove more discriminating than, and ethically preferable to, the alternatives. Blanket prohibition forecloses such benefits. This is particularly so if prohibitions extend even to the development of components or technologies that might incrementally yield much greater humanitarian protection in war.

Nevertheless, the dangers associated with evolving autonomous robotic weapons are very real, and the United States has a serious interest in guiding the development of international norms in this area.