Robot weapons: What's the harm?

Then there’s the question of whether a machine — say, an A.I.-enabled helicopter drone — might be more effective than a human at making targeting decisions. In the heat of battle, a soldier may be tempted to return fire indiscriminately, in part to save his or her own life. By contrast, a machine won’t grow impatient or scared, be swayed by prejudice or hate, willfully ignore orders or be motivated by an instinct for self-preservation.

Indeed, many A.I. researchers argue for speedy deployment of self-driving cars on similar grounds: Vigilant electronics may save lives currently lost because of poor split-second decisions made by humans. How many soldiers in the field might die waiting for the person exercising “meaningful human control” to approve an action that a computer could initiate instantly?

Neither human nor machine is perfect, but as the philosopher B. J. Strawser has recently argued, leaders who send soldiers into war “have a duty to protect an agent engaged in a justified act from harm to the greatest extent possible, so long as that protection does not interfere with the agent’s ability to act justly.” In other words, if an A.I. weapons system can get a dangerous job done in the place of a human, we have a moral obligation to use it.
