Can we make our robots less biased than us?

“Bomb squad” robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. (In 2018, police officers in Dixmont, Maine, ended a shootout in a similar manner.) The robotics profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.


“A key facet of the case is the man happened to be African-American,” Ayanna Howard, a robotics researcher at Georgia Tech, and Jason Borenstein, a colleague in the university’s school of public policy, wrote in a 2017 paper titled “The Ugly Truth About Ourselves and Our Robot Creations” in the journal Science and Engineering Ethics.

Like almost all police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed in labs around the world, and they will use artificial intelligence to do much more. A robot with algorithms for, say, facial recognition, or predicting people’s actions, or deciding on its own to fire “nonlethal” projectiles is a robot that many researchers find problematic. The reason: Many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.
