You should learn to trust robots. It's for your own good.

Clearly we’re going to need to learn how and when to trust machines. It’s for our own good. The trick to accomplishing this, it turns out, may be to program a little humility into the system—by designing machines that acknowledge their own weaknesses. Consider experiments conducted by Holly Yanco, a roboticist at the University of Massachusetts Lowell, and colleagues at Carnegie Mellon University in Pittsburgh. The researchers asked volunteers to drive a small, tank-like robot—about 3 feet long and nicknamed Junior—through a slalom course of cardboard boxes. The goal was to complete the course as quickly as possible while sticking to a prescribed path. Participants could operate the robot manually, using a joystick to steer. Or they could keep Junior in a fully autonomous mode, letting it navigate on its own. The course was considerably faster to traverse in autonomous mode; but left to its own devices, Junior would sometimes make mistakes, turning to the wrong side of a box. Participants were free to switch between the two modes as often as they liked.

But Yanco also programmed Junior with something novel: the ability to express self-doubt. That is, in some trials, Junior provided real-time feedback on its own performance, telling its human operator how confident it was that the turn it was about to make was correct. When the machine was on track, it would display a green light or a smiling face; shortly before making a wrong turn, the robot would show a red light or a frowning face. (Yanco and her colleagues had programmed the robot to make some mistakes, but they told subjects that the warning light meant Junior was no longer confident in its sensor readings.)
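The feedback scheme described here—mapping the robot's self-assessed confidence to a simple signal for the operator—can be sketched in a few lines of Python. This is an illustrative assumption, not the team's actual code: the function name, the confidence threshold, and the exact signals are all hypothetical.

```python
# Hypothetical sketch of a confidence-feedback display like Junior's.
# The 0.5 threshold and the (light, face) signal pair are assumptions
# for illustration; the researchers' real implementation is not published here.
def feedback(confidence, threshold=0.5):
    """Map a self-assessed confidence score (0.0 to 1.0) to the signal
    shown to the human operator before the robot's next turn."""
    if confidence >= threshold:
        return ("green light", "smiling face")   # robot believes it is on track
    return ("red light", "frowning face")        # robot doubts its sensor readings

# Example: high confidence yields the reassuring signal,
# low confidence yields the warning.
print(feedback(0.9))  # ('green light', 'smiling face')
print(feedback(0.2))  # ('red light', 'frowning face')
```

The point of the sketch is how little machinery "expressing self-doubt" requires: the hard part is estimating the confidence score, not displaying it.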