“The maximum we can achieve is behavioural intelligence,” Mohammad says. In other words, we can create a robot that behaves like a living, thinking being. “It’s up to you to decide if someone is actually inside or not.”
The idea may seem a little far-fetched, but in fact it takes very little for a person to see human qualities in a machine.
Watching demonstration videos at Nishida’s lab, I experience a moment of surprising emotional impact. In one clip, a human points down to an object on a bench in order to get a robot to interact with it. The robot, a fairly unimpressive looking pile of nuts and bolts, doesn’t yet know what a pointing motion means.
But it does know to follow a human’s gaze. Taking its cue from the person, the robot dips its head and glances at the bench. Instantly, the robot no longer seems a lifeless machine. It has interacted with the human in a way that looks as if it is responding to his desire or interest in something. The sense that the two share an emotional connection, a common interest, is inescapable. Even though I know it isn’t true, my human interpretation of what I see tells me otherwise, and I feel a twinge of empathy for the robot.
This sort of response is inevitable, says David Levy, a British chess master who has written extensively on AI. Humans, especially children, have a strong tendency to form attachments to objects, including computers.