The more humanlike a robot seems in both appearance and ability, the easier it is for us to project human thoughts and feelings onto it (this effect is even more pronounced in Japan, where followers of Shinto or animism believe that objects can have souls). Anthropomorphizing is a natural human tendency, as our understanding of the world and everything in it is based on our own experiences. We personify all kinds of objects—we refer to a trusty vehicle as “old girl,” feel nagged when our alarm clocks scream at us to wake up, and experience irritation or sympathy as our dated computer limps along, struggling to obey our commands.
We do this with robots, too. MIT researcher Kate Darling conducts experiments in which people play with Pleos, small mechanized dinosaurs, and are then asked to “torture” them. Participants often can’t bear to do it and can’t watch when others do, even though they know Pleos can’t feel anything. The exercise really isn’t about the Pleo at all—it’s about the human participants and their feelings. It doesn’t matter that their attachment only goes in one direction. The more affection someone feels for an object or a robot, the stronger the tendency to anthropomorphize becomes. Think back to your favorite childhood toy—perhaps a stuffed animal or a blanket. How would you feel if someone ripped it apart? You’d experience some degree of anguish even though you know your stuffed animal can’t feel pain and doesn’t know what’s happening.
This is exactly what happens when humans interact with “social” or interactive robots that narrow the gap between machines and people by making sounds (Pleos’ whimpers contribute to people’s horror at their mistreatment), mimicking facial expressions, or reacting physically to their surroundings. And if you think only the bleeding hearts among us are susceptible to anthropomorphizing, think again.