In a paper published last year, Kosinski and a Stanford computer scientist, Yilun Wang, reported that a machine-learning system was able to distinguish between photos of gay and straight people with a high degree of accuracy. They used 35,326 photographs from dating websites and what Kosinski describes as “off-the-shelf” facial-recognition software.
Presented with two pictures – one of a gay person, the other straight – the algorithm could correctly distinguish the two in 81% of cases involving images of men and 74% of cases involving photographs of women. Human judges, by contrast, identified the gay and straight people correctly in only 61% of cases for men and 54% for women. When the algorithm was shown five facial images per person in the pair, its accuracy increased to 91% for men and 83% for women. “I was just shocked to discover that it is so easy for an algorithm to distinguish between gay and straight people,” Kosinski tells me. “I didn’t see why that would be possible.”…
One vocal critic of that defence is the Princeton professor Alexander Todorov, who has conducted some of the most widely cited research into faces and psychology. He argues that Kosinski’s methods are deeply flawed: the patterns picked up by an algorithm comparing thousands of photographs may have little to do with facial characteristics. In a mocking critique posted online, Todorov and two AI researchers at Google argued that Kosinski’s algorithm could have been responding to patterns in people’s makeup, beards or glasses, or even the angle at which they held the camera. Self-posted photos on dating websites, Todorov points out, project a number of non-facial clues.