
Can facial recognition software predict your political affiliation?

Much to the consternation of liberals and most libertarians, I’ve long been a proponent of the development of facial recognition software when used responsibly by law enforcement agencies. The potential benefits in terms of quickly solving serious crimes and identifying suspects far outweigh any concerns over people’s “digital privacy” when out in the public square, at least in my opinion. But when that software is unleashed in the private sector, even I start getting a queasy feeling sometimes. That may be the case with a new development from Stanford University, where a researcher claims that his AI algorithm is able to analyze pictures of people harvested from social media and identify their political persuasion (conservative versus liberal) in an impressively large percentage of cases. The possibilities are disturbing, to put it mildly. (The Debrief)

A Stanford University researcher claims facial recognition software can reasonably predict a person’s political affiliation based solely on facial features.

In the study published in the Nature journal Scientific Reports, Dr. Michal Kosinski says facial recognition algorithms could be used to reasonably predict people’s political views. Using over 1 million Facebook and dating site profiles from users based in Canada, the U.S., and the U.K., Kosinski says his algorithm could correctly identify one’s political orientation with 72% accuracy.

“Political orientation was correctly classified in 72% of liberal–conservative face pairs, remarkably better than chance (50%), human accuracy (55%), or one afforded by a 100-item personality questionnaire (66%),” Kosinski notes in the study.

As Tim McMillan (the reporter covering the story) points out, the author of the paper has a bit of a controversial past, leading to questions about these results. Dr. Kosinski published a different study in 2017 claiming that the facial recognition software he was using could correctly identify a person’s sexual orientation. That seems impossible to me, but then again, I’m not a scientist. Also, I have absolutely zero “gaydar” abilities, as my gay friends like to say. I was absolutely stunned when Anderson Cooper came out because I wouldn’t have guessed that in a million years, though those same friends assured me that “everyone had known it” for ages.

Getting back to this new study, a 72% accuracy rate is pretty impressive, I must admit. How the software manages this feat, however, isn’t very well explained. Kosinski fed more than one million images captured from social media postings into the system, and the algorithm examined each one, measuring more than 2,000 facial “descriptors” ranging from the shapes and sizes of ears, noses, and other features to shades of color, hair length and style, and so on. Those characteristics were then correlated with the subjects’ declared political leanings, and a predictive formula was built from the results.
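For what it’s worth, here’s a rough sketch of what that kind of “formula building” looks like in code. This is my illustration, not anything from the paper: it assumes the 2,000-odd descriptors have already been boiled down to numbers, uses stand-in random data, and plugs in an off-the-shelf logistic regression classifier, so don’t mistake it for Kosinski’s actual method.

```python
# Rough sketch of the kind of pipeline described above: numeric facial
# descriptors per profile photo, fitted against each user's self-declared
# political leaning. The random data and logistic regression here are
# illustrative assumptions, not the study's actual features or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Stand-in data: one row per photo, ~2,000 descriptors per row
# (ear/nose measurements, color shades, hair length, etc. in the real study).
n_profiles, n_descriptors = 5_000, 2_000
X = rng.normal(size=(n_profiles, n_descriptors))
y = rng.integers(0, 2, size=n_profiles)  # 0 = liberal, 1 = conservative (declared)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# On pure noise this hovers around 50%, i.e. chance. The study's claim is that
# real facial descriptors push the number up to roughly 72%.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```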

So is this really dangerous or just silly? The thing is, there’s simply no valid reason for such a program to exist, at least from a law enforcement perspective. It’s not illegal to be either liberal or conservative… at least not yet. I suppose that if this application were released into the wild, it might be useful for targeted marketing, identifying potential customers who might be more or less likely to purchase various sorts of political paraphernalia. That could even extend to things like survivalist gear for “preppers,” who generally tend to be of a more conservative bent.

A more sinister possibility is that people could use it to find doxxing targets or subjects to be “canceled” at their jobs or in other cultural arenas. File that under “things that make you go hmmm.”

But the major issue here comes back to those accuracy figures. 72% certainly sounds impressive compared to random chance, but it still means that the algorithm is going to wind up being wrong more than one in four times. That’s a lot of errors. You may recall some of the early testing of Amazon’s hilariously inaccurate facial recognition software when it incorrectly identified a California lawmaker as a wanted criminal. Other errors were less amusing. Also, the software was notoriously bad at correctly identifying women and people of color, though it did pretty well with white males.
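To put that error rate in plain numbers (my back-of-the-envelope math, not figures from the study):

```python
# What a 72% hit rate means in raw counts (illustrative numbers only).
accuracy = 0.72
pairs_checked = 1_000

wrong = round(pairs_checked * (1 - accuracy))
print(f"Out of {pairs_checked:,} liberal-conservative pairs, about {wrong} are misclassified.")
# -> about 280 misclassified, i.e. more than one bad call in every four.
```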

Sadly, this algorithm is now a thing that exists, and we already know that the internet is forever. Now that it’s out in the wild, somebody will end up using it sooner or later. Perhaps we should all change our profile pictures to those of our dogs and cats before things really go to hell in a handbasket.
