IBM bails out of "racist" facial recognition software development

The field of companies racing to develop reliable facial recognition software just became a bit less crowded. IBM CEO Arvind Krishna announced this week that his company would no longer work on such projects, while simultaneously sending a letter to Congress asking it to take action. But what action is he looking for? Congress isn’t particularly well known for its coding skills. No, Krishna isn’t pulling his company out of the game because the technological challenges are too daunting. He’s bailing out because the software is racist, you see. And he wants Congress to shun such applications as well. (CNBC)


IBM CEO Arvind Krishna called on Congress Monday to enact reforms to advance racial justice and combat systemic racism while announcing the company was getting out of the facial recognition business.

The decision for IBM to get out of the facial recognition business comes amid criticism of the technology, employed by multiple companies, for exhibiting racial and gender bias. Amazon’s own use of facial recognition was put to a shareholder vote last year, with 2.4% of shareholders voting in favor of banning the sale of the technology to government agencies amid privacy and civil rights concerns.

“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” Krishna wrote in the letter delivered to members of Congress late Monday.

Wow. Everyone is really racing to be at the front of the woke parade these days, aren’t they? Clearly Krishna has been listening to the leaders of the progressive movement, who all claim that facial recognition software is “racist” and should be banned from any and all use by law enforcement. Of course, I’m not sure who his message is intended to reach. It’s certainly not aimed at his shareholders, who likely won’t be thrilled with ditching an entire development line into which the company has already invested tens of millions of dollars. (As noted above, nearly 98% of Amazon’s stockholders opposed the idea of ditching the software when polled.)


The idea that software is capable of being “racist” is, of course, preposterous. It’s just lines of code. It does what it’s built to do, assuming the coder is competent. With that said, it’s absolutely true that early offerings in the facial recognition development race have had some embarrassing (and occasionally hilarious) failures. Amazon’s Rekognition software was unable to correctly match up minority and female test subjects in a significant number of cases, while getting an almost perfect score for white males.

The reasons for these problems are already well understood, and you can learn more about them from this article at CNET. Darker skin produces less lighting contrast than lighter skin. Women are far more likely to wear makeup that covers up wrinkles or lines, making them appear more “generic” to the software. They are also more likely than men to change their hairstyle, adding still more uncertainty to the mix.
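
For the technically inclined, here’s a rough sketch of how that sort of disparity is typically quantified: measure how often the system fails to match two photos of the same person, broken out by group, and compare the error rates. Everything below is an illustrative assumption on my part (simulated face “embeddings,” made-up group labels, an arbitrary match threshold), not any vendor’s actual code or data.

import numpy as np

rng = np.random.default_rng(0)

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

dim, n = 128, 500
# Simulated enrollment ("gallery") embeddings for two hypothetical groups.
gallery = {g: rng.normal(size=(n, dim)) for g in ("group_a", "group_b")}
# Probe photos of the same people; group_b gets noisier probes to mimic the
# lower-contrast imagery described above.
noise = {"group_a": 0.4, "group_b": 1.0}
probes = {g: gallery[g] + rng.normal(scale=noise[g], size=(n, dim)) for g in gallery}

threshold = 0.30  # distance above which a genuine pair is wrongly rejected

for group in gallery:
    dists = np.array([cosine_distance(gallery[group][i], probes[group][i]) for i in range(n)])
    fnmr = float(np.mean(dists > threshold))
    print(f"{group}: false non-match rate = {fnmr:.1%}")

The point of the toy exercise is that the same threshold that works fine for one group can fail badly for another. That is exactly the kind of gap the critics are measuring, and exactly the kind of gap more training data and better imaging tend to close over time.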

But the main thing to keep in mind is that these development projects take time, particularly when you’re talking about something as complicated as this. Accuracy in facial recognition programs is going to reach the required levels sooner or later. That’s the pattern every new technology follows. You may recall that in the early days of speech recognition, IBM’s own product in that line was one of several that interpreted the phrase “recognize speech” as “wreck a nice beach.” But with extensive development work we eventually arrived at the systems we employ today.


As for the “dangers” of misidentification when facial recognition software is used, as suggested in Krishna’s letter, I’ll just remind everyone of one thing. There has yet to be a single verified report of anyone being arrested, to say nothing of convicted, after being misidentified in that fashion. It just doesn’t happen. After the software makes a “mistake,” a human being has to review the images before any action can be taken, and law enforcement professionals quickly recognize that the software has picked the wrong candidate. So let’s not panic just yet, shall we?
