FTC Bans Rite Aid From Using AI Facial Recognition to Fight Shoplifting

As we’ve covered here repeatedly, retail outlets have been hit by a growing wave of shoplifting and mass retail theft. This is particularly true of pharmacies, where thieves can find all sorts of goods that are easily resold on the streets, not to mention prescription drugs in some cases. All too often, particularly in larger cities, municipal governments haven’t seemed interested in unleashing law enforcement to deal with the problem. Rite Aid hasn’t been immune to this scourge, but rather than closing stores like some of its competitors, the company took matters into its own hands and tried to do something about it. It invested in facial recognition technology powered by artificial intelligence that scanned the faces of people on the premises, matching them against a database of known shoplifters and “problematic shoppers.” But now the Federal Trade Commission (FTC) has been unleashed on them, and the agency has ordered Rite Aid to stop using the software for surveillance purposes. Why? Because it’s racist, of course.

The Federal Trade Commission (FTC) on Tuesday banned Rite Aid from using facial recognition powered by artificial intelligence (AI) for surveillance purposes for five years following charges the retailer’s use of AI lacked appropriate safeguards and falsely tagged customers as shoplifters.

In a complaint filed in federal court, the FTC argued that Rite Aid used AI-based facial recognition tools to identify customers who may have engaged in shoplifting or other problematic behavior. The agency said that Rite Aid failed to put safeguards in place, harming consumers who were falsely accused of wrongdoing by employees after the facial recognition technology mistakenly flagged them as matching someone previously identified as a shoplifter or other troublemaker.

The FTC said the facial recognition system “generated thousands of false-positive matches” and that it “sometimes matched customers with people who had originally been enrolled in the database based on activity thousands of miles away, or flagged the same person at dozens of different stores” all across the country.

Concerns over errors in facial recognition software have been with us from the beginning. And if we’re being honest, the earlier versions (circa 2015-2018) were pretty bad in some cases, with Amazon’s software being among the worst, often almost hilariously so. And the errors being produced could easily lead some to believe that “racism” was involved. In one 2019 test of Amazon’s software, the system correctly identified the gender of white males virtually 100% of the time. But with minorities, the accuracy dropped sharply. In the case of Black women, roughly a third of the time the system was unable to even confirm they were female, to say nothing of correctly matching their faces.

Other crazy testing failures followed. In a separate 2019 test, the same software (known as Rekognition) was fed images of California legislators and matched against a database of mugshots of known criminals and suspects. Out of 120 legislators, the system flagged 26 of the elected officials as matching criminal suspects. (We’re talking about California here, so was it really that far off?) So yes, these systems have had some problems.
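For readers curious about the mechanics, the usual culprit in these false matches is a similarity threshold set too loose: the system compares a numerical “embedding” of the face on camera against every enrolled face and raises a flag if the closest one clears the cutoff. The sketch below is a toy illustration only, not Rite Aid’s or Amazon’s actual system; the random vectors standing in for face embeddings, the watchlist size, and the threshold values are all made up for demonstration. It shows how loosening the cutoff inflates false positives against shoppers who aren’t on the watchlist at all.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_embedding(dim=128):
    """Stand-in for a face embedding: a random unit vector.
    Real systems derive these from a neural network."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

# Hypothetical watchlist of 1,000 enrolled "shoplifter" embeddings.
watchlist = np.stack([random_embedding() for _ in range(1000)])

def flag(probe, gallery, threshold):
    """Flag the probe if its best cosine similarity against the
    gallery clears the threshold (the basic 1:N matching pattern)."""
    sims = gallery @ probe  # cosine similarity, since vectors are unit-length
    return float(sims.max()) >= threshold

# 500 innocent shoppers, none of whom are on the watchlist,
# so every flag raised below is a false positive.
shoppers = [random_embedding() for _ in range(500)]

for threshold in (0.50, 0.30, 0.20):
    fp = sum(flag(p, watchlist, threshold) for p in shoppers)
    print(f"threshold {threshold:.2f}: {fp}/500 innocent shoppers flagged")
```

The exact numbers are an artifact of the toy setup, but the trade-off is real: every notch an operator loosens the cutoff to catch more actual thieves buys a corresponding surge in innocent people flagged, and across dozens of stores scanning thousands of faces a day, even a small per-scan error rate compounds into the “thousands of false-positive matches” the FTC describes.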

But Rite Aid is pushing back, arguing that the FTC hasn’t done its homework. The system the agency examined for this report was a pilot that Rite Aid installed in a limited number of stores for a trial run, and the company pulled it three years ago. The current system is, according to the company, far more reliable and hasn’t been producing the same sorts of mistakes.

I was unable to find any recently published test results for the current system, but the underlying software has clearly improved in the past few years. And while AI is far from perfect, adding it to these systems has greatly expanded what the technology can do.

If the police are unable to stop this sort of mass theft, why should we stop retailers from attempting to improve security and report violations on their own? What’s the worst that could happen? Someone may be misidentified by the software and asked for their ID. Is that really such a big deal? An apology and a note to the software manufacturer should suffice. The alternative is to give up and start shutting down even more stores. And then where will people shop? This simply seems like more socially suicidal behavior on the part of the government.
