Democrats target Artificial Intelligence over... bias?

Cory Booker and some of his Senate colleagues would like to introduce a new area of government regulation in the tech industry. We need to be keeping a closer eye on the development of Artificial Intelligence, but not because of the coming robot revolution. The problem, you see, is that the computer algorithms are (wait for it)… racist. And that justifies some sort of government oversight of the tech sector beyond what we already have in place today. (Associated Press)

Congress is starting to show interest in prying open the “black box” of tech companies’ artificial intelligence with oversight that parallels how the federal government checks under car hoods and audits banks.

One proposal introduced Wednesday and co-sponsored by a Democratic presidential candidate, Sen. Cory Booker, would require big companies to test the “algorithmic accountability” of their high-risk AI systems, such as technology that detects faces or makes important decisions based on your most sensitive personal data…

“When the companies really go into this, they’re going to be looking for bias in their systems,” [Senator Ron] Wyden said. “I think they’re going to be finding a lot.”

I’d like to have more fun with this subject, but the fact is that Booker and Wyden are right, at least about some of this software. There are still big problems with facial recognition programs, for example. I wrote about Amazon’s facial recognition software back in January, and the results of independent testing were pretty shocking.

Researchers found that the Amazon software was able to correctly identify a person based on a scan of their face with zero errors… but only if the subject was a white male. White females were misidentified seven percent of the time. The same test done on black or Hispanic male subjects produced an even higher error rate. And for black women, in nearly one-third of the test cases the software wasn’t even able to identify them as women, let alone get their identity correct.

So the question is… why? No matter how “intelligent” the software may seem, it’s still only emulating intelligence. Until the AI eventually wakes up, it doesn’t form opinions or preferences and thus is incapable of becoming “racist” on its own. So either it inherited these preferences from somewhere or there’s a flaw in the programming we haven’t figured out yet. Might the programmers have some sort of unconscious (or perhaps conscious) bias that steers how they develop the program? Could it be that some faces simply offer fewer distinguishing data points for the software to collect? (There have been studies suggesting some races have a wider variety of nose sizes and shapes based on the climate where those races evolved.)
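To see how that first explanation could work without a single bigoted line of code, here’s a minimal sketch: a toy classifier on made-up data. It’s purely illustrative (Python with NumPy, invented numbers, nothing resembling Amazon’s actual system). The idea is simple: if one group supplies almost all of the training examples, a model tuned to minimize overall error settles on a decision rule that works for that group and stumbles on everyone else.

import numpy as np

rng = np.random.default_rng(42)

def sample_group(n, offset):
    # Fake 1-D "face feature": class 1 sits above class 0, but the whole
    # group is shifted by `offset`, so its ideal decision threshold differs.
    y = rng.integers(0, 2, size=n)
    x = rng.normal(offset + (2 * y - 1), 1.0)
    return x, y

# Group A outnumbers group B 50-to-1 in the training set.
xa, ya = sample_group(5000, offset=0.0)
xb, yb = sample_group(100, offset=1.5)
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Training": pick the threshold that minimizes overall error. Because
# group A supplies 98 percent of the data, the threshold lands near
# group A's sweet spot and far from group B's.
candidates = np.linspace(-2.0, 4.0, 601)
errors = [np.mean((x_train > t).astype(int) != y_train) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

def accuracy(x, y, t):
    return np.mean((x > t).astype(int) == y)

# Fresh test data for each group exposes the gap.
xa_t, ya_t = sample_group(2000, offset=0.0)
xb_t, yb_t = sample_group(2000, offset=1.5)
print(f"learned threshold: {threshold:.2f}")
print(f"group A accuracy:  {accuracy(xa_t, ya_t, threshold):.1%}")
print(f"group B accuracy:  {accuracy(xb_t, yb_t, threshold):.1%}")

On these made-up numbers, group A lands in the mid-80s while group B sinks toward the mid-60s, and nobody told the model to prefer anyone. Balance the two groups in the training data and the gap largely closes. That’s “inherited bias” in miniature.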

Either way, this is a mystery I’m sure we’ll eventually solve. But should the government be introducing regulations to prevent racist software from infiltrating every aspect of our technological lives? The question is probably moot. There’s nothing Congress likes more than something new to regulate.
