Is your software racist?

Rich Caruana, a Microsoft researcher who has worked to better understand the internal mechanisms of algorithms, said that omitting variables like gender and race isn’t always the solution to countering bias. In some cases, like medical predictions, these variables can be important to accuracy. And other variables, like ZIP codes, can correlate with race and introduce bias into models that don’t explicitly include race at all.
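To make the proxy problem concrete, here is a minimal, hypothetical sketch (not from the article, with entirely made-up data) of how a variable like ZIP code can reintroduce racial disparity into a model that never sees race:

```python
# Hypothetical illustration: race is omitted from the model, but a correlated
# proxy (ZIP code) lets the disparity back in. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

race = rng.integers(0, 2, n)                               # hidden from the model
zip_code = np.where(rng.random(n) < 0.9, race, 1 - race)   # 90% correlated proxy
income = rng.normal(50 + 10 * race, 5, n)                  # historical disparity

# Past outcome (e.g., loan approval) that reflects historical discrimination.
approved = (income + 5 * race + rng.normal(0, 5, n)) > 55

# Train on ZIP code and income only; race is "omitted".
X = np.column_stack([zip_code, income])
model = LogisticRegression().fit(X, approved)

# Predicted approval rates still differ sharply by race via the ZIP proxy.
pred = model.predict(X)
print("approval rate, group 0:", pred[race == 0].mean())
print("approval rate, group 1:", pred[race == 1].mean())
```

Running the sketch shows a gap in predicted approval rates between the two groups even though race was never a model input, which is the dynamic Caruana describes.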


In Europe, a sweeping new set of privacy regulations slated to take effect this spring also attempts to address the issue of transparency. One of its provisions would offer users a “right to explanation” when their data is processed by an automated system. But while that sounds straightforward, imposing such a broad requirement poses its own challenges.

“It’s not yet clear how this will work,” Caruana said. He worries that as appealing as “transparency” may sound, there’s no easy way to unpack the algorithm inside AI software in a way that makes sense to people. “There are cases where the law can go too far … too soon. Suddenly, everyone in the EU has a legal right to an explanation. Most of the time, we wouldn’t know how to do it,” he said.
