Robots with AI can become racist and sexist

While it was a public relations disaster for Microsoft, Tay demonstrated an important issue with machine-learning artificial intelligence: that robots can be as racist, sexist and prejudiced as humans if they acquire their knowledge from text written by humans.


Fortunately, scientists may now have discovered a way to better understand how artificial intelligence algorithms reach their decisions, which could help prevent such bias.

AI researchers sometimes call the complex process a machine-learning algorithm goes through when reaching a decision the “black box” problem, because the system cannot explain the reason for its actions. To better understand it, scientists at Columbia and Lehigh Universities reverse engineered a neural network in order to debug and error-check it.
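For readers curious what “looking inside the black box” can mean in practice, here is a minimal, hypothetical sketch in Python (using the NumPy library). It is not the Columbia and Lehigh researchers' actual system; it simply illustrates the general idea of recording which internal neurons fire for each test input, so unusual behavior can be flagged for review. All weights and inputs below are made-up placeholders.

```python
import numpy as np

# Minimal sketch (not the researchers' actual method): a tiny two-layer
# network whose internals we can inspect. Even with full access to the
# weights below, it is hard to say *why* the network prefers one answer
# over another -- that is the "black box" problem in miniature.

rng = np.random.default_rng(0)

# Hypothetical "trained" weights for a 4-input, 3-hidden-neuron, 2-output network.
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

def forward(x):
    """Run one input through the network, keeping the hidden activations."""
    hidden = np.maximum(0, x @ W1)   # ReLU hidden layer
    output = hidden @ W2             # raw scores for two possible decisions
    return hidden, output

# A crude form of error-checking: track which hidden neurons ever fire
# across a test set, so inputs that exercise rarely-used neurons can be
# flagged for closer human review.
seen_active = np.zeros(3, dtype=bool)

for x in rng.normal(size=(10, 4)):   # ten synthetic test inputs
    hidden, output = forward(x)
    seen_active |= hidden > 0
    decision = int(np.argmax(output))
    print(f"decision={decision}, active neurons={np.flatnonzero(hidden > 0)}")

print("hidden neurons ever activated by the test set:", np.flatnonzero(seen_active))
```

In a real system the networks are vastly larger, but the same principle of tracing which internal pathways a decision takes is one way researchers try to open the black box.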
