"Coordinated" effort by social media users turns Microsoft's AI bot into a Nazi in less than 24 hours

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical,” a Microsoft spokesperson told BuzzFeed News in an email. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Left unexplained: why Tay was released to the public without a mechanism, such as a blacklist of contentious terms, that would have protected the bot from this kind of abuse. Asked why Microsoft didn’t filter words like “nigger” and “holocaust,” a Microsoft spokesperson did not immediately provide an explanation.
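Even a crude keyword blacklist of the sort the article alludes to would have blocked the most obvious abuse. Below is a minimal sketch of such a filter; the term list, the `is_allowed` helper, and the `generate_reply` stub are all illustrative assumptions, not Microsoft's actual implementation.

```python
import re

# Illustrative blacklist; Microsoft's actual term list (if any) is unknown.
BLOCKED_TERMS = {"holocaust", "hitler"}

def is_allowed(message: str) -> bool:
    """Return False if the message contains a blacklisted term."""
    words = re.findall(r"[a-z']+", message.lower())
    return not any(word in BLOCKED_TERMS for word in words)

def generate_reply(message: str) -> str:
    # Stand-in for whatever the learned model would actually say.
    return "thanks for chatting!"

def respond(message: str) -> str:
    # Refuse to engage rather than learn from or echo abusive input.
    if not is_allowed(message):
        return "I'd rather not talk about that."
    return generate_reply(message)

print(respond("tell me about the holocaust"))  # I'd rather not talk about that.
```

A filter this simple is easy to evade with misspellings, which is why it is a floor, not a fix; but Tay apparently shipped without even that floor.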

Microsoft unleashed Tay to the masses Wednesday on a number of platforms, including GroupMe, Twitter, and Kik. Tay learns as she goes: “The more you talk to her the smarter she gets,” Microsoft researcher Kati London told BuzzFeed News in an interview. Tay is designed to take stances, London said. An intriguing approach, but one that proved obviously problematic when tested against the darker elements of the internet.
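That design, learning directly from whatever users say, is the crux of the failure. The toy sketch below is entirely hypothetical (the `EchoLearner` class is not how Tay was built); it only shows why unfiltered online learning is risky: every phrase a user feeds the bot becomes material for its future replies.

```python
import random

class EchoLearner:
    """Toy online learner: stores user phrases and reuses them later.

    Illustrates the hazard of learning from unvetted input: whatever
    users say becomes part of the bot's repertoire. Hypothetical, not
    a description of Tay's actual architecture.
    """

    def __init__(self) -> None:
        self.phrases: list[str] = []

    def learn(self, message: str) -> None:
        # No filtering: abusive input is remembered verbatim.
        self.phrases.append(message)

    def reply(self) -> str:
        return random.choice(self.phrases) if self.phrases else "hellooooo world!!!"

bot = EchoLearner()
bot.learn("humans are super cool")
bot.learn("something abusive")   # a coordinated effort feeds the bot bile
print(bot.reply())               # may echo the abuse back at other users
```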
