The artificial intelligence that wasn't

It wasn’t meant to be an applied test, not in 1950 and not now, Russell said. But that’s how the contest used it. In the contest held last weekend by the University of Reading, 30 judges each took part in 10 conversations—five with machines and five with humans—and voted on whether each conversation partner was a machine or a human. Eugene Goostman, a program imitating a 13-year-old Ukrainian boy, fooled a third of the judges, enough for the organizers to declare the test “passed.”


That Eugene was programmed to present as a non-native English speaker gave it an advantage, as did its supposed age of 13. We expect different things from a pubescent boy whose first language isn’t English than from an adult raised with the language. So Eugene already had a leg up: an interlocutor could explain away any failed communication.

But it’s not just that. The definition of “passing” the Turing test used for the examination is a particular interpretation of the paper’s prediction. Turing wrote that in 50 years, it would be possible for computers to play the “imitation game”—imitating human speech—“so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.” While this reads like a wildly optimistic prediction about computer intelligence, the organizers of the contest took it as a literal criterion. The 30 judges spent five minutes talking to humans or machines, then recorded whether they believed their conversation partner was human.
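The organizers' reading of Turing's prediction reduces to a simple threshold check. A minimal sketch, using the figures reported here (30 judges, a third fooled, a 30% misidentification threshold derived from Turing's "no more than 70 percent chance" wording); the function name and numbers are illustrative, not the organizers' actual scoring code:

```python
def passes_turing_threshold(judges_fooled: int, total_judges: int,
                            threshold: float = 0.30) -> bool:
    """Return True if the share of judges fooled exceeds the threshold.

    The 0.30 default mirrors Turing's prediction that an average
    interrogator would have no more than a 70 percent chance of
    identifying the machine correctly after five minutes.
    """
    return judges_fooled / total_judges > threshold

# A third of 30 judges (10) narrowly clears the 30% bar.
print(passes_turing_threshold(10, 30))  # True: 10/30 ≈ 0.333 > 0.30
print(passes_turing_threshold(9, 30))   # False: 9/30 = 0.30, not above it
```

Framed this way, the "pass" hinges on a single percentage point: nine fooled judges instead of ten would have fallen exactly on the line.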


