The AI that has nothing to learn from humans

As many will remember, AlphaGo—a program that used machine learning to master Go—decimated world champion Ke Jie earlier this year. Then, the program’s creators at Google’s DeepMind let it continue to train by playing millions of games against itself. In a paper published in Nature earlier this week, DeepMind revealed a new version, christened AlphaGo Zero, that picked up Go from scratch, without studying any human games at all. AlphaGo Zero took a mere three days of self-play to reach the point where it was pitted against an older version of itself and won 100 games to zero.

Now that AlphaGo’s arguably got nothing left to learn from humans—now that its continued progress takes the form of endless training games against itself—what do its tactics look like, in the eyes of experienced human players? We might have some early glimpses into an answer.

AlphaGo Zero’s latest games haven’t been disclosed yet. But several months ago, the company publicly released 55 games that an older version of AlphaGo played against itself. (Note that this is the incarnation of AlphaGo that had already made quick work of the world’s champions.) DeepMind called its offering a “special gift to fans of Go around the world.”
