March 25, 2016 Volume 22, Number 12
Research and Education
General Interest
Network Tools
In the News
In the News
Artificial Intelligence Beats a World-Class Go Master
In Two Moves, AlphaGo and Lee Sedol Redefined the Future
Mastering the game of Go with deep neural networks and tree search
In the Age of Google DeepMind, Do the Young Go Prodigies of Asia Have a Future?
YouTube: DeepMind
Rage Against The Machines
IBM100: Deep Blue

The game of Go, which originated in China more than two and a half millennia ago and was formalized in 15th-century Japan, is sometimes considered the most deceptively complex board game in history. Statisticians estimate that a Go board can take on roughly 10 to the 170th power possible configurations, and learning to play well takes years: to achieve mastery, students in Korea and other countries attend schools where they study nothing but Go for 50 or more hours a week. It came as something of a surprise, then, when AlphaGo, a program designed by Google's artificial intelligence lab, DeepMind, defeated one of the top Go players in the world, winning four of the five games in their match. The news sent shockwaves through the Go and artificial intelligence communities alike. AI aficionados rejoiced. Go professionals despaired. And many commentators drew parallels to the 1996 match in which world chess champion Garry Kasparov lost a game to IBM's chess-playing computer, Deep Blue. [CNH]

The first link takes readers to an article from Wired, in which journalist Cade Metz offers a fascinating account of the brilliant moves that defined the games between AlphaGo and the human Go master Lee Sedol. Next, an article published in Nature outlines the intricacies of the program itself, including how it learns and corrects mistakes by combining deep neural networks with a tree search procedure. The third article, written by Dawn Chan and published in the New Yorker, delves into the training that Go prodigies undertake in Korea and asks whether advances in artificial intelligence will make such austerities obsolete. The fourth link takes readers to the DeepMind YouTube channel, where the live-streamed games between Lee Sedol and AlphaGo can be watched in full, along with shorter summaries of each game. Next, Nate Silver, writing for FiveThirtyEight, reviews the long history of humanity's relationship with artificial intelligence, as well as the specifics of how programmers designed a chess computer that could beat a world champion. Finally, the IBM website offers insight into the workings of Deep Blue, the first computer to defeat a reigning world chess champion.
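For readers curious about what "tree search" means in this context, the sketch below is a bare-bones Monte Carlo tree search (the UCT algorithm) applied to a toy counting game. It is only an illustration, not AlphaGo's code: the game, constants, and function names are invented for this example, and where AlphaGo steers its search with learned policy and value networks, this sketch falls back on uniform random playouts.

import math
import random

TARGET = 10  # toy game: players alternately add 1 or 2; whoever reaches 10 wins


def legal_moves(counter):
    return [m for m in (1, 2) if counter + m <= TARGET]


class Node:
    """One state in the search tree."""

    def __init__(self, counter, player, parent=None, move=None):
        self.counter = counter    # current count
        self.player = player      # player to move here (+1 or -1)
        self.parent = parent
        self.move = move          # move that produced this state
        self.children = []
        self.visits = 0
        self.wins = 0.0           # wins for the player who moved into this node

    def is_terminal(self):
        return self.counter == TARGET


def uct_select(node):
    # UCB1 rule: trade off the observed win rate against exploring
    # rarely visited children.
    return max(
        node.children,
        key=lambda c: c.wins / c.visits
        + 1.4 * math.sqrt(math.log(node.visits) / c.visits),
    )


def expand(node):
    tried = {c.move for c in node.children}
    move = random.choice([m for m in legal_moves(node.counter) if m not in tried])
    child = Node(node.counter + move, -node.player, parent=node, move=move)
    node.children.append(child)
    return child


def rollout(node):
    # Play random moves to the end of the game; AlphaGo replaces this
    # step with an evaluation from its learned value network.
    counter, player = node.counter, node.player
    while counter < TARGET:
        counter += random.choice(legal_moves(counter))
        player = -player
    return -player  # the player who made the final move wins


def backpropagate(node, winner):
    while node is not None:
        node.visits += 1
        if node.parent is not None and node.parent.player == winner:
            node.wins += 1
        node = node.parent


def best_move(counter, player, iterations=2000):
    root = Node(counter, player)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully expanded parts of the tree.
        while not node.is_terminal() and len(node.children) == len(legal_moves(node.counter)):
            node = uct_select(node)
        # 2. Expansion: add one untried move.
        if not node.is_terminal():
            node = expand(node)
        # 3. Simulation: estimate who wins from the new state.
        winner = rollout(node)
        # 4. Backpropagation: update statistics along the path taken.
        backpropagate(node, winner)
    return max(root.children, key=lambda c: c.visits).move


if __name__ == "__main__":
    # The winning strategy in this toy game is to leave the count at 1, 4, or 7,
    # so from a count of 0 the search should settle on the move 1.
    print(best_move(0, +1))

AlphaGo's actual pipeline, described in the Nature paper linked above, trains a policy network on expert games and through self-play and a value network to evaluate positions, then uses both to guide and prune a search loop much like the one in best_move.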