In 1997, IBM’s Deep Blue made history as the first computer to beat a world chess champion, Garry Kasparov. Advances in AI have made chess-playing computers increasingly formidable since then.
A team including Jon Kleinberg, the Tisch University Professor of Computer Science, developed an artificially intelligent chess engine that offers a more enjoyable chess-playing experience. Instead of seeking to beat humans, the engine, called Maia, is trained to play like them.
Besides shedding light on the computer’s decision-making process, the chess engine could help humans improve their own play.
Co-author Ashton Anderson, assistant professor at the University of Toronto, said, “Current chess AIs don’t have any conception of what mistakes people typically make at a particular ability level. They will tell you all the mistakes you made – all the situations in which you failed to play with machine-like precision – but they can’t separate what you should work on. Maia chess engine has algorithmically characterized which mistakes are typical of which levels, and therefore which mistakes people should work on and which mistakes they probably shouldn’t, because they are still too difficult.”
In this project, the researchers aimed to build an AI that narrows the gap between human and algorithmic behavior by training it on traces of individual human moves.
Kleinberg said, “Chess has been described as the ‘fruit fly’ of AI research. Just as geneticists often care less about the fruit fly itself than its role as a model organism, AI researchers love chess because it’s one of their model organisms. It’s a self-contained world you can explore, and it illustrates many of the phenomena that we see in AI more broadly.”
The scientists trained the AI model on large numbers of moves recorded from online players. Doing so also produced a system more adaptable to different skill levels – a challenge for traditional AI.
Within each skill level, Maia matched human moves more than 50% of the time, with accuracy growing as skill increases – a higher rate than two popular chess engines, Stockfish and Leela, achieve. Maia was also able to capture what kinds of mistakes players at specific skill levels make, and at what skill level people stop making them.
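The move-matching figure above can be made concrete with a minimal sketch of the metric: accuracy is just the fraction of test positions where the engine’s predicted move is the one the human actually played. The function name and the toy moves are illustrative, not from the paper.

```python
# Illustrative sketch of move-matching accuracy: the fraction of positions
# where the model's predicted move equals the move the human played.

def move_match_accuracy(predictions, human_moves):
    """Return the fraction of positions where prediction == human move."""
    if not predictions:
        return 0.0
    matches = sum(p == h for p, h in zip(predictions, human_moves))
    return matches / len(predictions)

# Toy example (UCI move strings): the model matches 3 of 5 human moves.
preds = ["e2e4", "g1f3", "d2d4", "f1c4", "e1g1"]
humans = ["e2e4", "g1f3", "c2c4", "f1c4", "d2d3"]
print(move_match_accuracy(preds, humans))  # 0.6
```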
Scientists developed Maia by customizing Leela, an open-source system based on DeepMind’s AlphaZero program. By training different versions of Maia on games at different skill levels, they created nine bots designed to play humans with ratings between 1100 and 1900.
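A minimal sketch of the rating-binned setup described above: games are grouped into nine rating bands (1100 through 1900, one per hundred points), and a separate model would be trained on each band. The helper name and the toy game records are assumptions for illustration.

```python
# Illustrative sketch: bucket games into nine rating bands (1100..1900),
# one training set per band, as in the skill-level setup described above.

def rating_bin(rating):
    """Map a player rating to one of nine rating bands, 1100 through 1900."""
    band = (rating // 100) * 100       # floor to the nearest hundred
    return min(max(band, 1100), 1900)  # clamp to the supported range

# Toy game records: (average player rating, game id).
games = [(1150, "game_a"), (1480, "game_b"), (2200, "game_c")]
buckets = {}
for rating, game in games:
    buckets.setdefault(rating_bin(rating), []).append(game)
print(buckets)  # {1100: ['game_a'], 1400: ['game_b'], 1900: ['game_c']}
```

Each bucket would then feed the training of one version of the model.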
Kleinberg said, “Our model didn’t train itself on the best move – it trained itself on what a human would do. But we had to be very careful – you have to make sure it doesn’t search the tree of possible moves too thoroughly because that would make it too good. It has to be laser-focused on predicting what a person would do next.”
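The idea in the quote – predicting the human move directly rather than searching deeply for the best one – can be sketched as picking the top move from a policy distribution with no tree search at all. The probability table below is made up for illustration.

```python
# Illustrative sketch: choose the move a human is most likely to play from a
# policy distribution, with no deep tree search (which would make it "too good").

def predict_human_move(policy):
    """Return the move with the highest predicted human-play probability."""
    return max(policy, key=policy.get)

# Toy policy output for one position: move -> estimated human probability.
policy = {"e2e4": 0.45, "d2d4": 0.30, "g1f3": 0.15, "c2c4": 0.10}
print(predict_human_move(policy))  # e2e4
```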
The research was supported in part by a Simons Investigator Award, a Vannevar Bush Faculty Fellowship, a Multidisciplinary University Research Initiative grant, a MacArthur Foundation grant, a Natural Sciences and Engineering Research Council of Canada grant, a Microsoft Research Award, and a Canada Foundation for Innovation grant.
- Reid McIlroy-Young et al. Aligning Superhuman AI with Human Behavior: Chess as a Model System. DOI: 10.1145/3394486.3403219