The Mystery of Go, the Ancient Game That Computers Still Can’t Win | Enterprise | WIRED

Remi Coulom (left) plays against Norimoto Yoda in Tokyo. Photo: Takashi Osato/WIRED

Crazy Stone and Nomitan are locked in a game of Go, the Eastern version of chess. On each screen, you can see a Go board — a grid of 19 lines by 19 lines — filling up with black and white playing pieces, each placed at the intersection of two lines. If Crazy Stone can win and advance to the finals, it will earn the right to play one of the best human Go players in Japan. No machine has ever beaten a top human Go player — at least not without a huge head-start. Even if it does advance to the man-machine match, Crazy Stone has no chance of changing this, but Coulom wants to see how far his creation has come.

[…]

The challenge is daunting. In 1994, machines took the checkers crown, when a program called Chinook beat the top human. Then, three years later, they topped the chess world, IBM’s Deep Blue supercomputer besting world champion Garry Kasparov. Now, computers match or surpass top humans in a wide variety of games: Othello, Scrabble, backgammon, poker, even Jeopardy. But not Go. It’s the one classic game where wetware still dominates hardware.

Invented over 2500 years ago in China, Go is a pastime beloved by emperors and generals, intellectuals and child prodigies. Like chess, it’s a deterministic perfect information game — a game where no information is hidden from either player, and there are no built-in elements of chance, such as dice. And like chess, it’s a two-person war game. Play begins with an empty board, where players alternate the placement of black and white stones, attempting to surround territory while avoiding capture by the enemy. That may seem simpler than chess, but it’s not. When Deep Blue was busy beating Kasparov, the best Go programs couldn’t even challenge a decent amateur. And despite huge computing advances in the years since — Kasparov would probably lose to your home computer — the automation of expert-level Go remains one of AI’s greatest unsolved riddles.

[…]

… games of Go are often so complex that only extremely high-level players can understand how they’re progressing.

[…]

‘THERE IS CHESS IN THE WESTERN WORLD, BUT GO IS INCOMPARABLY MORE SUBTLE AND INTELLECTUAL.’

This is not for lack of trying on the part of programmers, who have worked on Go alongside chess for the last fifty years, with substantially less success. The first chess programs were written in the early fifties, one by Turing himself. By the 1970s, they were quite good. But as late as 1962, despite the game’s popularity among programmers, only two people had succeeded at publishing Go programs, neither of which was implemented or tested against humans.

Finally, in 1968, computer game theory genius Albert Zobrist authored the first Go program capable of beating an absolute beginner. It was a promising first step, but notwithstanding enormous amounts of time, effort, brilliance, and quantum leaps in processing power, programs remained incapable of beating accomplished amateurs for the next four decades.

To understand this, think about Go in relation to chess. At the beginning of a chess game, White has twenty possible moves. After that, Black also has twenty possible moves. Once both sides have played, there are 400 possible board positions. Go, by contrast, begins with an empty board, where Black has 361 possible opening moves, one at every intersection of the 19 by 19 grid. White can follow with 360 moves. That makes for 129,960 possible board positions after just the first round of moves.
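
Those counts are simple products rather than anything exotic. A throwaway sketch in Python reproduces them; it ignores captures, symmetries, and position legality, so the numbers are raw products of move choices, not counts of legal positions:

```python
# Rough position counts after one full round (one move each side).
# These are the bare products quoted above, not enumerations of legal positions.

chess_first_round = 20 * 20    # 20 White openings x 20 Black replies
go_first_round = 361 * 360     # 361 Black openings x 360 White replies

print(chess_first_round)  # 400
print(go_first_round)     # 129960
```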

The rate at which possible positions increase is directly related to a game’s “branching factor,” or the average number of moves available on any given turn. Chess’s branching factor is 35. Go’s is 250. Games with high branching factors make classic search algorithms like minimax extremely costly. Minimax creates a search tree that evaluates possible moves by simulating all possible games that might follow, and then it chooses the move that minimizes the opponent’s best-case scenario. Improvements on the algorithm — such as alpha-beta search and null-move — can prune the chess game tree, identifying which moves deserve more attention and facilitating faster and deeper searches. But what works for chess — and checkers and Othello — does not work for Go.
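
To make the cost concrete, here is a minimal sketch of minimax with alpha-beta pruning in Python. The GameState interface it leans on (legal_moves, play, is_terminal, score) is a hypothetical stand-in for whatever board representation a real engine would use, not any particular program's API:

```python
# Minimal minimax with alpha-beta pruning over a hypothetical GameState
# interface. A real chess or Go engine adds move ordering, transposition
# tables, and a far better evaluation function.

def minimax(state, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the best score the player to move can guarantee within `depth` plies."""
    if depth == 0 or state.is_terminal():
        return state.score()              # static evaluation of the position

    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, minimax(state.play(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:             # the opponent will never allow this line
                break                     # prune the remaining siblings
        return best
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, minimax(state.play(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```

Even with pruning, the number of positions examined still grows roughly with the branching factor raised to the search depth, which is why tricks that tame a 35-way tree at useful chess depths are hopeless against a 250-way one.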

[…]

“A lot of people peak out at a certain level of amateur and never get any stronger,” David Fotland explains. Fotland, an early computer Go innovator, also worked as chief engineer of Hewlett Packard’s PA-RISC processor in the ’80s, and tested the system with his Go program. “There’s some kind of mental leap that has to happen to get you past that block, and the programs ran into the same issue. The issue is being able to look at the whole board, not just the local fights.”

[…]

Coulom had exchanged ideas with a fellow academic named Bruno Bouzy, who believed that the secret to computer Go might lie in a search algorithm known as Monte Carlo. Developed in the late 1940s to model nuclear explosions, Monte Carlo replaces an exhaustive search with a statistical sampling of fewer possibilities. The approach made sense for Go. Rather than having to search every branch of the game tree, Monte Carlo would play out a series of random games from each possible move, and then deduce the value of the move from an analysis of the results.
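
Stripped of the tree-search machinery Coulom would later build into Crazy Stone, the core idea can be sketched in a few lines of Python. The GameState interface below is again a hypothetical placeholder, and real programs use far smarter playout policies than uniformly random moves:

```python
import random

# Plain Monte Carlo move evaluation: estimate each candidate move by playing
# many random games to completion and averaging the results. GameState
# (legal_moves, play, is_terminal, winner) is a hypothetical interface,
# not Crazy Stone's actual representation.

def random_playout(state, player):
    """Play uniformly random moves to the end; return 1 if `player` wins, else 0."""
    while not state.is_terminal():
        state = state.play(random.choice(state.legal_moves()))
    return 1 if state.winner() == player else 0

def monte_carlo_move(state, player, playouts_per_move=100):
    """Pick the move whose random playouts win most often for `player`."""
    best_move, best_rate = None, -1.0
    for move in state.legal_moves():
        wins = sum(random_playout(state.play(move), player)
                   for _ in range(playouts_per_move))
        rate = wins / playouts_per_move
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```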

[…]

Black and white stones continue to fill the board, beautiful as always, forming what is technically known as a percolated fractal.

[…]

Coulom plays down the Electric Sage Battle. “The real competition is program against program,” he told me during one early phone interview. “When my opponent is a programmer, we are doing the same thing. We can talk to each other. But when I play against a professional and he explains the moves to me, it is too high level. I can’t understand, and he can’t understand what I am doing. The Densei-sen — it is good for publicity. I am not so interested in that.”

[…]

According to University of Sydney cognitive scientist and complex systems theorist Michael Harré, professional Go players behave in ways that are incredibly hard to predict. In a recent study, Harré analyzed Go players of various strengths, focusing on the predictability of their moves given a specific local configuration of stones. “The result was totally unexpected,” he says. “Moves became steadily more predictable until players reached near-professional level. But at that point, moves started getting less predictable, and we don’t know why. Our best guess is that information from the rest of the board started influencing decision-making in a unique way.”

[…]

…no programmers think of their creations as “intelligent.” “The game of Go is spectacularly challenging,” says Coulom, “but there is nothing to do with making a human intelligence.” In other words, Watson and Crazy Stone are not beings. They are solutions to specific problems. That’s why it’s inaccurate to say that IBM Watson will be used to fight cancer, unless playing Jeopardy helps reduce tumors. Developing Watson might have led to insights that help create an artificial diagnostician, but that diagnostician isn’t Watson, just as MCTS programs used in hospital planning are not Crazy Stone.

The public relations folks at IBM paint a different picture, and so does the press. Anthropomorphized algorithms make for a better story. Deep Blue and Watson can be pitted against humans in highly produced man-machine battles, and IBM becomes the gatekeeper of a new era in artificial intelligence. Caught between atheism and a crippling fear of death, Ray Kurzweil and other futurists feed this mischaracterization by trumpeting the impending technological apotheosis of humanity, their breathless idiocy echoing through popular media. “The Brain’s Last Stand,” read the cover of Newsweek after Kasparov’s defeat. But in truth, these machines are nowhere close to mimicking the brain, and their creators admit as much.

Many Go players see the game as the final bastion of human dominance over computers. This view, which tacitly accepts the existence of a battle of intellects between humans and machines, is deeply misguided. In fact, computers can’t “win” at anything, not until they can experience real joy in victory and sadness in defeat, a programming challenge that makes Go look like tic-tac-toe. Computer Go matches aren’t the brain’s last stand. Rather, they help show just how far machines have to go before achieving something akin to true human intelligence. Until that day comes, perhaps it’s best to view the Densei-sen as programmers do. “It is fun for me,” says Coulom, “but that’s all.”
