
Some computer chess and Go champions -- they learned on their own

lpetrich · Contributor · Joined Jul 27, 2000 · Eugene, OR
Board games have long been used as a testing ground for artificial intelligence. Their game worlds are simple, abstract, and stylized, but that simplicity disguises a large amount of complexity. Game complexity can be measured in several ways, and the numbers grow very large very quickly. Even the smallest game usually listed, tic-tac-toe, has several thousand possible positions.
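That tic-tac-toe figure is easy to check by brute force. Here is a short Python sketch (mine, not from the thread) that enumerates every position reachable by legal play:

```python
# Count reachable tic-tac-toe positions by exhaustive enumeration.
# Play stops when someone wins or the board is full.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def enumerate_positions():
    start = ' ' * 9           # empty board as a 9-character string
    seen = {start}
    frontier = [start]
    while frontier:
        board = frontier.pop()
        if winner(board) or ' ' not in board:
            continue          # terminal position: no further moves
        # X moves when both sides have placed equally many stones
        player = 'X' if board.count('X') == board.count('O') else 'O'
        for i in range(9):
            if board[i] == ' ':
                nxt = board[:i] + player + board[i+1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return len(seen)

print(enumerate_positions())  # 5478 distinct legal positions
```

That comes out to 5,478 distinct legal positions (not reducing by symmetry), which is the "several thousand" in the complexity tables.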

Go is especially difficult. Players alternate placing stones on a 19×19 board, trying to surround each other's stones. The game has long been very difficult for artificial-intelligence software to play well. But DeepMind, a subsidiary of Alphabet (Google's parent company), has developed software called AlphaGo that has done remarkably well at it.

Since then, the DeepMind people have devised AlphaZero, a version that can also play other games, like chess and shogi, a Japanese chesslike game.
The Ars Technica article has an interesting illustration of how many positions each player searches per move. The more advanced conventional chess engines, as they are called, may search tens of millions of positions, while AlphaZero searches tens of thousands, and human grandmasters hundreds.
Poker is one contender for future AIs to beat. It is essentially a game of partial information, a challenge for any existing AI. As Campbell notes, there have been some programs capable of mastering heads-up no-limit Texas Hold 'Em, where only two players are left in a tournament. But most poker games involve eight to ten players per table. An even bigger challenge would be multi-player video games, such as Starcraft II or Dota 2. "They are partially observable and have very large state spaces and action sets, creating problems for AlphaZero-like reinforcement learning approaches," he writes.

AlphaZero Crushes Stockfish In New 1,000-Game Match - Chess.com presents some of the games, with analysis by several chess masters. In their estimation, AlphaZero used some rather interesting strategies.

CCC: Computer Chess Championship - Chess.com has a chess-engine tournament with several engines playing, including Stockfish though not AlphaZero. It does include LCZero, however, an open-source effort to imitate AlphaZero's success.
 
Usain Bolt can't beat a Ferrari in a race but we wouldn't say a Ferrari learned how to beat him.
 
Anna Rudolf Visits The AlphaZero Headquarters - Chess.com. She challenged some of the people there to describe AlphaZero in one sentence.
  • David Smerdon: AlphaZero is the closest we've come to chess purity.
  • Fabiano Caruana: I think it's a breakthrough in artificial intelligence in chess.
  • Demis Hassabis: An incredible optimization process that has discovered its own ideas about chess.
  • Anastasia Sorokina: It's like chess from a different planet.
  • Jon Speelman: The way AlphaZero plays feels like it's Stockfish on acid.
  • Maxime Vachier-Lagrave: AlphaZero strikes me with its long-term positional sacrifices.
  • Ali Mortazavi: A better-looking, younger, faster version of me.
  • Matthew Sadler and Natasha Regan: AlphaZero is creative, attacking, and purposeful.
  • Daniel King: AlphaZero goes its own way.
  • Maria Emelianova: It's a freaking beast.
  • Roger Emerson: I will be really impressed by AlphaZero when I see it dealing well with incomplete information systems.
  • Hikaru Nakamura: The best way to describe AlphaZero would be a very creative, human-like program that would simply be a tremendous chess player.
  • Malcolm Pein: Where chess beats human progress.
  • Dominic Lawson: The nearest thing I can think of is if I was playing against a combination, a concentration team of Karpov and Tal, in which they also had computer assistants.
  • Lennart Ootes: AlphaZero is a machine that has been playing too many board games recently, and it's finally time to tackle the real problems in life.
Then a short interview with Garry Kasparov. He wrote an essay in which he called the software the "Drosophila of reasoning", and he will be working on a foreword for a book that the AlphaZero team is writing about their software.

Drosophila melanogaster is a species of fruit fly that has been a favorite laboratory animal for over a century.

Imperfect information is what poker players have -- they cannot see the other players' cards. By comparison, chess is a perfect-information game: both players see the whole board, and there is no dice throwing or other random element.
 
What would you say is the likelihood that games where AlphaZero plays against itself would consistently result in a draw? Or a statistical near-certainty that white always wins?

Truly intelligent learning would, or should, allow one AlphaZero machine to become a better player than another.

But all we have here is a machine that has been told to play and play and keep playing (itself) over and over again until every possible number and combination of moves (game outcomes) has been statistically crash-tested and, of course, memorised. It's just a high-speed data-generating, data-gathering exercise, wash/rinse/repeat number crunching, and then the chess machine has its road map for all future games. Voilà.

There are no more possible mistakes to make because AlphaZero is omniscient. It can't even be bluffed by a "long-term positional sacrifice" combo because...

...Usain Bolt can't run faster than a Ferrari
 
A large part of AI is pattern recognition, also a big part of human intelligence.

Having a chess engine that builds a database of an opponent's reactions and then builds statistical models of how the opponent will move is not, at least in principle, a big leap.
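The core of that idea fits in a few lines of Python. This is a hypothetical illustration of the principle only (the position and move strings are placeholders, not any real engine's API):

```python
from collections import Counter, defaultdict

# Tally how an opponent has replied to each position seen before,
# then predict their most frequent reply to a given position.

class OpponentModel:
    def __init__(self):
        # position -> Counter of observed replies
        self.replies = defaultdict(Counter)

    def observe(self, position, reply):
        """Record one observed reply to a position."""
        self.replies[position][reply] += 1

    def predict(self, position):
        """Return the most frequent observed reply, or None if unseen."""
        counts = self.replies.get(position)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

model = OpponentModel()
model.observe("1.e4", "c5")
model.observe("1.e4", "c5")
model.observe("1.e4", "e5")
print(model.predict("1.e4"))  # "c5" -- the opponent's most frequent reply
```

A real engine would generalize across similar positions rather than matching them exactly, but the frequency-counting principle is the same.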

I used Chess Master for years. MS Chess Titans had susceptibilities; you could figure out how to beat it through trial and error.

It would be interesting to put Chess Master on two computers and have them play each other. I am sure the authors would have done that. It depends on whether the system always makes the same moves given the same initial conditions or whether there is a random element in move selection.
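That determinism question can be made concrete with a small sketch (hypothetical moves and scores; not how Chess Master actually selects moves):

```python
import math
import random

# Two move-selection policies over the same scored candidate moves.

def pick_deterministic(scored_moves):
    # Always take the highest-scoring move: two identical engines
    # will repeat the exact same game every time.
    return max(scored_moves, key=lambda mv: mv[1])[0]

def pick_stochastic(scored_moves, temperature=1.0):
    # Sample moves in proportion to exp(score / T): identical engines
    # can diverge, so games between them vary from run to run.
    weights = [math.exp(score / temperature) for _, score in scored_moves]
    return random.choices([move for move, _ in scored_moves], weights=weights)[0]

moves = [("Nf3", 0.30), ("e4", 0.50), ("d4", 0.45)]
print(pick_deterministic(moves))  # "e4" every time
print(pick_stochastic(moves))     # varies between runs
```

With purely deterministic selection, a self-play match would indeed repeat one game; engines that want variety inject randomness somewhere, often in the opening or via sampling like the above.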

Humans make blunders, even chess masters. When Kasparov lost to IBM's computer, it was reported that he nearly had a mental breakdown.


https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov
 