
Computers beating us at chess

lpetrich

We Taught Computers To Play Chess — And Then They Left Us Behind | FiveThirtyEight
My progression mirrors how we taught our computers to play chess. The earliest programs, gawkish code running on ungainly mainframes, were woodpushers, capable of playing chess technically but not well. Their successors, running on sleeker supercomputers or speedier modern desktops, had mastered theory — openings and endgames, as well as the sophisticated tactics of the middle game — and now played better than any human. And their successors, the latest evolution, ungodly chess beings sprung from the secretive labs of trillion-dollar companies, play a hyper-advanced alien chess, exotic and beautiful, something no human is capable of fully understanding, let alone replicating, but so full of awesome style.
The article then shows the performance of the best chess software vs. the best human players, using the Elo rating system:

The best human players slowly increased from 2700 in 1980 to 2800 in 2019, but computer software increased much faster, from 2200 in 1980 to 2800 in 1995 to 2900 in 2010 to 3400 in 2019.
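To get a feel for what those rating gaps mean, here is a minimal sketch (mine, not from the FiveThirtyEight article) using the standard Elo expected-score formula. A 600-point gap, roughly the 2019 human-versus-engine difference described above, leaves the weaker side expecting only about 3% of the points per game.

```python
# Standard Elo expected-score formula; the ratings below are just the rough
# figures read off the chart, not exact values.

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

if __name__ == "__main__":
    # Top human (~2800) vs. top engine (~3400).
    print(f"Human's expected score per game: {expected_score(2800, 3400):.3f}")
    # -> about 0.031; if those points all came as draws, that is roughly one
    #    draw every 16 games and no wins at all.
```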

TCEC - Live Computer Chess Broadcast -- different chess engines, as they are called, pitted against each other. Engines with names like Stockfish, Komodo, Gull, Mantissa, Berserk, Arasan, Tucano, BlackMarlin, Seer, Zahak, Glaurung, ...
 
[2009.04374] Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess
It is non-trivial to design engaging and balanced sets of game rules. Modern chess has evolved over centuries, but without a similar recourse to history, the consequences of rule changes to game dynamics are difficult to predict. AlphaZero provides an alternative in silico means of game balance assessment. It is a system that can learn near-optimal strategies for any rule set from scratch, without any human supervision, by continually learning from its own experience. In this study we use AlphaZero to creatively explore and design new chess variants. There is growing interest in chess variants like Fischer Random Chess, because of classical chess's voluminous opening theory, the high percentage of draws in professional play, and the non-negligible number of games that end while both players are still in their home preparation. We compare nine other variants that involve atomic changes to the rules of chess. The changes allow for novel strategic and tactical patterns to emerge, while keeping the games close to the original. By learning near-optimal strategies for each variant with AlphaZero, we determine what games between strong human players might look like if these variants were adopted. Qualitatively, several variants are very dynamic. An analytic comparison shows that pieces are valued differently between variants, and that some variants are more decisive than classical chess. Our findings demonstrate the rich possibilities that lie beyond the rules of modern chess.

Variant | Primary rule change | Secondary rule change
No-castling | Castling is disallowed throughout the game |
No-castling (10) | Castling is disallowed for the first 10 moves (20 plies) |
Pawn one square | Pawns can only move by one square |
Stalemate=win | Forcing stalemate is a win rather than a draw |
Torpedo | Pawns can move by 1 or 2 squares anywhere on the board. En passant can consequently happen anywhere on the board. |
Semi-torpedo | Pawns can move by two squares both from the 2nd and the 3rd rank |
Pawn-back | Pawns can move backwards by one square, but only back to the 2nd/7th rank for White/Black | Pawn moves do not count towards the 50 move rule
Pawn-sideways | Pawns can also move laterally by one square. Captures are unchanged, diagonally upwards | Sideways pawn moves do not count towards the 50 move rule
Self-capture | It is possible to capture one's own pieces |

No-castling chess is a potentially exciting variant, given that king safety is often compromised for both players, allowing for simultaneous attacking and counter-attacking and the equality, when reached, tends to be dynamic in nature rather than “dry”. The multitude of approaches to evacuate the king, and their timing, adds complexity to the opening play. No-castling (10), where castling is not permitted for the first 10 moves (20 plies) is a partial restriction, rather than an absolute one – which does not change the game to the same extent. Due to castling being such a powerful option, the lines preferred by AlphaZero all tend to involve castling, only delayed – resulting in a preference for slower, closed positions, and a less attractive style of play. Such partial castling restrictions can be considered if the desire is to sidestep opening theory and preparation, but this may not be of interest for the wider chess audience.

Pawn one square chess variant may appeal to players who enjoy slower, strategic play – as well as a training tool for understanding pawn structures, due to the transpositional possibilities when setting up the pawns. The reduced pawn mobility makes it harder to launch fast attacks, making the game overall less decisive.

Stalemate=win chess has little effect on the opening and middlegame play, mostly affecting the evaluation of certain endgames. As such, it does not increase decisiveness of the game by much, as it seems to almost always be possible to defend without relying on stalemate as a drawing resource. Therefore, this chess variant is not likely to be useful for sidestepping known theory or for making the game substantially more decisive at the high level. The overall effect of the change seems to be minor.

Torpedo and Semi-torpedo chess both make the game more dynamic and more decisive, and Torpedo chess in particular leads to new motifs and changes in all stages of the game. Creating passed pawns becomes very important, as they are hard to stop. The attacking possibilities make Torpedo chess quite appealing, and it is likely to be of interest for players that enjoy tactical play.

Pawn-back chess makes it possible to regain control of the weakened squares in the position and remove some square weaknesses. It also introduces additional possibilities for opening up diagonals and making squares available for the pieces. Counter-intuitively, even though moving the pieces backwards is usually a defensive manoeuvre, this can make more aggressive options possible, given that pawns can now be pushed further earlier on, as there is always an option of moving them back to cover the weakened squares. AlphaZero has a strong preference for playing the French defence with Black, which is particularly interesting.

Pawn-sideways chess is incredibly complex, resulting in patterns that are at times quite “alien” when one is used to classical chess. The pawn structures become very fluid and it is impossible to create permanent pawn weaknesses. Given how important this concept is in classical chess, this chess variant requires us to rethink how we approach any given position, making it very concrete and relying on deep calculation. Restructuring the pawn formation takes time, and players need to use that time for creating other types of advantages. Many of AlphaZero's games in this variant have been quite tactical, some involving novel tactics that are not possible under classical rules.

Self-capture chess is quite entertaining, as it introduces additional options for sacrificing material – and material sacrifices have a certain aesthetic appeal. Self-capture moves can feature in all stages of the game. Not every game involves self-captures, as giving away material is not always required, but they do feature in a substantial percentage of the games, and in some games they occur multiple times. Self-capture moves can be used to open files and squares for the pieces in the attack; opening up a blockade by sacrificing a pawn in the pawn chain; or in defence, while escaping the mating net.
 
There are several other variants that could be evaluated by this means.

lichess.org • Free Online Chess has several:

Chess960, or Fischer Random Chess, consists of scrambling the first-rank pieces at the beginning of the game, with the constraints that the two bishops are on opposite colors and that the king is between the two rooks. Both sides get the same scrambling, which makes 960 possible starting arrangements. It was invented by chess champion Bobby Fischer as an alternative to working out chess openings in gory detail.

Crazyhouse enables one to recruit for one's side pieces that one has captured. One drops in a piece as a move. A promoted pawn reverts to being a pawn when it is captured.

King of the Hill has an additional victory condition: getting one's king to one of the four center squares.

Three-check has an additional victory condition: checking one's opponent's king three times.

Antichess has a reverse sort of victory condition: win by losing all one's pieces or becoming stalemated. Captures are mandatory.

Atomic chess has a super capture: every piece in a one-square neighborhood gets captured, except for pawns.

Horde chess has one side being all pawns.

Racing Kings is where all the pieces but pawns are initially on one end of the board and the kings try to get to the other end.

 List of chess variants has a *lot* more.
 
 Solving chess - very difficult, since there are some 10^44 possible chess positions and on the order of 10^120 possible games. But chess endgames are much easier to solve, and there are solutions for small numbers of pieces:  Endgame tablebase - databases of endgame positions and their outcomes.

 Solved game
  • Ultra-weak - Prove whether the first player will win, lose or draw from the initial position, given perfect play on both sides. This can be a non-constructive proof (possibly involving a strategy-stealing argument) that need not actually determine any moves of the perfect play.
  • Weak - Provide an algorithm that secures a win for one player, or a draw for either, against any possible moves by the opponent, from the beginning of the game. That is, produce at least one complete ideal game (all moves start to end) with proof that each move is optimal for the player making it.
  • Strong - Provide an algorithm that can produce perfect moves from any position, even if mistakes have already been made on one or both sides.
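To make the "strong" case concrete on a toy scale: a brute-force negamax search over tic-tac-toe returns the game value and a best move from any legal position, which is exactly what a strong solution provides. This is only an illustrative Python sketch, not how checkers or chess endgames were actually solved.

```python
# Illustrative brute-force "strong solution" of tic-tac-toe: from any legal
# position it returns the game value for the side to move and a best move.
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Return (value, best_move) for the side to move: +1 win, 0 draw, -1 loss."""
    if winner(board) is not None:
        return (-1, None)            # the previous player just completed a line
    if "." not in board:
        return (0, None)             # board full: draw
    best = (-2, None)
    for i, square in enumerate(board):
        if square == ".":
            child = board[:i] + player + board[i + 1:]
            value = -solve(child, "O" if player == "X" else "X")[0]
            if value > best[0]:
                best = (value, i)
    return best

print(solve("." * 9, "X"))           # -> (0, 0): perfect play from the start is a draw
```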

Checkers - English draughts - is the largest game solved so far, though the solution is a weak one. Several years of number crunching gave a large database of solution data. There are 5*10^20 possible checkers positions, and the solution required some 10^14 calculations, on a number of desktop computers that reached 220 at one point but was down to 50 later in the solution.

 Nim features several piles of objects, and players alternate in taking any number of objects from any one of the piles. Whoever takes the last objects either wins or loses, depending on the version. It has a strong solution, one that works for every possible set of heap sizes.
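That strong solution is the classic nim-sum rule: XOR the heap sizes together. In the normal-play version (taking the last object wins), the player to move is lost exactly when the XOR is zero; otherwise there is always a move that makes it zero. A minimal sketch (the misère version needs a small adjustment near the very end of the game):

```python
# Normal-play Nim (taking the last object wins): the XOR of the heap sizes
# ("nim-sum") decides the game from any position.
from functools import reduce
from operator import xor

def winning_move(heaps):
    """Return (heap_index, new_size) for a winning move, or None if the
    position is already lost against best play."""
    nim_sum = reduce(xor, heaps, 0)
    if nim_sum == 0:
        return None                   # every move hands the opponent a winning position
    for i, h in enumerate(heaps):
        target = h ^ nim_sum
        if target < h:                # shrink this heap to make the nim-sum zero
            return (i, target)

print(winning_move([3, 4, 5]))        # -> (0, 1): take two objects from the first heap
```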
 
 Game complexity
  • State space: number of possible positions
  • Game-tree size: number of possible games

For Tic-tac-toe the board size is 3*3 = 9 and the number of positions is bounded from above by 3^9 = 19,683. But only some of these positions are legitimate within the game's rules: 5,478, or after rotations and reflections, 765. Turning to the game tree, the total number of games is bounded from above by 9! = 362,880. Since games may finish before all the squares are filled, the number of them is 255,168, or after rotations and reflections, 26,830.
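Those tic-tac-toe counts are small enough to check by brute force. Here is a short enumeration sketch (my own check, not from the source) that counts every distinct move sequence ending in a win or a full board:

```python
# Count every distinct tic-tac-toe game, i.e. every move sequence that stops
# as soon as someone has three in a row or the board is full.

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def won(board, mark):
    return any(all(board[i] == mark for i in line) for line in LINES)

def count_games(board, player):
    total = 0
    for i, square in enumerate(board):
        if square != ".":
            continue
        nxt = board[:i] + player + board[i + 1:]
        if won(nxt, player) or "." not in nxt:
            total += 1                                    # game over: one complete game
        else:
            total += count_games(nxt, "O" if player == "X" else "X")
    return total

print(count_games("." * 9, "X"))                          # -> 255168
```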

 Go (game) seems much harder than  Chess
Game | Board size | State space | Game tree | Game length | Branching factor
Tic-tac-toe | 3*3 = 9 | 10^3 | 10^5 | 9 | 4
Checkers | 8*8/2 = 32 | 10^20 | 10^40 | 70 | 2.8
Chess | 8*8 = 64 | 10^44 | 10^123 | 70 | 35
Backgammon | 2*2*7 = 28 | 10^20 | 10^144 | 55 | 250
Go | 19*19 = 361 | 10^170 | 10^360 | 150 | 250
 
I think it was Kasparov who had a nervous breakdown when he lost to IBM's Deep Blue.


Deep Blue versus Garry Kasparov was a pair of six-game chess matches between the world chess champion Garry Kasparov and an IBM supercomputer called Deep Blue. The first match was played in Philadelphia in 1996 and won by Kasparov by 4–2. A rematch was played in New York City in 1997 and won by Deep Blue by 3½–2½. The second match was the first defeat of a reigning world chess champion by a computer under tournament conditions, and was the subject of a documentary film, The Man vs. The Machine.[1]
 
AlphaGo | DeepMind
Go is known as the most challenging classical game for artificial intelligence because of its complexity.

Despite decades of work, the strongest Go computer programs could only play at the level of human amateurs. Standard AI methods, which test all possible moves and positions using a search tree, can’t handle the sheer number of possible Go moves or evaluate the strength of each possible board position.

...
Two players, using either white or black stones, take turns placing their stones on a board. The goal is to surround and capture their opponent's stones or strategically create spaces of territory. Once all possible moves have been played, both the stones on the board and the empty points are tallied. The highest number wins.
From combinatorics, this seems like a much more difficult game than chess, despite each side having only one kind of piece. Having only one kind of piece is also true of tic-tac-toe and checkers, but not of chess.
We created AlphaGo, a computer program that combines advanced search tree with deep neural networks. These neural networks take a description of the Go board as an input and process it through a number of different network layers containing millions of neuron-like connections.

One neural network, the “policy network”, selects the next move to play. The other neural network, the “value network”, predicts the winner of the game. We introduced AlphaGo to numerous amateur games to help it develop an understanding of reasonable human play. Then we had it play against different versions of itself thousands of times, each time learning from its mistakes.

Over time, AlphaGo improved and became increasingly stronger and better at learning and decision-making. This process is known as reinforcement learning. AlphaGo went on to defeat Go world champions in different global arenas and arguably became the greatest Go player of all time.
AlphaGo learned in part from records of human games, but DeepMind decided to go one better with AlphaGo Zero, which started from scratch and learned purely by playing against itself.
This powerful technique is no longer constrained by the limits of human knowledge. Instead, the computer program accumulated thousands of years of human knowledge during a period of just a few days and learned to play Go from the strongest player in the world, AlphaGo.

AlphaGo Zero quickly surpassed the performance of all previous versions and also discovered new knowledge, developing unconventional strategies and creative new moves, including those which beat the World Go Champions Lee Sedol and Ke Jie. These creative moments give us confidence that AI can be used as a positive multiplier for human ingenuity.
DeepMind then generalized this Go engine to other board games, like chess and shogi, as AlphaZero.
In its chess games, for example, players saw it had developed a highly dynamic and “unconventional” style of play that differed from any previous chess playing engine. Many of its “game changing” ideas have since been taken up at the highest level of play.

...
This ability to learn each game afresh, unconstrained by the norms of human play, results in a distinctive, unorthodox, yet creative and dynamic playing style. Chess Grandmaster Matthew Sadler and Women’s International Master Natasha Regan, who have analysed thousands of AlphaZero’s chess games for their forthcoming book Game Changer (New in Chess, January 2019), say its style is unlike any traditional chess engine. “It’s like discovering the secret notebooks of some great player from the past,” says Matthew.

Traditional chess engines – including the world computer chess champion Stockfish and IBM’s ground-breaking Deep Blue – rely on thousands of rules and heuristics handcrafted by strong human players that try to account for every eventuality in a game. Shogi programs are also game specific, using similar search engines and algorithms to chess programs.
 
Traditional chess engines like Stockfish may search tens of millions of positions per second, while AlphaZero searches only tens of thousands, and human players may examine only a few hundred in total. The difference is pattern recognition, which AlphaZero and human players rely on far more heavily than traditional chess engines do.
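For a rough idea of how that pattern recognition is wired into the search: AlphaZero's published approach uses a policy network to suggest moves and a value network to evaluate positions, combined by the PUCT selection rule inside a Monte Carlo tree search. The sketch below shows only the selection step; the class layout and the exploration constant are my own illustrative choices, not DeepMind's code.

```python
import math

# Minimal sketch of AlphaZero-style node selection (PUCT). The policy network
# supplies a prior P(s, a) for each move, and the search balances exploiting
# moves with high average value against exploring moves the policy likes but
# that have rarely been visited.

C_PUCT = 1.5                      # exploration constant (illustrative value)

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy network
        self.visit_count = 0      # N(s, a)
        self.value_sum = 0.0      # sum of values backed up through this node
        self.children = {}        # move -> Node

    def q(self):                  # mean action value Q(s, a)
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node):
    """Pick the (move, child) maximising Q(s,a) + c * P(s,a) * sqrt(N(s)) / (1 + N(s,a))."""
    total_visits = sum(child.visit_count for child in node.children.values())
    def puct(child):
        u = C_PUCT * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return child.q() + u
    return max(node.children.items(), key=lambda kv: puct(kv[1]))
```

A leaf reached this way is then evaluated once by the value network, rather than by the random playouts older Go programs used, and that evaluation is backed up along the selected path.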
However, it was the style in which AlphaZero plays these games that players may find most fascinating. In Chess, for example, AlphaZero independently discovered and played common human motifs during its self-play training such as openings, king safety and pawn structure. But, being self-taught and therefore unconstrained by conventional wisdom about the game, it also developed its own intuitions and strategies adding a new and expansive set of exciting and novel ideas that augment centuries of thinking about chess strategy.

The first thing that players will notice is AlphaZero's style, says Matthew Sadler – “the way its pieces swarm around the opponent’s king with purpose and power”. Underpinning that, he says, is AlphaZero’s highly dynamic game play that maximises the activity and mobility of its own pieces while minimising the activity and mobility of its opponent’s pieces. Counterintuitively, AlphaZero also seems to place less value on “material”, an idea that underpins the modern game where each piece has a value and if one player has a greater value of pieces on the board than the other, then they have a material advantage. Instead, AlphaZero is willing to sacrifice material early in a game for gains that will only be recouped in the long-term.

“Impressively, it manages to impose its style of play across a very wide range of positions and openings,” says Matthew, who also observes that it plays in a very deliberate style from its first move with a “very human sense of consistent purpose”.

“Traditional engines are exceptionally strong and make few obvious mistakes, but can drift when faced with positions with no concrete and calculable solution,” he says. “It's precisely in such positions where ‘feeling’, ‘insight’ or ‘intuition’ is required that AlphaZero comes into its own."
So it plays differently from both human players and traditional chess engines.

Sacrificing material for positional advantage is a long-recognized chess technique, but it is usually considered relatively advanced. It is remarkable that AlphaZero is so good at it.
 
Tic-tac-toe, nim, checkers, chess, and Go are all deterministic, complete-information games; complete information means that the entire game state is visible to all of the players.

But many games violate one or both of those conditions.

 Backgammon violates determinism, because it involves rolling dice to determine which moves each player can make. But like most other board games, it has information completeness.
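The textbook way to handle that chance element in game-tree search is expectiminimax: chance nodes average over the possible dice rolls, weighted by their probabilities, while the players' nodes still maximise or minimise. The sketch below is a generic illustration only; it is not gnubg's or TD-Gammon's actual code, and the `game` object and its methods are hypothetical.

```python
# Generic expectiminimax sketch: minimax with chance nodes that average over
# dice rolls. The `game` object and its methods (dice_outcomes, legal_moves,
# apply, is_terminal, evaluate) are hypothetical placeholders.

def expectiminimax(game, state, depth, maximizing):
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)

    total = 0.0
    for roll, prob in game.dice_outcomes():       # chance node: every possible roll
        values = [
            expectiminimax(game, game.apply(state, move), depth - 1, not maximizing)
            for move in game.legal_moves(state, roll, maximizing)
        ]
        # Player node within each roll: pick the best move for the side to act.
        # (Assumes at least one legal move; a real program would handle forced passes.)
        best = max(values) if maximizing else min(values)
        total += prob * best
    return total                                  # expected value over the dice
```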

Backgammon Computer Program Beats World Champion, by Hans J. Berliner - "On July 15, 1979, a backgammon computer program beat the World Backgammon Champion in a match to 7 points"

 TD-Gammon - "TD-Gammon achieved a level of play just slightly below that of the top human backgammon players of the time. It explored strategies that humans had not pursued and led to advances in the theory of correct backgammon play."
TD-Gammon's exclusive training through self-play (rather than tutelage) enabled it to explore strategies that humans previously hadn't considered or had ruled out erroneously. Its success with unorthodox strategies had a significant impact on the backgammon community.[1]

For example, on the opening play, the conventional wisdom was that given a roll of 2-1, 4-1, or 5-1, White should move a single checker from point 6 to point 5. Known as "slotting", this technique trades the risk of a hit for the opportunity to develop an aggressive position. TD-Gammon found that the more conservative play of 24-23 was superior. Tournament players began experimenting with TD-Gammon's move, and found success. Within a few years, slotting had disappeared from tournament play. (It is now reappearing for 2-1, though.[3])

Backgammon expert Kit Woolsey found that TD-Gammon's positional judgement, especially its weighing of risk against safety, was superior to his own or any human's.[1]

TD-Gammon's excellent positional play was undercut by occasional poor endgame play. The endgame requires a more analytical approach, sometimes with extensive lookahead. TD-Gammon's limitation to two-ply lookahead put a ceiling on what it could achieve in this part of the game. TD-Gammon's strengths and weaknesses were the opposite of symbolic artificial intelligence programs and most computer software in general: it was good at matters that require an intuitive "feel" but bad at systematic analysis.
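The "TD" in TD-Gammon is temporal-difference learning: after each move, the value estimate of the previous position is nudged toward the estimate of the position that follows it, and the final position is nudged toward the actual result. TD-Gammon did this with a neural network and TD(lambda); the table-based TD(0) sketch below only shows the shape of the update rule, and all the names in it are mine.

```python
# Bare-bones TD(0) value update over one finished self-play game. TD-Gammon
# used a neural network and TD(lambda); a lookup table keeps the idea visible.
from collections import defaultdict

ALPHA = 0.1                        # learning rate
values = defaultdict(float)        # position -> estimated probability of winning

def td_update(game_states, final_reward):
    """Apply TD(0) updates along the sequence of positions from one game."""
    for s, s_next in zip(game_states, game_states[1:]):
        values[s] += ALPHA * (values[s_next] - values[s])
    # Anchor the last position to the actual outcome (1 = win, 0 = loss).
    last = game_states[-1]
    values[last] += ALPHA * (final_reward - values[last])
```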
 
An award-winning documentary about the 2016 Go match between DeepMind's AlphaGo and Lee Sedol (the best human on the planet) may be a fun watch even if you're not a fan of either Go or computer programming. (The match was televised and watched live by 80 million people!) Recall that it wasn't long ago that researchers thought of Go as a game too complex for a computer to play well.

Lee Sedol went into the five-game match confident that he would win 5-0. By the time he was down 0-3, one senses that the humans were in mourning: even AlphaGo's team spoke of "melancholy" and some were hoping Lee would win a game and save face. Indeed, in Game 4 Lee found a breathtaking wedge tesuji and won. Final match score: 1-4. Back in Game 2, AlphaGo had played a shoulder hit on the fifth line, a peculiar move that some expert human commentators thought was a mistake. AlphaGo's own evaluation put the chance at only 1 in 10,000 that a human professional would have chosen that move!

A few years later, Lee Sedol retired from Go. He was disheartened to know he had no chance of ever being best-in-the-world again, despite being the best human.


You can download a simple app, gnubg, which plays Backgammon better than any human. I think backgammon programs of high caliber have been around for decades. This may seem remarkable: Backgammon has much complexity, especially in view of strategies like "back game."

It was 54 years ago that I wrote a program to solve chess mates-in-two as a Freshman project. And I have continued involvement (and even publications) in game solving over the decades.
 
Crazyhouse enables one to recruit for one's side pieces that one has captured. One drops in a piece as a move. A promoted pawn reverts to being a pawn when it is captured.

We used to play a version we called double fives--two boards, five-minute clocks on each board, and you could recruit pieces your partner captured. IIRC in their promoted form, but that was almost 40 years ago, so I'm not sure. As usual with speed chess, victory was by king capture, not checkmate. Stalemate was impossible because you could recruit. Not having anything to recruit meant you had to wait for your partner to capture something. (In practice this never happened, because boards never became that depleted of pieces.)
 
A robot can easily lift a car; humans cannot. We don't compete with robots although they have negatively affected human employment.

Thinking you are competing in any meaningful sense with a program that can look up all major games played in recorded history and look 100 moves ahead in seconds for possible counter-moves is ridiculous.

Competition between humans involves not just moves; it involves confidence, psychology, emotional self-control, and the ability to concentrate under stress.
 
exactly
 
Computers and robots are made by humans and are under our control. We can operate them as we want; they are nothing more than obedient machines.
 
We don't compete with robots although they have negatively affected human employment.
To suggest that having a machine to do your work for you is a negative is insane.

The assumption that employment is necessary and desirable for all is ridiculous, outdated, and the foundation of our economy - which is therefore in need of drastic change to accommodate the reality that humans are simply not required for most labour.

We can choose to use machines to improve life for us all, or just for the owners of the machines. So far, we are choosing very poorly indeed.
 
Any other way would be utopian, and therefore morally wrong. We need someone to suffer. ;)
 
Computers and robots are made by humans and are under our control. We can operate them as we want; they are nothing more than obedient machines.
Clearly you don't use Civil3D.
 
Or Windows.
 