
AI that learned chess by reading about it

lpetrich

Instead of practicing, this AI mastered chess by reading about it - MIT Technology Review, noting this paper:
[1907.08321] SentiMATE: Learning to play Chess through Natural Language Processing

Chess has a large literature, and part of that literature is commentaries on games. The researchers used 2700 games with commentaries and trained a natural-language-processing (NLP) model on them. A side that makes good moves tends to win, and a side that makes bad moves tends to lose. That offers a way of recognizing "good" and "bad" commentary, and one can then use it to get assessments of individual moves. Those assessments were then used to train a chess engine, SentiMATE. It is not as strong as established engines, but it plays fairly well.
The researchers say the learning techniques used by SentiMATE could have many other practical applications beyond chess. For instance, they might help machines analyze sports, predict financial activity, and make better recommendations. “There is an abundance of books, blogs and papers all waiting to be learnt from,” the team points out.
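Here is a minimal sketch of the general idea as described above: score each commented move by the sentiment of its commentary, and keep the clearly good or bad ones as labelled training data for a move evaluator. The keyword lists, data format, and function names are my own illustrative assumptions; the actual SentiMATE pipeline uses a trained NLP sentiment model, not keyword matching.

```python
# Toy sketch: turn game commentary into "good move" / "bad move" labels.
# NOT the authors' code - a crude stand-in for a trained sentiment model.

POSITIVE = {"brilliant", "strong", "excellent", "decisive", "winning"}
NEGATIVE = {"blunder", "mistake", "weak", "dubious", "losing"}

def commentary_sentiment(comment: str) -> int:
    """Return +1 for positive commentary, -1 for negative, 0 for neutral."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

def label_moves(annotated_game):
    """annotated_game: iterable of (fen_before, move_san, comment) tuples.
    Keeps only moves whose commentary is clearly positive or negative."""
    dataset = []
    for fen, move, comment in annotated_game:
        label = commentary_sentiment(comment)
        if label != 0:
            dataset.append((fen, move, label))
    return dataset

if __name__ == "__main__":
    game = [
        ("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
         "e4", "A strong, classical opening move."),
        ("rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq - 0 1",
         "f6", "A dubious reply that weakens the king."),
    ]
    print(label_moves(game))
```

The resulting (position, move, label) pairs would then be the training set for the evaluation model that actually plays.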
 
Sounds overstated. I'd want details of the form of the code and of what 'read' meant - details of the experiment.
 
You don't have to. All AI, today, works essentially the same.
A large number of variables store the results of random choices and rank them by "goodness" of the result - which is defined by the training method.
AI is simply "random trial and measured error" at a massive scale.
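A toy illustration of that "random trial and measured error" characterization: randomly perturb a parameter vector and keep whichever version scores better. Treat it only as a cartoon of the framing above; real training methods such as gradient descent are far more directed, and the fitness function here is made up.

```python
# Cartoon of "random trial and measured error": random hill climbing
# toward a made-up target. Not how modern networks are actually trained.
import random

def fitness(params):
    # Hypothetical "goodness" measure: closeness to a target vector.
    target = [0.5, -1.0, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

best = [0.0, 0.0, 0.0]
for step in range(5000):
    trial = [p + random.gauss(0, 0.1) for p in best]
    if fitness(trial) > fitness(best):   # keep the change only if it scores better
        best = trial

print([round(p, 2) for p in best])       # ends up near the target
```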
 
Sure, but one will want to do it as efficiently as possible, because a model with too many parameters won't train very well. Understanding the problem can reduce the number of parameters enormously.
 

Yes, that is true. Designing the right number of inputs, layers, and outputs is key, and so is using an appropriate training protocol; those architectural choices are what the designer controls (the weights themselves come out of training). Like any solution, the key to success is always understanding the problem.

For an AI brain designed to navigate a maze, 4 inputs (distances to obstacles in each direction) and 2 outputs (control-axis values), with a single hidden layer, may be all that is needed. For facial recognition, you need an input for each pixel of the image (!), at least 4 hidden layers for the levels of image detail, and only 1 output (match or no match). Apart from the sheer number of matrix components, these two brains would be pretty much identical, as the sketch below illustrates.
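A rough NumPy sketch of that point: both "brains" are the same kind of object, a stack of weight matrices with a nonlinearity between them, and only the layer sizes differ. The specific sizes below (a 16-unit hidden layer for the maze, a 64x64 image and four hidden layers for the face matcher) are illustrative assumptions, not taken from any particular system.

```python
# Two fully connected networks that differ only in their layer sizes.
import numpy as np

def make_mlp(layer_sizes, seed=0):
    """Return a list of (weights, biases) for a fully connected network."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(params, x):
    """Plain feed-forward pass with tanh hidden activations."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Maze navigator: 4 obstacle distances in, one hidden layer, 2 control axes out.
maze_net = make_mlp([4, 16, 2])
print(forward(maze_net, np.zeros(4)).shape)        # (2,)

# Face matcher: one input per pixel of a 64x64 image, several hidden layers,
# a single match / no-match output. Same structure, just bigger matrices.
face_net = make_mlp([64 * 64, 512, 256, 128, 64, 1])
print(forward(face_net, np.zeros(64 * 64)).shape)  # (1,)
```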
 
Slightly off topic, but very interesting, IMO... Watched a YouTube video last night about teaching a computer to play chess blindfolded.
It was an interesting problem. "Blindfolded chess" is where players literally play blindfolded and have to keep the position of every piece memorized; one illegal move and they lose. The problem is that computers have flawless memory, so the programmer adjusted the rules for computers.

The history of moves was made unavailable to the computer, so it could not derive the positions of the pieces (the flip side of a computer's perfect memory is that you can enforce its having no memory at all!). Also, the current locations of the pieces are provided to the program, but not which piece occupies any given square. So you know there is a piece at a1, and maybe you can assume it is a rook, but the computer only knows that some piece is there. Finally, the rule about illegal moves is dropped, and the computer is allowed to check whether a desired move is legal: if a piece is a pawn and the computer wants to move it sideways, it will just be told no and can try again.

There is a bit more to it, and the video is mostly about the interesting algorithms used to solve the problem. (A rough sketch of those modified rules follows the link below.)

https://www.youtube.com/watch?v=DpXy041BIlA
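For concreteness, here is a sketch of those modified rules, with the python-chess library playing the hidden referee. This is my own reconstruction, not the code from the video: the "player" only ever sees which squares are occupied (with no piece identities) and whether a proposed move was accepted.

```python
# Sketch of a "blindfold" referee: the engine side gets occupancy only,
# no piece identities, no move history, and may probe moves for legality.
import random
import chess

class BlindfoldReferee:
    def __init__(self):
        self._board = chess.Board()   # full game state, hidden from the player

    def occupied_squares(self):
        """The player's entire view: squares holding some piece, identity unknown."""
        return {chess.square_name(sq) for sq in chess.SQUARES
                if self._board.piece_at(sq) is not None}

    def try_move(self, uci: str) -> bool:
        """Apply the move if legal; otherwise just answer 'no' (no penalty)."""
        move = chess.Move.from_uci(uci)
        if move in self._board.legal_moves:
            self._board.push(move)
            return True
        return False

if __name__ == "__main__":
    ref = BlindfoldReferee()
    print("occupied:", sorted(ref.occupied_squares()))
    # Dumb player: guess source/target squares until the referee says yes.
    squares = [chess.square_name(sq) for sq in chess.SQUARES]
    while True:
        a, b = random.sample(squares, 2)
        if ref.try_move(a + b):
            print("accepted:", a + b)
            break
```

A real player in this setting would keep its own beliefs about which piece is probably where and update them from the yes/no answers, which is roughly the problem the video's algorithms tackle.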
 