
Computers become champions in yet another game - poker

lpetrich

Contributor
Joined
Jul 27, 2000
Messages
25,571
Location
Eugene, OR
Gender
Male
Basic Beliefs
Atheist
Robots Are Beating Humans At Poker | FiveThirtyEight - "It’s hard to win against an opponent without a tell." (presumably a permanent poker face)
Or more precisely, poker-playing software. -- Superhuman AI for multiplayer poker | Science

Computers have been very successful in games with simple game worlds, determinism, and complete information. A simple game world is one that does not need many bits to describe it. Determinism means no random parts. Complete information means that the game state is completely accessible to all the players. Video games violate game-world simplicity with the details of their displays, games with thrown dice or shuffled cards violate determinism, and many card games violate complete information -- each player sees only some of the cards.

Some notable games with these features are tic-tac-toe, checkers, chess, and Go. Tic-tac-toe is completely solved, and a brute-force solution of it is a fairly simple exercise for a computer-science student. Checkers is also completely solved, though its solution required computing a large database of precomputed positions. Chess and Go have not been completely solved, and doing so is likely impractical, but some chess and Go software plays as well as human champions in those games.
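
That brute-force exercise is small enough to show in a few lines. Here is a minimal sketch in Python (my own illustration, not from any of the linked articles): a plain minimax search over the 9-cell board, confirming that perfect play by both sides is a draw.

Code:
LINES = [(0,1,2), (3,4,5), (6,7,8),
         (0,3,6), (1,4,7), (2,5,8),
         (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of `board` with `player` to move: +1 if X can force a win,
    -1 if O can force a win, 0 if best play leads to a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    values = []
    for i, cell in enumerate(board):
        if cell == '.':
            values.append(minimax(board[:i] + player + board[i+1:], other))
    return max(values) if player == 'X' else min(values)

print(minimax('.' * 9, 'X'))  # prints 0: tic-tac-toe is a draw with perfect play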

Departing from determinism is backgammon, though it also has a simple game world and complete information. Here also, some backgammon software can play at championship level.

I now get to poker, which both departs from determinism and has incomplete information, though it has a simple game world. There are numerous versions of poker, but all of them have in common that no player knows which cards will be drawn from the deck, beyond knowing that cards already dealt cannot appear again, and that each player gets some cards which only they see for part of the game: the player's hand.

Some poker players bluff, that is, bet as if their hand is stronger than it actually is. A bluffer hopes that the other players will decide that they are likely to lose. They will then fold, withdrawing from the current round to reduce their losses. But other players may find a bluff unconvincing, and that is where calling one's bluff comes from.

From the 538.com article,
By 2007 and 2008, computers, led by a program called Polaris, showed promise in early man vs. machine matches, fighting on equal footing with, and even defeating, human pros in heads-up limit Hold ‘em, in which two players are restricted to certain fixed bet sizes.

In 2015, heads-up limit Hold ’em was “essentially solved” thanks to an AI player named Cepheus. This meant that you couldn’t distinguish Cepheus’s play from perfection, even after observing it for a lifetime.

In 2017, in a casino in Pittsburgh, a quartet of human pros each faced off against a program called Libratus in the incredibly complex heads-up no-limit Hold ’em. The human pros were summarily destroyed. Around the same time, another program, DeepStack, also claimed superiority over human pros in heads-up no-limit.

...
Brown and Sandholm’s latest creation, named Pluribus, is superhuman at a flavor of no-limit poker with more than two players — specifically, six — which is identical to one of the most popular forms of the game played online and very closely resembles the game I was playing in that room in the desert.
It was trained on a common poker version called Texas Hold 'em with no-limit betting.

Pluribus was trained much like AlphaGo and AlphaZero, champion-level software for playing Go and chess. It started from scratch and played against copies of itself repeatedly, tweaking its decision parameters to improve its performance.
The finished program, which ran on just a couple of Intel CPUs, was pitted against top human players — each of whom had won at least $1 million playing as a professional — in two experiments over thousands of hands: one with one copy of Pluribus and five humans and another with one human and five copies of Pluribus. The humans were paid per hand and further incentivized to play their best with cash put up by Facebook. Pluribus was determined to be profitable in both experiments and at levels of statistical significance worthy of being published in Science.
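
The Science paper credits a form of counterfactual regret minimization for that self-play training. As a toy illustration of the underlying idea (my own sketch in Python, not Pluribus's actual algorithm or code), here is plain regret matching applied to rock-paper-scissors self-play: after every round the agent tallies how much better each alternative action would have done, and it plays high-regret actions more often next time. The averaged strategy converges toward the unexploitable mix of one third each.

Code:
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: payoff to the player choosing a against an opponent choosing b
PAYOFF = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]        # one regret table per player
strategy_sums = [[0.0] * ACTIONS, [0.0] * ACTIONS]  # for the averaged strategy

for _ in range(100_000):
    strategies = [strategy_from_regrets(r) for r in regrets]
    moves = [random.choices(range(ACTIONS), weights=s)[0] for s in strategies]
    for p in range(2):
        my, opp = moves[p], moves[1 - p]
        for a in range(ACTIONS):
            # Regret for not having played a instead of what was actually played
            regrets[p][a] += PAYOFF[a][opp] - PAYOFF[my][opp]
            strategy_sums[p][a] += strategies[p][a]

total = sum(strategy_sums[0])
print([round(s / total, 3) for s in strategy_sums[0]])  # roughly [0.333, 0.333, 0.333]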

Neither the 538.com article nor the Science-magazine article mentioned bluffing, however. It would be interesting to see if some poker AI could guess that a player might be bluffing -- or if it could do some bluffing of its own.

Another aspect of human poker playing is that some players may try to get around the game's incomplete information by looking at other players' facial expressions and the like to guess what they are thinking. Some players try to avoid giving away such information by keeping a "poker face", an expressionless appearance, and by wearing sunglasses. Members of other species can pick up such cues too; a famous case was Clever Hans, a seemingly knowledgeable horse whose "knowledge", as a psychologist demonstrated, came from reading the unwitting cues of the people around it.

Online poker gets around that issue, though interpreting such body cues is another AI challenge.
 
lpetrich said:
Chess and Go have not been completely solved, and doing so is likely impractical, but some chess and Go software plays as well as human champions in those games.
A lot better than human champions, at least in chess (in Go, I'm not sure what sort of computer is needed to beat all humans easily, but it's doable). For example, Stockfish (or, for that matter, Komodo, Houdini, Fire, Ethereal, Xiphos, Laser, or a rather long list of other chess engines, leaving aside the neural-network ones), running on a standard laptop, would beat Magnus Carlsen easily, even when giving significant odds. (Strictly speaking, the information isn't complete for the chess program either, since it has no access to how much time the opponent has left.)
 
Might just be me, but I don't find this that impressive. I mean, it's impressive in the sense that we've trained an electrified rock to beat people in a game. But then, all that really takes is feeding it somewhat complex algorithms, and giving it adequate memory and processing power.

Of course if you're able to build an artificial machine, and give it enough memory and speed, it can make calculations in games like this quickly. But why is this impressive? What is our frame of reference? What are we comparing it to? Are we supposed to just sit there in awe every time a computer does a new thing?

And to date this kind of technology seems to be driving an advertising-based digital economy which incentivizes stripping people of their privacy, destabilizing major economic centers through political interference, literally killing people with weapons, and who knows what else.

So I say fuck the poker playing computers, let's go back to the woods.
 
Might just be me, but I don't find this that impressive. ... So I say fuck the poker playing computers, let's go back to the woods.

It requires a lot of work by a lot of intelligent people to develop the required hardware and software. But that aside, our point of reference is humans. Machines keep beating humans at different stuff.

As for going "back" to the woods (I never lived there, though), comparing quality of life, I'd rather not. But people are (in some parts of the world, at least) free to live without modern technology.
 
Machines should be able to beat humans. That's what machines are for.

Nobody wants a backhoe that can only dig trenches as fast as a human. Nobody wants a crane that can only lift as much, or as far, as a human. Nobody cares for a car that can only go as fast as a human (with the possible exception of Skoda drivers).

Humans are generalists. Show me a computer that can play Go, and can dig ditches, and can compose symphonies, and can travel at 130km/h, and can design a jet engine, and can bring another person to orgasm, and can cook a perfect soufflé, and can learn three languages, and can swim the English channel, and can tell an hilarious joke, and can comfort a crying child, and can teach quantum physics to a class of slightly bored undergraduates, and can fly to the moon and return safely to the Earth before this decade is out.

Then I might be impressed.

Machines are supposed to be very good at something. Humans are moderately good at anything. We choose to do these things, to go to the moon, and these other things, not because they are easy, but because they are hard.

Humans should be judged, not on their ability to do one thing really well, but on their ability to do everything, even if we make a bit of a half-arsed job of most of it.

"A computer once beat me at chess, but it was no match for me at kick boxing" - Emo Phillips.
 
Humans are generalists. Show me a computer that can play Go, and can dig ditches, and can compose symphonies, ... and can cook a perfect soufflé, ... and can fly to the moon and return safely to the Earth before this decade is out.

Hold on--I haven't quite perfected the soufflé.
 
Couldn't a computer learn a human player's likelihood of bluffing? As in, keep track of how often a player raises bets but ends up with a poor hand.

I play Texas Hold'em via app, and the other players are only a static image or an avatar, so I can't read any tells either. But even though I'm a middling player, sometimes I get a hunch about an opponent based on his or her previous plays.

One caveat is that a person doesn't have to show his cards when folding, so it can be harder to know what kind of hand they had. Maybe apply a percentage: if a person folds, the odds their hand was poor are greater than 50/50.
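
A minimal sketch of that bookkeeping (my own illustration in Python; the player name, the flags, and the 50/50 prior are assumptions for the example, not anything from the articles): count, per opponent, how often a big raise was eventually shown down with a weak hand, and soften the estimate with a prior so a couple of hands don't swing it to extremes.

Code:
from collections import defaultdict

# For each opponent, a list of (raised_big, showed_weak_hand) observations
# taken from completed hands; only showdowns give the second flag.
observations = defaultdict(list)

def record_hand(player, raised_big, showed_weak_hand):
    observations[player].append((raised_big, showed_weak_hand))

def estimated_bluff_rate(player, prior=0.5, prior_weight=2):
    """Fraction of big raises that were shown down weak, blended with a
    50/50 prior so one or two hands don't drive the estimate to 0% or 100%."""
    raises = [weak for raised, weak in observations[player] if raised]
    return (sum(raises) + prior * prior_weight) / (len(raises) + prior_weight)

# Example usage with made-up hands for a hypothetical opponent "Jane":
record_hand("Jane", raised_big=True, showed_weak_hand=True)
record_hand("Jane", raised_big=True, showed_weak_hand=False)
record_hand("Jane", raised_big=False, showed_weak_hand=False)
print(round(estimated_bluff_rate("Jane"), 2))  # 0.5 with these three hands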
 
Might just be me, but I don't find this that impressive. ... So I say fuck the poker playing computers, let's go back to the woods.
The difference is that in poker, you can win even with a losing hand. In chess, you can only win outright; you can't bluff your way out of a loss.
 
Couldn't a computer learn a human player's likelihood of bluffing? As in, keep track of how often a player raises bets but ends up with a poor hand.

I play Texas Hold'em via app, and the other players are only a static image or an avatar, so I can't read any tells either. But even though I'm a middling player, sometimes I get a hunch about an opponent based on his or her previous plays.
Their tells will be their betting behavior. Not the same as at a table, but there are behaviors to observe.

One caveat is that a person doesn't have to show his cards when folding, so it can be harder to know what kind of hand they had. Maybe apply a percentage: if a person folds, the odds their hand was poor are greater than 50/50.
I'd be interested in how the computer can tell a bluff, whether from the odds being low that the human has a good hand, or from the human's previous wagering behavior. Granted, knowing the odds makes it a lot easier to gauge whether to advance or fold.

I'd also be interested in the subtlety of the computer's betting. A player can't discern a computer's method of playing until having played several hands against it, and I imagine the algorithm has multiple playing personalities.
 
Might just be me, but I don't find this that impressive. ... So I say fuck the poker playing computers, let's go back to the woods.
The difference is that in poker, you can win even with a losing hand. In chess, you can only win outright; you can't bluff your way out of a loss.

That's all the computer's been fed, though. Put enough smart people in a room for a while, and they'll crack the algorithm. The computer is doing the same thing it does when it beats someone at chess.

The larger point, though, was that we all sit here dumbstruck about what technology is able to do these days, with no regard for what it shouldn't be doing. At the end of the day it's just another tool in the belt for warfare and profit, without much of a seat-belt on. And given how catastrophic 'groundbreaking' inventions have been in the past (see: motorcar), we might want to pick our jaws off the floor and start thinking about the consequences of what we're building, rather than the technologies themselves.
 
Facebook’s Poker Bot Shows How A.I. Can Adapt to Liars - "In a new study with significant real-world implications, a poker bot crushed human pros at six-player, no-limit Texas Hold ’em"
Sometimes, poker is all about the bluff. Make the table believe you have a full house when you really have a low pair, and it can pay off big time. Read your opponents — a grimace here, a smirk there — and bet accordingly.

...
A poker-playing bot called Pluribus recently crushed a dozen top poker professionals at six-player, no-limit Texas Hold ’em over a 12-day marathon of 10,000 poker hands. Pluribus was created by Noam Brown, an A.I. researcher who now works at Facebook, and Tuomas Sandholm, a computer science professor at Carnegie Mellon University in Pittsburgh. (The two co-authored the paper in Science.)

...
The game is really a simulator for how an algorithm could master a situation with multiple deceptive adversaries that hide information and are each trying to pressure the other to quit. A.I. can already calculate probability far better and far faster than any human being. But poker is as much about coping with how humans lie as it is about reading the cards, which is exactly why it’s a useful game for A.I. to learn.

“I think this is really going to be essential for developing A.I.s that are deployed in the real world,” Brown told OneZero, ”because most real-world, strategic interactions involve multiple agents, or involve hidden information.”
Part of that is, of course, bluffing, and the article mentions that strategy. This software does not try to read facial expressions or other such player cues, and human players who play online are in the same situation. So it has to decide from players' betting histories.
The new bot, Pluribus, doesn’t adapt to other players at the table — it won’t try to understand how John and Jane play the game differently. It doesn’t have a tell — a sign that they might be bluffing or in fact actually have a good hand — and it only bluffs when it’s calculated that it’s a sound strategy, statistically speaking.

“People have this notion that bluffing is this very human thing where you’re looking at the other person and the other person’s eyes, and trying to read their soul, and trying to tell if they’re going to fold or if they’re bluffing right now,” Brown told OneZero. “That’s not really what it’s about. It’s really a mathematical thing. Bluffing is all about balancing betting with good hands with betting with bad hands, so that you’re unpredictable to your opponents.”
So the software is capable of bluffing.
 
Now for quantifying game-world complexity. Game complexity can be measured in several ways; I use state-space complexity, since it is a measure of how many possible configurations the game can be in, and I express it here as a number of bits (the base-2 logarithm of that count).
  • Tic-tac-toe: 10
  • Checkers: 65
  • Chess: 155
  • Go: 565
  • Backgammon: 65
  • Poker: 226 for the deck order, 21 for a hand, 34 for the visible Texas Hold 'em cards
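
For concreteness, here is a small sketch (my own, in Python) of how such bit counts come about: take a commonly quoted state-space size, take its base-2 logarithm, and you land close to the numbers in the list above; the deck and hand figures for poker follow directly from the 52! deck orderings and the C(52,5) possible hands.

Code:
import math

# Commonly quoted order-of-magnitude state-space sizes (rough estimates; sources vary)
log10_states = {
    "tic-tac-toe": 3,     # ~10^3 positions
    "checkers":    20,    # ~10^20
    "chess":       47,    # ~10^47
    "Go (19x19)":  170,   # ~10^170
    "backgammon":  20,    # ~10^20
}

for game, exp10 in log10_states.items():
    bits = exp10 * math.log2(10)   # convert a power of ten into bits
    print(f"{game}: about {bits:.0f} bits")

print(f"deck order (52!): about {math.log2(math.factorial(52)):.0f} bits")   # ~226
print(f"5-card hand C(52,5): about {math.log2(math.comb(52, 5)):.0f} bits")  # ~21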

Many video games have more realistic-looking game-world displays, and I will attempt to estimate how many bits those take. Game consoles have well-defined hardware, so I will use them for the estimates.

AI software has been made to learn video games by looking at the display framebuffer output and deciding which control actions to take. One can run game consoles in emulators, which for the earlier consoles are likely faster than the original hardware.
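
As one data point for that estimate (my own sketch; it assumes the 210 by 160 pixel, 128-color raw Atari 2600 frame that the DeepMind Atari work cited below used as its input), even a single raw frame carries far more bits than the board-game states listed earlier:

Code:
import math

width, height = 160, 210     # raw Atari 2600 frame as used in the DQN work
colors = 128                 # 128-color palette
bits_per_pixel = math.log2(colors)               # 7 bits per pixel
bits_per_frame = width * height * bits_per_pixel
print(f"{bits_per_frame:,.0f} bits per frame")   # 235,200 bits, roughly 29 kilobytes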

AI masters 49 Atari 2600 games without instructions | Ars Technica - Human-level control through deep reinforcement learning | Nature
 