Artificial Intelligence Overwhelms Pros at Poker
Whilst we humans must content ourselves with swimming with dolphins, our "companions" with artificial intelligence (or AI) can play chess with world champions, calculate faster than any human, and now play and win at poker against some of the best players in the world.
Artificial intelligence never ceases to amaze us. Computers can make calculations in a split second. They can sift through vast amounts of data and give us exactly the information we want. For writers, they auto-correct our most common typing mistakes. They have all but eliminated the need for dictionaries to tell us how to spell the notoriously difficult English language. But can they bluff a human out of a big pot?
One of Several Teams
A team of researchers from Czech Technical University and Charles University in the Czech Republic, together with the University of Alberta, has written software called DeepStack that they say will soon allow computers to defeat the very top professional poker players. So far, the researchers have concentrated solely on no-limit Texas Hold'em. It's hard enough to program a computer to win at one poker variation; getting a computer, or even a group of computers, to beat the pros at every variation is likely a very long way off. But doing it in one poker variation is big news in and of itself.
One AI versus 33 Pros
The team pitted DeepStack against 33 top pros, chosen with the help of the International Federation of Poker. They played 44,852 hands, and the computer won resoundingly! The game was no-limit Texas Hold'em, which has far more variation than limit hold'em. So the accomplishment is a big one, though much work remains before it can be said, without many caveats, that computers can beat humans at poker. The team from the University of Alberta claimed two years ago to have essentially solved the comparatively simple limit hold'em. Many researchers say that no-limit hold'em will never be solved!
In addition to concentrating its efforts on Texas Hold'em, the research team has so far pitted its computer (or bot, short for robot) only in one-on-one encounters with poker pros. This means that the complexity of playing against multiple opponents is still beyond the reach of today's bots. The main statistic used to denote win rate is the mbb, which stands for milli-big blinds. Here, a little poker terminology is in order to grasp both the great accomplishment of this research team, one of many pursuing the same goal, and the vast minefield of poker excellence yet to be captured.
Poker as we play it at home usually involves antes, an amount every player puts in the pot before the cards are dealt. The ante guarantees that there will be at least a minimal pot. In professional hold'em, there aren't antes per se. Instead, two forced bets, the small blind and the big blind, rotate around the table along with the dealer position on every hand. In a casino, the players don't actually deal the cards, but the "dealer" designation still revolves around the table, and the blinds move with it. The big blind is usually twice the small blind. A player who doesn't have enough money to cover a big blind bet is functionally out of the game. In big-money tournaments the blinds are large sums, so you need to keep your stack well stocked to stay in the game.
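The rotation described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical player names and blind sizes, for a multi-player table (heads-up play, as in the DeepStack matches, uses slightly different seating conventions):

```python
# Minimal sketch: how the dealer button and blinds rotate each hand.
# Player names and blind amounts are hypothetical, for illustration only.

def blind_positions(players, hand_number, small_blind=50, big_blind=100):
    """Return the dealer, small-blind, and big-blind seats for a given hand."""
    n = len(players)
    dealer = players[hand_number % n]
    sb = players[(hand_number + 1) % n]   # seat left of the dealer
    bb = players[(hand_number + 2) % n]   # seat left of the small blind
    return dealer, (sb, small_blind), (bb, big_blind)

players = ["Alice", "Bob", "Carol", "Dave"]
for hand in range(3):
    dealer, (sb, sb_amt), (bb, bb_amt) = blind_positions(players, hand)
    print(f"Hand {hand + 1}: dealer={dealer}, "
          f"small blind={sb} ({sb_amt}), big blind={bb} ({bb_amt})")
```

Note that the big blind defaults to twice the small blind, matching the usual convention mentioned above.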
One mbb is one-thousandth of a big blind. Players often use mbb/g to indicate how much they have won per game, that is, per hand. A win rate of 50 mbb/g is considered very good by the top pros, and the computer won at a massive 492 mbb/g!
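To see what that rate means in practice, here is a quick sketch converting an mbb/g win rate into total big blinds won over a number of hands (the function name is my own; only the 492 mbb/g rate and the 44,852-hand count come from the article):

```python
# Sketch: converting a win rate in mbb/g (milli-big-blinds per game)
# into total big blinds won. 1 mbb = 1/1000 of a big blind.

def winnings_in_big_blinds(win_rate_mbb_per_game, hands_played):
    """Total big blinds won at a given mbb/g rate over a number of hands."""
    return win_rate_mbb_per_game / 1000 * hands_played

# At the article's reported 492 mbb/g over 44,852 hands:
total_bb = winnings_in_big_blinds(492, 44852)
print(round(total_bb), "big blinds")  # roughly 22,000 big blinds
```

At ten times the 50 mbb/g rate the top pros consider very good, that works out to roughly 22,000 big blinds over the study.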
The reason that DeepStack has thus far played only against single opponents is that in bigger games the strategy changes depending on whether you're the big blind, the small blind, or the dealer. In a bigger game, the later you must make a decision, the more information you have and the better your position. The mental calculations the best pros make when they're up against other top pros are still far beyond the ken of any AI.
In the world of science, breakthrough claims must be peer reviewed, and these claims have yet to go through that rigorous process. Already, many critics have denigrated the results of these 44,852 hands played against poker pros. The first criticism is that whilst the players were good poker players, they were far from the best the world can offer. Some are questioning the timing of the announcement, just days before an artificial intelligence devised by a Carnegie Mellon University team, led by Noam Brown and Tuomas Sandholm, is set to play fully 120,000 hands against four poker pros at a Pittsburgh casino. The new AI is called Libratus. As it is the product of a team of programmers who started from scratch, we might reasonably expect some competition between AI bots in the future. Even at 120,000 hands, the Carnegie Mellon experiment will be criticized because its best player, Jason Les, is ranked only number 198 on the Global Poker Index. At that level, he's won over one million dollars playing poker, but that certainly won't be good enough for the naysayers.
In any case, we can and should marvel that in 44,852 Texas Hold'em hands against 33 top players, the computer won decisively, winning at more than four standard deviations above the expected average "good" performance. The statisticians out there will understand that four standard deviations is a lot, so it's hats off to the team from Canada and the Czech Republic!
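For readers who aren't statisticians, a rough back-of-the-envelope calculation shows why four standard deviations is a lot. Under a normal approximation, the chance of landing at least four standard deviations above the mean purely by luck is tiny:

```python
# Sketch: the one-sided probability of a result at least z standard
# deviations above the mean, under a normal approximation.
from math import erf, sqrt

def one_sided_p_value(z):
    """P(Z >= z) for a standard normal variable Z."""
    return 0.5 * (1 - erf(z / sqrt(2)))

p = one_sided_p_value(4)
print(f"{p:.2e}")  # roughly 3 in 100,000
```

In other words, a result this extreme would arise by chance only about three times in a hundred thousand, which is why the researchers can claim a decisive win rather than a lucky streak.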