
Playing Head-To-Head Texas hold’em

July 6, 2010


Texas hold’em all-in strategy and related problems

Andrew Beal is a billionaire who made his fortune in banking and related ventures. While he is not a mathematician, he has a conjecture named after him: the Beal Conjecture.

Today I want to talk not about his conjecture, but his adventures playing Texas hold’em against some of the best players in the world. I believe there is an interesting property of this poker game that should be studied more carefully.

Okay, I guess I should at least state his conjecture. Beal’s conjecture is a generalization of Fermat’s Last Theorem. If

\displaystyle  A^x + B^y = C^z,

where {A}, {B}, {C}, {x}, {y} and {z} are positive integers and the exponents {x}, {y} and {z} are all greater than {2}, then {A}, {B} and {C} must have a non-trivial common factor. The classic Fermat’s Last Theorem is the special case when {x=y=z}.

I believe it is still open, and Beal is offering a $100,000 prize for its resolution. I say, “I believe,” since there are many claims and counter-claims on this conjecture. So before you start to work hard to get the prize please check and be sure the problem is still open.

This problem is definitely one an amateur could solve—I can imagine two approaches. You might find an elementary argument that proves if there is a solution where {A,B,C} have no non-trivial common factor, then {x=y=z}. This would yield a proof that there are no solutions—in a sense Andrew Wiles would have done all the hard work. Another possible approach is to find a counter-example. I suspect a naive brute force search is unlikely to succeed, but a small insight and enough computer time could find a counter-example.
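To make the brute-force idea concrete, here is a minimal counter-example search in Python. The ranges are tiny and purely illustrative; any serious attempt would need a real insight to prune the search, in the spirit of Euler’s calculation below.

    from math import gcd

    def search_beal(max_base=50, max_exp=7):
        # Look for A^x + B^y = C^z with x, y, z >= 3 and gcd(A, B, C) = 1.
        # First tabulate all perfect powers C^z (z >= 3) up to the largest sum.
        limit = 2 * max_base ** max_exp
        powers = {}                                # value -> (C, z)
        C = 2
        while C ** 3 <= limit:
            v, z = C ** 3, 3
            while v <= limit:
                powers[v] = (C, z)
                v, z = v * C, z + 1
            C += 1
        hits = []
        for A in range(1, max_base + 1):
            for x in range(3, max_exp + 1):
                Ax = A ** x
                for B in range(A, max_base + 1):   # A <= B by symmetry
                    for y in range(3, max_exp + 1):
                        s = Ax + B ** y
                        if s in powers:
                            C, z = powers[s]
                            if gcd(gcd(A, B), C) == 1:
                                hits.append((A, x, B, y, C, z))
        return hits

    print(search_beal())   # prints [] for ranges this small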

An example of this approach is due to Leonhard Euler, who in 1732 showed that

\displaystyle  F_5 = 2^{2^5} + 1 = 4294967297 = 641 \times 6700417.

This proved that not all Fermat numbers are prime. Euler did not do this by a brute-force search: he proved a simple lemma about the possible divisors of such Fermat numbers. In particular, he proved every factor of {F_n = 2^{2^n} + 1} is of the form {k2^{n+1} + 1}. This greatly helped his search, since all the calculations were done by hand.
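Euler’s lemma turns an enormous trial division into a short scan over candidates of the right form. Here is a sketch of the same idea in Python; for {n=5} it recovers the factor {641 = 10 \cdot 2^6 + 1} essentially instantly.

    def fermat_factor(n, max_k=100_000):
        # By Euler's lemma, every factor of F_n = 2^(2^n) + 1 has the
        # form k * 2^(n+1) + 1, so only such candidates need be tried.
        F = 2 ** (2 ** n) + 1
        step = 2 ** (n + 1)
        for k in range(1, max_k + 1):
            d = k * step + 1
            if d * d > F:
                break                  # no factor below sqrt(F): F is prime
            if F % d == 0:
                return d
        return None

    print(fermat_factor(5))   # 641, so F_5 is composite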

Let’s move from number theory to probability theory and poker.

Texas hold’em

One of the most popular poker games these days is Texas hold’em. It is played in dorm rooms, on-line, and there are many big money tournaments on ESPN. I believe the interest in the game is driven by its simple rules, by the amount of money available to winners of the major tournaments, and by the feeling that, somehow, the game is not that “hard” to play well.

I will not give the complete rules of Texas hold’em here, since there are so many good descriptions on-line. See here for one. Here is a quote explaining the game at a high level:

Texas hold’em is a variation of the standard card game of poker. The game consists of two cards being dealt face down to each player and then five community cards being placed by the dealer—a series of three (“the flop”) then two additional single cards (“the turn” and “the river”), with players having the option to check, bet or fold after each deal, i.e. betting may occur prior to the flop, “on the flop,” “on the turn,” and “on the river.”

I have played poker over the years for tiny stakes, really tiny. I started a standing game at Yale, then one at Berkeley, and finally one at Princeton. We played “dealer’s choice,” and allowed lots of strange and crazy games: from hi-low games to games with names like anaconda, 727, and others with names I no longer even remember.

Let me recall one fun hand that happened at Yale. Stan Eisenstat was sitting to my right and he was the only player left in a large pot against another player, whom I will call X. The game was a crazy form of stud with individual face-down and face-up cards, and with the following property: we each passed cards at the beginning to our neighbors. In this case I had received three cards from Stan. He was showing a very strong hand—it looked like he could have four kings. Player X was showing a possible straight flush:

\displaystyle  9\Diamond \ 7\Diamond \ 6\Diamond \ 5\Diamond

While I was out of this game, I knew that Stan knew that X could not have the straight flush. I knew this since the key card, the {8\Diamond}, was one that Stan had passed to me at the beginning of the game. The card was in the discard pile—it could not be in X’s hand. Stan finally called a large raise of X and they both showed their hands. Of course X had a flush, but not a straight flush. And Stan did have his four kings.

Stan was happy as he raked in the pot, and said something about how he thought X was bluffing. I looked at Stan and said, “You passed me the key card, don’t you remember?” Stan said no, he had forgotten. Oh well. That’s why we are not professional card players.

Beal Plays The Pros

Beal likes to play Texas hold’em and wanted to see how well he could do against the world’s best players. Eventually, he worked out a plan: the best players in Las Vegas formed a “team” and Beal played one of the team members in head-to-head hold’em. After a while another team member would take over, but Beal played on alone. These matches happened over a period of time; often Beal would fly into Vegas, play for a day or two and then go back to Dallas.

There were two reasons for the team setup. For one, Beal wanted to play very high stakes—recall he is a billionaire—and he felt that the large stakes would favor him. The pros played serious poker, but never for the stakes Beal wanted, so he hoped the large stakes might take them out of their comfort zone. For another, Beal wanted to play head-to-head. He was afraid that if he played several of them at once, they might be able to gang up on him. They could even cheat him—it would take only a very small amount of information moving from one player to another to make the game unfair.

Beal turned out to be a very strong player, but the pros were some of the best in the world. He won some money, lost some money, and again won some money. But finally the team prevailed and won a ton of money, and he gave up poker—although not forever. There is a fun book on Beal and the pros written by Michael Craig—it is called “The Professor, the Banker, and the Suicide King: Inside the Richest Poker Game of All Time.”

Alice Plays Bob

Since we do not have real money to play the pros, we will consider what happens when Alice and Bob play Texas hold’em. Imagine that Bob is invited to play a head-to-head match of Texas hold’em with Alice. She is a professional poker player, and worse, she is a world champion at hold’em. Does Bob have any chance at all of winning?

Let’s be precise and state the rules they will be using for betting. Each hand the ante for each player will be one chip, and they will each start with {n} chips. The betting is no-limit: a player’s bets are limited only by the amount of money they have in front of them. The match is over when one player gets all the chips. Note, I have stipulated a simple ante rather than the “blind bets” actually used in hold’em. This simplifies the discussion without greatly changing the results.

The surprise, I think, is that Bob can follow a strategy which ensures that Alice will win less than {100\%} of the time—even though Bob is a weak player. Having a reasonable chance of winning at the world-championship level is cool—I think. There is nothing like this for other games: in chess Bob would be killed, in backgammon he would very likely lose, and the same holds for bridge, Scrabble, or almost any reasonable game. But not for Texas hold’em. Ken Regan pointed out that a book on poker by James McManus has a chapter on this very issue.

A Strategy

Theorem: For a match of head-to-head Texas hold’em there is a simple strategy that wins with probability {\delta>0} against any player, where {\delta} is independent of {n}.

Here is a sketch of the proof. Bob plays the following simple strategy: every hand he bets all his chips—in poker terms he “goes all in.” For our analysis he may even tell Alice that this is his strategy, and he does not mind always betting first, even though betting first is a major disadvantage under ordinary poker strategies.

Clearly each hand Alice has a simple choice: she can either call Bob’s bet or fold her hand. Consider the first hand. Alice is dealt some hand {h}. She can calculate the probability {h} will win and decide whether or not to go all in. If she does, then the match is over—either Bob or Alice wins. Actually, there always is a small chance of a tie in Texas hold’em, so there is a chance they will essentially start the match over.

Alice could also look at the hand {h} and decide to fold. In this case Bob now has {n+1} chips and Alice {n-1}. Then, they go on to the next hand and Alice has the same decision to make.

The key insight is that no matter what hand Alice finally decides to call Bob with, there is a positive probability that Bob will win.
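To get a feel for why {\delta > 0}, here is a rough Monte Carlo sketch that pits a random hand against a pair of aces, the best possible starting hand. The evaluator simply ranks the best five of seven cards; nothing is optimized, and the estimate should land near the roughly 15% figure that emerges in the comments below.

    from collections import Counter
    from itertools import combinations
    import random

    def rank5(hand):
        # Rank five cards (0..51: rank = c // 4, from deuce=0 to ace=12;
        # suit = c % 4).  Returns a tuple; a bigger tuple is a better hand.
        ranks = sorted((c // 4 for c in hand), reverse=True)
        flush = len({c % 4 for c in hand}) == 1
        counts = Counter(ranks)
        groups = sorted(counts.items(), key=lambda rc: (rc[1], rc[0]), reverse=True)
        shape = sorted(counts.values(), reverse=True)
        tiebreak = tuple(r for r, _ in groups)
        uniq = sorted(set(ranks), reverse=True)
        straight = None
        if len(uniq) == 5 and uniq[0] - uniq[4] == 4:
            straight = uniq[0]
        elif uniq == [12, 3, 2, 1, 0]:              # the A-2-3-4-5 wheel
            straight = 3
        if straight is not None and flush:
            return (8, straight)                    # straight flush
        if shape == [4, 1]:
            return (7,) + tiebreak                  # four of a kind
        if shape == [3, 2]:
            return (6,) + tiebreak                  # full house
        if flush:
            return (5,) + tuple(ranks)
        if straight is not None:
            return (4, straight)
        if shape == [3, 1, 1]:
            return (3,) + tiebreak                  # three of a kind
        if shape == [2, 2, 1]:
            return (2,) + tiebreak                  # two pair
        if shape == [2, 1, 1, 1]:
            return (1,) + tiebreak                  # one pair
        return (0,) + tuple(ranks)                  # high card

    def best7(cards):
        return max(rank5(c) for c in combinations(cards, 5))

    def random_hand_vs_aces(trials=20_000, seed=1):
        rng = random.Random(seed)
        aces = [48, 49]                             # Alice holds two aces
        rest = [c for c in range(52) if c not in aces]
        wins = ties = 0
        for _ in range(trials):
            deal = rng.sample(rest, 7)              # Bob's two cards plus the board
            board = deal[2:]
            a, b = best7(aces + board), best7(deal)
            wins += b > a
            ties += b == a
        return wins / trials, ties / trials

    print(random_hand_vs_aces())   # win estimate near the ~15% figure below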

An interesting open problem is: what is the exact probability {\delta} that Bob wins under best play by Alice? The task of computing {\delta} is compounded by the fact that once Alice folds and Bob has more chips than Alice, an all-in bet that Bob loses will no longer end the match. Bob need only bet as much as Alice’s stack to put Alice all-in, and he will still have the chips by which he was leading. Thus even if Alice wins the first played hand, Bob could still get back in the match and win with a series of doubling wins with his remaining chips. This is more of a complication than enumerating the possible hands and their win probabilities (even taking ties into account), for which published computer analyses are available.
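One way to organize this computation is as a dynamic program over stack sizes, solved by value iteration. Here is a toy sketch in Python: the equity grid is a made-up stand-in for the true distribution of preflop win probabilities, and ties are ignored, so the output only illustrates the structure of the calculation, not the true value of {\delta}.

    def match_values(total=200, grid=(0.30, 0.45, 0.55, 0.70, 0.85), tol=1e-9):
        # V[a] = Alice's probability of winning the whole match with a chips,
        # facing a Bob who goes all in every hand.  Each hand Alice draws a
        # win probability w uniformly from `grid` (made-up numbers, standing
        # in for real preflop equities), then folds or calls the all-in.
        V = [a / total for a in range(total + 1)]   # any initial guess works
        V[0], V[total] = 0.0, 1.0
        for _ in range(100_000):
            newV = V[:]
            for a in range(1, total):
                s = min(a, total - a)               # the effective all-in stake
                ev = 0.0
                for w in grid:
                    call = w * V[a + s] + (1 - w) * V[a - s]
                    ev += max(V[a - 1], call)       # fold loses the one-chip ante
                newV[a] = ev / len(grid)
            change = max(abs(u - v) for u, v in zip(newV, V))
            V = newV
            if change < tol:
                break
        return V

    V = match_values()
    print(V[100])   # Alice's chances from even stacks in this toy model

With real preflop equities for the 169 distinct starting hands in place of the grid, the same recursion would give the exact answer; the comments below point to computations of exactly this kind.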

Another point is that Bob is not revealing any information by his strategy. His play is the same all the time. He might as well choose to do this blind, without looking at his cards, and the probability would still be the same. So whatever Alice’s mind-reading skills may be, they are futile against Bob’s strategy.

Even more, Bob is probably best off if he avoids looking at his cards. Since he is a weak player he may have a “tell” that gives Alice information, so he is better off not even knowing his actual cards.

Alice’s Problem Generalized and Simplified

Here is a simplified problem that is related to Alice’s dilemma. Imagine there are {m} possible hands she can be dealt. The hands occur with probabilities

\displaystyle  {\cal O}_1, {\cal O}_2, \dots, {\cal O}_m.

Of course

\displaystyle  {\cal O}_1 + {\cal O}_2 + \cdots + {\cal O}_m = 1.

Each hand {i} also has a probability of winning {W_i}. Note, the values {W_i} do not sum to one. Also we assume that each {W_i < 1}: that is, there is no hand that always wins. Finally, we stipulate that a single win by Alice wins the match, even if Bob has chips left over.

The problem Alice faces is this. She gets hands with the given probability distribution {\{ {\cal O}_i \}} and must decide when to bet. Given that she will see at most {n} hands, what is her optimal strategy? Note, this generalizes Texas hold’em, but it ignores the issue of chip stack sizes.
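This simplified problem has a clean dynamic program. Let {V_k} denote Alice’s optimal probability of winning the match with {k} hands still to come. Then {V_0 = 0} and

\displaystyle  V_k = \sum_{i=1}^{m} {\cal O}_i \, \max(W_i, V_{k-1}),

since on each hand she either calls, winning the match with probability {W_i}, or folds and plays on with one fewer hand remaining; she should call with hand {i} exactly when {W_i > V_{k-1}}. Here is a minimal sketch in Python, with made-up numbers purely for illustration.

    def alice_optimal(O, W, n):
        # V[k] = Alice's best winning probability with k hands still to come:
        # on each hand she calls (winning with probability W[i]) or folds.
        V = [0.0] * (n + 1)
        for k in range(1, n + 1):
            V[k] = sum(o * max(w, V[k - 1]) for o, w in zip(O, W))
        return V

    O = [0.70, 0.25, 0.05]   # how often each hand class is dealt (made up)
    W = [0.45, 0.65, 0.85]   # its chance of winning the all-in (made up)
    V = alice_optimal(O, W, 50)
    print(V[1], V[50])       # with k hands left, Alice calls iff W[i] > V[k-1]

Since every {W_i < 1}, the values {V_k} increase to a fixed point strictly below {1}, which is another way to see that Bob’s {\delta} is positive.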

Open Problems

Find the exact value of {\delta} from the theorem. Also solve the problem outlined in the last section. What is the best strategy for Alice? It reminds me of other on-line decision problems such as the Online Secretary Problem, but seems to be different.

25 Comments
  1. July 6, 2010 10:36 pm

    Oddly, the generalization of the game reminds me a lot of how dating is sometimes analyzed. You get to see your utility function for each match in turn, and get to accept or reject at each step. The question is then what the strategy for accepting a match should be. Of course, that model ignores the interactivity of the dating protocol, but to zeroth order it is an interesting one to think about, I guess. I just wouldn’t have expected it to pop out of Texas Hold’em.

    • Whoopska permalink
      July 12, 2010 10:01 am

      The dating problem is usually analyzed with, instead of matches, secretaries. If you haven’t seen the solution before, it is pretty fun (and involves e).

  2. Tirxu permalink
    July 7, 2010 7:51 am

    Let’s try a solution for the open problems:

    The value of delta is the probability of winning with a random hand against a pair of aces (roughly 9%, if memory serves).

    In the generalization problem, order the hands by increasing quality (so hand number 1 is the worst, and hand number m is the best). Then, the optimal strategy can be described by an increasing series sigma of m integers with the following semantics: “Play i if there are less than sigma_i chips remaining”. Now, it is quite simple to recursively compute the values of sigma^{-1}. I don’t think that the explicit formula is much more complex, but it would require some more notation.

    • July 7, 2010 4:46 pm

      Tirxu, you may be right about the 9%, at least asymptotically with very large chip stacks. (To clarify, a point of the “Theorem” is that the positive probability for Bob on a given deal is bounded below by a constant whatever two cards Alice gets, and the betting is done pre-flop.)

      Here’s an attempt to make your argument concrete. Suppose n = 6,000. Alice’s strategy is to fold until she gets dealt a pair of aces, whose frequency is 1-in-220. Since (219/220)^1000 is about 1-in-100 (an exponent of 1,011 hits it almost exactly), she will expect at worst

      0.99*0.91*10,000 chips

      with the 10,000 coming from winning the all-in bet when she has 5,000 and Bob 7,000 chips. At that point Bob’s stack will be down to 2,000. Repeating the same strategy of folding up to 1000 more times would make it at worst 9,000 Alice to 3,000 Bob. At that point, Alice can survive a loss of the all-in bet with her aces, which would put things back to even. I think this is good enough to establish that as n gets higher, Alice’s winning probability approaches 91% from below.
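      (As a quick check of the arithmetic, a short sketch:)

          p_aa = 1 / 220                  # the 1-in-220 frequency used above
          print((1 - p_aa) ** 1011)       # ~0.0100, i.e. about 1-in-100
          print(0.99 * 0.91 * 10_000)     # ~9,009 chips, the bound above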

      Of course, in real games the ratio of initial stack to blinds is nowhere near this high, and then an exact computation of “delta” would be useful. This is also part of the reason why “shoving all-in” is becoming a bane of tournaments, as indicated by my quotation from the book Cowboys Full in another comment. Note also this recent Time Magazine article on changing trends in hold’em play.

  3. Anon permalink
    July 7, 2010 11:26 am

    The holdem strategy book “Kill Phil” is based almost entirely on this principle.

    • rjlipton permalink*
      July 7, 2010 2:07 pm

      I looked at this, but still would like to know the exact value of delta for the pure “all in” strategy. Seems like a nice calculation that someone should do.

      • Tirxu permalink
        July 7, 2010 5:56 pm

        After checking online, the probability of defeating AA with a random hand is 14.8%.

        However, this value is the exact value of delta, which is independent of n (as KWRegan remarks, as n goes to infinity, Bob’s probability of winning will tend from above to 14.8%).

    • July 7, 2010 3:07 pm

      Indeed, the book Cowboys Full by James McManus includes this reference—last night Amazon Search Inside wasn’t allowing me to see the full page:

      Quoting from pages 249–250:
      ————–
      But the most telegenic aspect of no-limit hold’em is that it allows both skilled and unskilled players to go all-in—one of our century’s favorite verbs—at any point in the hand…The judicious use of the all-in “shove” is an important weapon in any skillful no-limit player’s arsenal, of course. But far too many of these all-in hands, especially in tournaments, come down to a race between a pocket pair and overcards, which is similar in skill to a coin flip.

      The popularity of shoving has accelerated with the publication of Kill Phil and similar books, which recommend going all-in as the best way to neutralize the more nuanced skills of Phil Ivey, Phil Hellmuth, and other intimidating professionals. The profligate use of all-in bets before the flop is the reason Furlong and others have called no-limit hold’em “two-card chicken.” Their criticism gains force as more and more players, drawn to the game by TV, try to compensate for their lack of postflop talent and courage by pushing their entire stack forward whenever they’re dealt a pair or an ace. As Addington scoffs, “This style of play…re-introduces a substantial amount of luck into a game that had always favored the best player over the best card catcher.”
      ————

  4. Karl Bringmann permalink
    July 8, 2010 4:05 am

    I don’t understand why this delta is > 0. As I see it, there is a hand which is best in poker (is it called royal flush?). Now, here’s a strategy for Alice to play against the All-in-Bob: Alice goes in if she is dealt a royal flush, and folds otherwise. This means that she wins all the money (the minimum of the two amounts) if she goes in, and loses 1 otherwise.

    Let n go to infinity now. Then one can see that Alice is dealt a royal flush only after losing m times, where m is much smaller than n, so with the first royal flush she essentially doubles her money and nearly defeats Bob, and with the second royal flush she wins. As long as n is large enough.

    What did I get wrong? Or does this really show that Alice can win with probability going to 1 as n tends to infinity?

    Karl

    • rjlipton permalink*
      July 8, 2010 8:49 am

      Karl,

      Recall in Texas hold’em Alice has to make her decision seeing just two cards. The best hand is AA. Suppose when she gets AA poor Bob has 76 of spades. She will win a huge fraction of the time, but the key is she will not always win. If the five community cards have no pairs or aces and yet have 3 spades Bob wins: flush > AA.

      Do you see what happens? The finite size of the deck forces her to not always win.

      I think, by the way, that delta > 0 independent of n and is larger than 10%. I would love to see us figure it out. Someone could try and write a simple simulation program.

      • Karl Bringmann permalink
        July 8, 2010 11:40 am

        I see, I should have read the rules. Thank you Dick.

  5. Kirk Pruhs permalink
    July 15, 2010 10:00 am

    I have a somewhat related question. Let’s say you know the value of delta, the probability of winning a heads-up match given that you go all in all the time. If you are playing in a single-elimination heads-up tournament with n people, then your probability of winning the tournament is delta^(log_2 n). The (not fully formal) question is: if you were playing in a normal tournament (say the main event at the World Series) with n people, would you expect your chances of winning the tournament, if you go all in all the time, to be more or less than delta^(log_2 n)?

    • rjlipton permalink*
      July 15, 2010 12:15 pm

      Kirk,

      The delta is for head-to-head play. So it would apply if it were such a tournament.

      • Kirk Pruhs permalink
        July 16, 2010 9:47 am

        In a heads-up tournament, if you go all in all the time, I guess it is a reasonable approximation that the other player will only stay in when he/she has a large pair. So if you know the probability of cracking aces or kings, this can lead to an approximate probability of winning. But in a normal (not heads-up) tournament, where you have say 10 players per table, if you go all in all the time, it seems plausible that you might often face two other players, say one with queens/jacks and one with Ace-King. In fact it seems that the two other players would be more willing to get into this sort of race situation because they know there is a good chance that they will get your money as well, so it is better than a 50/50 proposition for them. The situation seems more subtle here, but it is not clear to me whether it is better or worse for the all-in player.

  6. Duncan permalink
    August 8, 2010 5:45 pm

    Alice’s problem appears to be a modified version of the Gambler’s Ruin problem from probability theory.

    In the Gambler’s Ruin problem, two players, Alice and Bob, play a game repeatedly until one of them has lost all their initial capital. Alice starts with an initial capital of $a, Bob starts with an initial capital of $b, and after each game the losing player pays $1 to the winner.

    If Alice’s current capital is $n: with probability p Alice wins the next game and her capital increases by $1 to $(n+1).
    With probability q=1-p Alice loses the next game and her capital decreases by $1 to $(n-1).

    The probability of ruin for Alice is given by the recurrence equation

    P[n] = p * P[n+1] + q * P[n-1]

    where the boundary conditions are P[0] = 1, P[a+b] = 0.

    Using the boundary conditions to find a solution to the recursion gives the probability of ruin for Alice. Alice’s probability of ruin equals Bob’s probability of success.

    One solution is

    P[a] = (1 – (p/q)^b) / (1 – (p/q)^(a+b))

    The probability of ruin for Bob is Q[b] = 1 – P[a].

    In the Texas hold ’em example, for a wager of $w:
    with probability p Alice wins a game and her capital becomes $(n+w);
    with probability q Alice loses a game and her capital becomes $(n-w);
    with probability r Alice draws a game and her capital remains the same;
    with probability s Alice folds and her capital becomes $(n-1).

    p+q+r+s=1.

    Alice’s probability of ruin is given by

    P[n] = p * P[n+w] + q * P[n-w] + r * P[n] + s * P[n-1]

    Again, the boundary conditions are P[0] = 1, P[a+b] = 0.

    The wager is w = $n if n <= a+b-n, and w = $(a+b-n) otherwise; that is, w = min(n, a+b-n).

    Solving the recursion will give an equation for Alice’s probability of ruin. Again, Alice’s probability of ruin equals Bob’s probability of success.

    The equation for Alice’s probability of ruin can be rearranged to give the recurrence equation

    P[n+w] – P[n] = (q/p) * (P[n] – P[n-w]) + (s/p) * (P[n] – P[n-1])

    Let w=n,

    P[2n] – P[n] = (q/p) * (P[n] – P[0]) + (s/p) * (P[n] – P[n-1])

    Using the initial estimate P[n]=x^n and evaluating at n=1 gives the algebraic equation

    x^2 – x = ((q+s)/p) * (x – 1)

    The two solutions to this equation are

    x[1] = 1 and x[2] = (q+s)/p

    The general solution for P[n] is then a linear combination

    P[n] = c[1] * (x[1])^n + c[2] * (x[2])^n

    Using the boundary conditions P[0] = 1 and P[a+b] = 0 we have two simultaneous equations which can be solved to find values for c[1] and c[2]. Using these values gives the equation for P[n]

    P[n] = (1 – y^(a+b-n)) / (1 – y^(a+b))

    where y = p / (q+s), and p, q, and s are Alice’s probability of winning a game, losing a game, or folding, respectively. These probabilities depend on Alice’s playing strategy when faced with Bob’s strategy.

    Assume that Alice adopts a strategy where she plays if her private hand is AA and folds otherwise. Her probability of folding is then the probability that her private hand is not AA.

    s=Pr{not AA} = 1 – Pr{AA} = 1 – 1 / 221 = 220 / 221.

    Her probability of winning, p, is the probability that she wins when her private hand is AA.
    Her probability of losing, q, is the probability that she loses when her private hand is AA.
    Calculating p and q, we can find y = p / (q+s). Once we know y, we can calculate Alice’s probability of ruin given her initial fortune $a:

    P[a] = (1 – y^b) / (1 – y^(a+b))

    Bob’s probability of success is then P[a].
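    As a sanity check on this closed form, one can also simulate the AA-only strategy directly. Here is a small Monte Carlo sketch; the AA-versus-random equity numbers are rough assumptions, not exact values.

        import random

        def aa_only_match(n=1000, p_win=0.85, p_tie=0.005, pr_aa=1/221):
            # One simulated match: Alice folds everything except AA, then
            # calls the all-in.  Equity numbers are rough assumptions.
            alice, total = n, 2 * n
            while 0 < alice < total:
                if random.random() < pr_aa:            # dealt aces: call
                    stake = min(alice, total - alice)
                    r = random.random()
                    if r < p_win:
                        alice += stake
                    elif r < 1 - p_tie:                # a loss; ties change nothing
                        alice -= stake
                else:
                    alice -= 1                         # fold, losing the ante
            return alice == total

        trials = 10_000
        print(sum(aa_only_match() for _ in range(trials)) / trials)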

  7. JollyJoker permalink
    August 9, 2010 5:45 pm

    I don’t know if anyone reads comments on old posts here, but I stumbled into this post while reading about the new P != NP claim, have played a little poker and thought I’d write down my sketch of an algorithm for this.

    First, you need a chart of equity for each of the 169 distinct starting hands vs a range of any two. I’ll cheat and give a link to http://www.pokerstove.com for this. Sort the starting hands by equity. For any decision to call or fold, your calling range will be one of these hands and all hands above it. For every hand in this list you have the fraction of starting hands this range represents (your % chance of having a hand you’ll call with) and the fraction of boards your calling range will win the hand on (your % chance of winning the all-in).

    Here comes a simplifying assumption that will initially apply to any real game but not necessarily hold when blinds go up in a real heads-up tournament: The players start with a stack of n big blinds each, where n is an integer and the blinds never change. In particular, the stacks start as and always remain integer multiples of the big blind. I’ll keep it as n=100 just for simplicity.

    Every possible state of the game is determined by your stack size, with your opponent’s stack size being 200 – yours. Every one of these 200 states can change into the same or three other states: you fold and lose your big blind (-1), you call and draw, you call and lose the smaller of your stack or your opponent’s stack (-n or -(200-n)), or you call and win the smaller stack. E.g. if you have 30, your opponent has 170 and you can end up with 29, 30, 0 or 60.

    Around here my knowledge of algorithms / game theory becomes insufficient. You have a graph of 200 nodes representing game states, with three edges leaving each node. Each node has a (not yet determined) range you will call with, giving you the chances for fold/win/draw/lose. The first node, where you have only the one big blind, gives you an automatic loss if you fold, so calling is always better. Your range is all cards, or 32o+, giving you around 29% win, 64% loss, 6% draw. A draw leaves you in the same node, so you can add half that fraction to your chance of losing from that node, but you can’t calculate equity fully for that node since you don’t know your chance of ending up in a loss from the second node.

    Each of the 200 nodes has three easy-to-calculate edges to other nodes, plus a range of cards you should call with. You can calculate the equity for each node if you know the equity of the target nodes, since that gives you the optimal choice of calling range for that node. Presumably you could seed all nodes with some initial equity (% of chips in play, for example) and go through the graph repeatedly picking the optimal calling range based on the current “best guess” equity, but I’m not sure if this would converge or get stuck in a loop. Intuitively, I’d assume it does converge.

    Is there a simple known algorithm for solving something like this?

    I’d be really interested in seeing the results (a list of calling ranges vs. stack sizes); Nash equilibrium calculators use a simple model called ICM to estimate equity based on chip counts and aren’t very useful for figuring out maximally exploitative strategies against drunken degenerates 😉

    • JollyJoker permalink
      August 9, 2010 5:49 pm

      Your range is all cards, or 32o+, giving you around 29% win, 64% loss, 6% draw.

      Ah, forget the numbers. These are for exactly 32 offsuit vs any two rather than the full range. Obviously any two vs any two would give you 50% equity.

  8. marv permalink
    August 11, 2010 10:42 am

    This problem is well studied (i.e. how well optimal play does against a player who goes all-in every hand, and what the structure of the optimal strategy is).

    For example, with $1 antes and $1000 starting stacks, optimal play still wins only 82.8% of the time.
    Interestingly the optimal play with a pair of kings is to call the all-in bet if and only if our current stack is in the range [975,1001], so we should fold pocket kings (the second best starting hand) most of the time, but not on the first hand of the match.

    A full analysis including some remarkable graphs is at
    http://archives1.twoplustwo.com/showflat.php?Cat=0&Number=6305932&page=0&fpart=all&vc=1

    Marv

  9. August 13, 2010 5:38 am

    Is there a grammar error in “Alice will win less much than 100% of the time”?

    • rjlipton permalink*
      August 13, 2010 8:19 am

      delta,

      Yes. Thanks. Hard to proof these posts given time constraints.

      • August 13, 2010 4:41 pm

        Glad it’s not some technical usage that I’m unfamiliar with. 🙂

  10. August 13, 2010 7:35 am

    @Dick: Richard Garfield (creator of card game Magic: The Gathering) wrote what I thought was a very interesting article on “how much luck should you design into your game” (paraphrasing; Game Developer Magazine Nov-2006).

    He identifies these advantages: “First, high-luck games broaden the range of competition. Second, luck removes players’ ego crutches. And third, luck increases the variety of the gameplay.” He also makes this assertion: “Historically, games usually evolved in such a way as to reduce the amount of luck in them. Even chess at one time had dice. The people who are in a position to modify a game are likely to be very good at it, and the sort of modifications they will be drawn toward are the ones that showcase their talents and their friends’ talents–although they, of course, are all top players.”

    So perhaps the irreducible chance for a weak player to win at a Texas Hold ‘Em tournament precisely contributes to its widespread popularity (as compared to chess, backgammon, etc.)?

    @Marv: It seems like the linked page is only discussing the equity of individual hands (like KK) at particular stages of the game, not the overall chance of winning? It doesn’t seem like we have a calculation for delta yet. For example: I don’t see the 82.8% figure (nor its complement 17.2%) anywhere on that page?

    • marv permalink
      August 14, 2010 6:58 am

      The linked page contains “EV at 1000 chips of 0.8281812378.” This is the skilled player’s equity at the start of the tournament (i.e. his probability of eventually getting all the chips), not just if he holds KK.

      Later in the linked page there’s a nice graph of how the optimal player’s equity varies with chips. Search for “Here is a graph of your equity with starting stacks of 1000”.

      (Correction to my previous post: I meant to say that [975,1001] is the folding set for KK, as the linked page makes clear.)

      Marv

  11. August 14, 2010 3:59 pm

    @Marv: Huh, it was hard to spot that post in the middle of the rest of the discussion, all about KK. I’ll take your word for the result; thanks for the clarification.

