Rumors and Playing Games

November 8, 2009


A rumor from FOCS on approximate Nash Equilibrium is partially true


Paul Spirakis is a senior researcher who has made many important contributions to theory. He has hundreds of publications that cover many areas of theory. What is so impressive about Paul is that he has been able to blend theory and practice in a very fruitful way. This is a pretty unique skill that few have.

Today I want to talk about a new result of his on finding approximate Nash Equilibrium for non-zero sum games.

The coolest rumor I heard at this year’s FOCS was that “someone” had proved a sub-exponential bound on the running time of an algorithm for approximate Nash Equilibrium. This was quite exciting and it is what I refer to in my discussion of Theory Day. I eventually tracked the rumor down and discovered the source was the pretty paper that I plan to discuss today. The paper does not quite have such a result, but definitely makes an important step in our understanding of games.

The rumor that I heard at FOCS turned out to be wrong. Oh well. That’s the nature of rumors: by definition a rumor is not always true; otherwise, it would not be a rumor. At least this rumor was partially true.

Years ago, in 1969, there was a great rumor that started right after Steve Cook proved his famous theorem that characterizes polynomial time. He proved:

Theorem: The following three conditions are equivalent for any language {L}:

  1. {L} is accepted by some deterministic auxiliary pushdown machine with logspace storage.
  2. {L} is accepted by some non-deterministic auxiliary pushdown machine with logspace storage.
  3. {L} is in P.

This is a beautiful result, which in my opinion was not expected. An auxiliary pushdown automaton with logspace storage is a machine that can read its input in a two-way fashion, and store information on a regular Turing tape of length at most {O(\log n)}. In addition, the machine has a pushdown, which it can also use to store information.

Note, a pushdown only allows operations on the top: the top symbol can be read, a symbol can be pushed onto the pushdown, or a symbol can be removed from the top. That’s all. Symbols below the top symbol cannot be read without popping off the top symbol.

Almost immediately someone well known, whom I will call X, claimed that Steve’s result could be extended “slightly.” In particular, X claimed that the same theorem could be proved with the pushdown replaced by a stack. A stack is not the same as a pushdown: it has one important extra property. A stack allows the symbols below the top to be read without popping anything off, but only in a read-only mode. That is, symbols inside the stack can only be read; symbols can be changed at the top as usual.
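
To make the difference concrete, here is a toy Python sketch of my own, modeling only the storage and not the full machine models: a pushdown exposes just its top symbol, while a stack additionally allows the symbols below the top to be read, but not changed.

```python
class Pushdown:
    """Storage that only allows access to the top symbol."""
    def __init__(self):
        self._cells = []

    def push(self, symbol):
        self._cells.append(symbol)

    def pop(self):
        return self._cells.pop()

    def top(self):
        return self._cells[-1] if self._cells else None


class Stack(Pushdown):
    """Like a pushdown, but the interior may also be scanned read-only."""
    def read(self, i):
        # Read the i-th symbol from the bottom without popping anything off.
        return self._cells[i]
```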

We could not figure out how the new proof went, and worse, X went on a camping vacation. So he was unreachable—this was well before cell phones. The theory community was all abuzz, since it was believed that an auxiliary stack machine could accept more than P. It would have been a great result. Clearly, X did not see this, or he would have been more circumspect about his claim.

Finally, X returned and was asked how his proof went. He explained the proof, and it was immediately clear that it did not work. He had forgotten one small point: a stack machine, unlike a pushdown machine, could run for more than exponential time. [I fixed an error here.] He had to retract his claim. Sometimes rumors are false.

Now let’s turn to non-zero sum games and a new approach to getting approximate Nash Equilibrium.

Win-Lose Games

The paper of Haralampos Tsaknakis and Paul Spirakis is mostly a new approach to finding an approximate Nash Equilibrium (NE) for a general two-player game through a reduction to a restricted class of games.

This class is special, but still quite interesting. It is the class of symmetric win-lose games. Such a game is given by a single {n} by {n} {0}-{1} matrix {A}: the matrix {A} is the payoff for the row player and {A^{T}} is the payoff for the column player.

The restriction to {0}-{1} values is why it is called a “win-lose” game, and the fact that one player uses the transpose of the other player’s matrix is why it is called symmetric. Finding a Nash Equilibrium even in win-lose games is as hard as the general case, thanks to the beautiful result of Tim Abbott, Daniel Kane, and Paul Valiant.

Their proof is quite clever. As someone who has worked a bit on games, I was impressed that they could prove this result. In many complexity situations the {0}-{1} case is universal; however, proving that for games is not so easy. My intuition was that this, while probably true, would be very hard to prove.

Approximate Solutions

Since the problem of finding an exact Nash Equilibrium even for win-lose games is as hard as solving the general case, it is natural to look for approximate NE’s. This is exactly what Haralampos and Paul do in their paper. They compare their new result with a result of Evangelos Markakis, Aranyak Mehta, and myself, which we proved in an earlier paper:

Theorem: For any {\epsilon>0} there is an algorithm that finds an {\epsilon} approximate Nash equilibrium for a symmetric win-lose game of size {n}, in running time bounded by { n^l } where {l = O(\log n / \epsilon^{2})}.
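
For concreteness, here is a minimal sketch in Python of the quantity being approximated. This is just the standard definition of an {\epsilon}-approximate equilibrium, not the algorithm from either paper: for a candidate pair of mixed strategies in the game {(A, A^{T})}, the approximation quality is the larger of the two players’ incentives to switch to a best pure response.

```python
import numpy as np

def approximation_eps(A, x, y):
    """Return the smallest eps for which (x, y) is an eps-approximate
    Nash equilibrium of the symmetric win-lose game (A, A^T).

    A is an n-by-n 0-1 payoff matrix for the row player; the column
    player's payoff matrix is A^T.  x and y are probability vectors.
    """
    A = np.asarray(A, dtype=float)
    row_payoff = x @ A @ y            # row player's expected payoff
    col_payoff = x @ A.T @ y          # column player's expected payoff
    # Incentive to deviate to the best pure response, for each player.
    row_regret = np.max(A @ y) - row_payoff
    col_regret = np.max(A @ x) - col_payoff
    return max(row_regret, col_regret)

# Tiny usage example: uniform play in a 3-by-3 cyclic win-lose game.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
x = y = np.ones(3) / 3
print(approximation_eps(A, x, y))     # ~0: uniform play is an exact NE here
```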

The main result of Paul’s paper is:

Theorem: For any {\epsilon>0} there is an algorithm that finds an {\epsilon} approximate Nash equilibrium for a symmetric win-lose game of size {n}, in running time bounded by { n^l } where {l = O(m / n\epsilon)}. The value of {m} is equal to:

\displaystyle  \sum_{\lambda_{k} > 0} \lambda_{k}

where {\lambda_{1}, \dots, \lambda_{n}} are the eigenvalues of the graph induced by the game matrix.

This result is neat, in my opinion, since it addresses the barrier discovered by Constantinos Daskalakis and Christos Papadimitriou in their paper titled “On oblivious PTAS’s for Nash Equilibrium.” Constantinos and Christos show essentially that to get good approximations for NE’s, one must look at the structure of the payoff matrices: it is not enough to just examine the strategies as we did in our result.

Sparse Games

I am currently running an open problem seminar and we discussed Paul’s paper in a recent class. We noticed first that the value of {m} in his theorem could be changed from

\displaystyle  \sum_{\lambda_{k} > 0} \lambda_{k}

to

\displaystyle  \sum_{k=1}^{n} | \lambda_{k} |.

This follows since the sum of eigenvalues is {0}, so the new measure is exactly twice the old, and this will have no effect on the results—all can be swept under the constant in the big-O notation.
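
Here is a quick numerical sanity check of this observation, just my own illustration, assuming the induced graph has no self-loops so that the trace of {A}, and hence the sum of the eigenvalues, is {0}:

```python
import numpy as np

# A small random 0-1 adjacency matrix (symmetric, zero diagonal).
rng = np.random.default_rng(0)
n = 50
U = np.triu(rng.integers(0, 2, size=(n, n)), k=1)
A = U + U.T

lam = np.linalg.eigvalsh(A)
pos_sum = lam[lam > 0].sum()      # the old measure
abs_sum = np.abs(lam).sum()       # the new measure
print(abs_sum, 2 * pos_sum)       # the two agree, up to rounding
```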

Then, we considered sparse games: games where the induced graph has {O(n)} edges. For such games the value of the critical parameter {m} is bounded by {O(n)}. The proof of this is quite simple:

\displaystyle  \begin{array}{rcl}  	m &=& \sum_{|\lambda_{k}| \le 1} | \lambda_{k} | + \sum_{|\lambda_{k}| > 1} | \lambda_{k} | \\ 		&\le& n + \sum_{|\lambda_{k}| > 1} \lambda_{k}^{2} \\ 		&\le& n + \sum_{k=1}^{n} d_{k} \end{array}

where {d_{k}} is the degree of the {k^{th}} vertex; the last step uses {\sum_{|\lambda_{k}| > 1} \lambda_{k}^{2} \le \sum_{k} \lambda_{k}^{2} = \mathrm{trace}(A^{2}) = \sum_{k} d_{k}}. This last quantity is at most {O(n)} if the graph is sparse, since the sum of the degrees is twice the number of edges.
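
The same kind of quick check, again only an illustration, for the sparse case: for a random graph with {O(n)} edges, the energy stays comfortably below {n} plus the sum of the degrees, as the argument above guarantees.

```python
import numpy as np

# A random sparse graph on n vertices with about 3n edges.
rng = np.random.default_rng(1)
n = 500
A = np.zeros((n, n))
while A.sum() < 2 * 3 * n:                # each edge contributes 2 to the sum
    i, j = rng.integers(0, n, size=2)
    if i != j:
        A[i, j] = A[j, i] = 1

lam = np.linalg.eigvalsh(A)
energy = np.abs(lam).sum()                # the parameter m of the theorem
degrees = A.sum(axis=1)
print(energy, n + degrees.sum())          # energy is well below n + sum of degrees
```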

Based on these simple insights my students Atish Das Sarma, Subrahmanyam Kalyanasundaram, and Shiva Kintali believe that they can prove the following theorem:

“Theorem”: There is a PTAS for finding approximate NE’s for the class of symmetric win-lose games whose induced graph has bounded degree.

Unfortunately, the proof of this, while it looks like it will work, does not seem to be able to handle sparse graphs. The reason is that a sparse graph could fail to have some technical properties that Paul’s theorem requires. We are hopeful that this can be fixed.

For random sparse symmetric win-lose games they do believe that it should be possible to prove the following theorem:

“Theorem”: There is a PTAS for finding approximate NE’s for the class of random sparse win-lose games that works with high probability.

There has already been some nice work on random games by Imre Bárány, Santosh Vempala, and Adrian Vetta. Their result works for dense games where the values are Gaussian, while we are looking at sparse games with {0}-{1} values.

Open Problems

The major open problem is: is there an algorithm that runs in time {n^{O(1/\epsilon)}} for any non-zero sum game and finds an {\epsilon} approximate Nash equilibrium? Can one even prove this for a special class of games?

Comments
  1. November 9, 2009 6:06 am

    Hi Dick,

    A comprehensive and thought-provoking post as usual.

    Re: your last paragraph: is there some reason to view n^{O(1/eps)} as the right target? Going by your results with Vangelis and Aranyak (and having worked a bit with Vangelis on this), n^{O(1/eps^2)} appears to me to be a more natural goal to shoot for.

    • rjlipton
      November 9, 2009 7:35 pm

      That is a good point. We have a crazy idea that we could perhaps get n^O(1/eps log log n), but could not make it work. The thought was to try to apply our idea recursively somehow. We never could get it to work.

  2. Alex
    November 9, 2009 5:49 pm

    Very interesting reading.

    One question though:

    “Unfortunately, the proof of this, while it looks like it will work, does not seem to be able handle sparse graphs. The reason is that a sparse graph could fail to have some technical properties that Paul’s theorem requires. We are hopeful that this can be fixed.”

    Isn’t the bound of O(n) you showed on m enough to get a PTAS for sparse
    games using the result of Tsaknakis & Spirakis? A hint as to what goes wrong would be much appreciated.

    • rjlipton
      November 9, 2009 7:33 pm

      Their paper needs some technical conditions that I think should be okay, but I am not sure yet. For example, they need that the matrix has no all-0 rows or columns, and also a certain “no domination” property. That is what we are working on.

  3. rjlipton
    November 22, 2009 8:37 am

    The sum of absolute values of the eigenvalues of a graph has a name. It is called the “energy” of the graph.

    See this for a definition, or just use Google. They prove a number of interesting basic results about the energy. They do not, I think, yet know which graphs have energy bounded by O(n), which would have consequences for the study here.
