Brouwer’s Fixed Point Theorem is Unstable
Games and economic problems are often unstable

Luitzen Brouwer is famous for his seminal fixed point theorem and also for a philosophy that seems at variance with his own theorem. One of the great things about people is we can hold inconsistent ideas in our heads, and not be too troubled by them.
Sometimes I’ve believed as many as six impossible things before breakfast.
Lewis Carroll
Brouwer created the mathematical philosophy of intuitionism. This philosophy insists on a very constructive approach to mathematics, which is interesting since his proof of his famous theorem is non-constructive. Intuitionism can be viewed as a refinement of classic logic, in that finer distinctions can be made. For example, in Brouwer’s view the statements $\forall x \, P(x)$ and $\neg \exists x \, \neg P(x)$ are distinct. The first says that $P(x)$ holds for all $x$, while the latter says that we cannot find a counter-example to $P(x)$. Classically, these are the same, but in intuitionistic logic they are different. In passing I should mention that Kurt Gödel made important contributions to the relationship between classic and intuitionistic logic: I will discuss that another day.
Back to Brouwer’s Theorem: Vijaya Ramachandran was a PhD student of mine years ago. She used the Brouwer Fixed Point Theorem (BFT) in her thesis, which I always envied. I have never gotten to use this great theorem in any of my papers. Perhaps one day.
At the time we were studying the theory of VLSI design. Often the first design of a chip would run too slowly. The reason usually was that there were wires that were too “long”, and therefore the signal took too long to move from one place on the chip to another. The method that was popular at the time was to re-size the driver for the wire. If a driver was made larger, the signal would propagate faster down the wire. Thus, there were algorithms that computed how much larger the drivers should be and changed the chip design accordingly.
Vijaya noticed a logical problem: the new re-sized chip might still be too slow. The issue is that re-sizing the drivers makes other wires longer, and this may slow them down. So re-apply the algorithm. But wait. Will this ever converge? Or could the process diverge? Or yield a chip that is too large to be practical?
Enter BFT. The theorem states that any continuous map of the closed ball to itself has a fixed point. This was exactly what she needed to prove that the re-sizing process always eventually converges.
While I have never used BFT, there is a theorem that may not be as well known to our community that I have used. Why I used it is another story, for another day. It is called Smith’s Fixed Point Theorem:
Theorem: If $f$ is a continuous map of the closed ball to itself such that for some prime $p$, and all $x$, $f^{(p)}(x) = x$, then $f$ has a fixed point.
Note, $f^{(1)}(x) = f(x)$ and $f^{(2)}(x) = f(f(x))$ and so on.
The Stability Issue
Today, I want to point out an issue regarding proofs based on BFT and related theorems. In general, the fixed points of a map are not stable under perturbations of the map itself. This is a point that often seems to be missed. I once looked through an entire monograph on fixed point theorems of all kinds and could find no discussion of the inherent instability of BFT.

This already happens in one dimension; a simple example is the following: In the Figure above we show a continuous function $f$ from $[0,1]$ to itself, and have circled its two fixed points. On the right we have drawn a map $g$ which is a slight perturbation of $f$. Observe that one fixed point of $f$ has disappeared in $g$: simply because that fixed point of $f$ was at a point where the curve $y = f(x)$ and the line $y = x$ just touched each other but did not cross.
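To make this concrete, here is a minimal Python sketch of the same phenomenon. The particular functions below are just one convenient choice, with a tangential fixed point at $x = 0.3$ and a crossing fixed point at $x = 0.8$; a perturbation of sup-norm size $0.001$ destroys the first and keeps the second.

```python
import numpy as np

# A continuous map of [0, 1] to itself with two fixed points:
#   a "crossing" fixed point at x = 0.8, and
#   a "tangential" fixed point at x = 0.3, where y = f(x) touches y = x without crossing.
def f(x):
    return x + 0.2 * (0.8 - x) * (x - 0.3) ** 2

# g is a perturbation of f at sup-norm distance only 0.001.
def g(x):
    return f(x) + 0.001

def count_fixed_points(h, tol=1e-6, n=200001):
    """Count clusters of grid points where h(x) is within tol of x."""
    xs = np.linspace(0.0, 1.0, n)
    near = np.abs(h(xs) - xs) < tol
    # Each maximal run of consecutive "near" points is one (approximate) fixed point.
    runs = np.diff(near.astype(int))
    return int(np.sum(runs == 1) + near[0])

print("fixed points of f:", count_fixed_points(f))  # 2
print("fixed points of g:", count_fixed_points(g))  # 1 -- the tangential fixed point is gone
```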
Game and Economic Theory
Classical game theory and economics rely heavily on fixed point theorems to prove the existence of various solution concepts. The celebrated result of Nash proving the existence of equilibrium strategies in non-zero sum games is a classic example for which the only proofs of existence known are via Brouwer’s or Kakutani’s fixed point theorems. Most extensions of Nash equilibria also depend crucially on these theorems.
To take another example, the existence of the Arrow-Debreu equilibrium prices in a market model was first proved via fixed point theorems, and only later were proofs via convex programming formulations provided in some special cases. Even the most basic solution concept in game theory, the min-max solution of a zero-sum game, was initially proved to exist by von Neumann via Kakutani’s fixed point theorem, and the connection to linear programming (LP) became evident only later.
This led me, together with Vangelis Markakis and Aranyak Mehta, to examine results in game theory and in economic theory with an eye toward stability. The bottom line is that some results in game and economic theory are stable and others are not. I think this distinction is interesting. If, for example, a problem is unstable, then one must be careful in discussing what it means to have a solution. Even more, what good is an approximate solution if the “real” solution can be very far away?
Zero Sum Games
Consider a $2$-player zero-sum game. As is usual, I will refer to the two players as the row player and the column player respectively. A zero-sum game is determined by a payoff matrix $M$, which determines for every choice of strategies the payment that the column player makes to the row player. Let’s start by looking at the famous min-max solution introduced by von Neumann. The payoff of the row player in a min-max solution is known as the value of the game.
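In symbols, writing $x$ and $y$ for the mixed strategies of the row and column players, the value of the game with payoff matrix $M$ is
$$v(M) \;=\; \max_{x} \min_{y} \; x^{\top} M y \;=\; \min_{y} \max_{x} \; x^{\top} M y,$$
where the equality of the two expressions is exactly von Neumann’s min-max theorem.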
Theorem: Let $M_1$ and $M_2$ be two zero-sum games with payoffs bounded between $-1$ and $1$, i.e. $|M_k(i,j)| \le 1$ for all $i,j$ and $k = 1,2$. Let $\max_{i,j} |M_1(i,j) - M_2(i,j)| \le \epsilon$. Let $v_1$ be the value of game $M_1$ and $v_2$ be the value of game $M_2$. Then
$$|v_1 - v_2| \le \epsilon.$$
Proof: By writing the min-max solution as an LP in the standard manner we get that $v_1$ is the optimum value of the following LP (1):
$$\max v \quad \text{subject to} \quad \sum_i x_i M_1(i,j) \ge v \ \text{ for all } j, \qquad \sum_i x_i = 1, \qquad x \ge 0.$$
Similarly $v_2$ is the optimum value of the LP (2):
$$\max v \quad \text{subject to} \quad \sum_i x_i M_2(i,j) \ge v \ \text{ for all } j, \qquad \sum_i x_i = 1, \qquad x \ge 0.$$
Let $x$ be a mixed strategy for the row player which yields the value $v_1$ in LP (1). Since $|M_1(i,j) - M_2(i,j)| \le \epsilon$ and $\sum_i x_i = 1$, we have that for every column $j$,
$$\sum_i x_i M_2(i,j) \ \ge\ \sum_i x_i M_1(i,j) - \epsilon \ \ge\ v_1 - \epsilon.$$
Hence $(x, v_1 - \epsilon)$ is a feasible solution for LP (2). Hence $v_2$, the value of game $M_2$, is at least $v_1 - \epsilon$. But symmetrically, we get that $v_1 \ge v_2 - \epsilon$. Hence we get $|v_1 - v_2| \le \epsilon$. $\blacksquare$
The above theorem shows that for any $\epsilon > 0$, perturbing the entries of a payoff matrix by at most $\epsilon$ results in a game in which the value is $\epsilon$-close to the original value. That is, zero-sum games are stable in the $\ell_\infty$-norm with respect to the value of the game.
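Here is a minimal numerical sketch of the theorem, computing the two values with the standard LP and SciPy’s solver; the random $5 \times 5$ payoff matrix and the perturbation size are arbitrary choices.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(M):
    """Value of the zero-sum game with payoff matrix M (row player receives M[i, j])."""
    m, n = M.shape
    # Variables: x_1, ..., x_m (row player's mixed strategy) and v (the game value).
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximize v  <=>  minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])      # for every column j: v <= sum_i x_i * M[i, j]
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                              # sum_i x_i = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]      # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1]

rng = np.random.default_rng(0)
M1 = rng.uniform(-1, 1, size=(5, 5))
eps = 0.01
M2 = M1 + rng.uniform(-eps, eps, size=M1.shape)    # entrywise perturbation of size at most eps
v1, v2 = game_value(M1), game_value(M2)
print(v1, v2, abs(v1 - v2) <= eps)                 # the two values differ by at most eps
```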
Note that although LP is known to be equivalent to solving a zero-sum game, linear programs are in general not stable with respect to their optimal value. It is easy to construct examples in which slight perturbations result in an LP becoming infeasible: for instance, the constraints $x \ge 0$ and $x \le 0$ have the feasible point $x = 0$, but perturbing the second constraint to $x \le -\epsilon$ makes the program infeasible. Thus, the fact that zero-sum games are stable is a special property.
Non-Zero Sum Games
In general noncooperative games, there is a payoff matrix $R$ for the row player and a matrix $C$ for the column player. The dominant solution concept is the notion of a Nash equilibrium, which is a pair of strategies such that no player has an incentive to deviate unilaterally. This next theorem, which is really a family of examples, was found initially by Aranyak:
Theorem: There exists a $2 \times 2$ game and an $\epsilon$-perturbation of its payoff matrices $(R, C)$, such that the payoffs of the Nash equilibria in the resulting game are arbitrarily far away from the payoffs of the Nash equilibria of the original game.

Proof: Consider the following game: the payoff matrix for the row player is $R(1,1) = R(1,2) = \epsilon$, $R(2,1) = R(2,2) = 0$. For the column player, $C(1,1) = 1$, $C(1,2) = 0$, $C(2,1) = 0$, $C(2,2) = -1$.
This refers to the above Figure. Since the first row of $R$ is dominating, it follows that the only Nash equilibrium of the game is achieved when the row player chooses the first row and the column player chooses the first column. This results in a payoff tuple of $(\epsilon, 1)$. Suppose we modify $R(2,1)$ and $R(2,2)$ to $\epsilon$, and $R(1,1)$ and $R(1,2)$ to $0$ (right side of Figure). The second row is now dominating and the payoffs in the (unique) Nash equilibrium of the game are $\epsilon$ for the row player and $0$ for the column player. To get “arbitrarily far” away, we need only re-scale the game. $\blacksquare$
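The example can be checked mechanically: enumerate the pure strategy profiles of the two $2 \times 2$ games and test the best-response conditions. The matrices below use the same numbers as in the proof, with $\epsilon = 0.01$.

```python
import numpy as np

def pure_nash(R, C):
    """All pure Nash equilibria of a bimatrix game (row payoffs R, column payoffs C)."""
    eqs = []
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            row_best = R[i, j] >= R[:, j].max()    # the row player cannot gain by deviating
            col_best = C[i, j] >= C[i, :].max()    # the column player cannot gain by deviating
            if row_best and col_best:
                eqs.append(((i, j), (float(R[i, j]), float(C[i, j]))))
    return eqs

eps = 0.01
R1 = np.array([[eps, eps], [0.0, 0.0]])   # original game: first row dominates
C1 = np.array([[1.0, 0.0], [0.0, -1.0]])

R2 = np.array([[0.0, 0.0], [eps, eps]])   # eps-perturbation of R1: second row dominates
C2 = C1                                   # column player's payoffs are unchanged

print(pure_nash(R1, C1))   # [((0, 0), (0.01, 1.0))]  -- payoffs (eps, 1)
print(pure_nash(R2, C2))   # [((1, 0), (0.01, 0.0))]  -- payoffs (eps, 0)
```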
Another Example
For another example, consider a special case of the Leontief utility model in which each agent comes with an initial endowment of one unit of a distinct good, and the utility of agent $i$ for a bundle $x = (x_1, \dots, x_n)$ is the minimum of the linear functions $x_j / f_{ij}$, where $f_{ij}$ is the coefficient of good $j$ with respect to agent $i$. Hence the input to the problem is the matrix $F = (f_{ij})$.
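For concreteness, here is a tiny sketch of this utility function; the matrix $F$ below is just an arbitrary illustrative example.

```python
import numpy as np

def leontief_utility(F, x):
    """Utility of each agent i for bundle x: the minimum over goods j of x[j] / F[i, j]."""
    return (x / F).min(axis=1)

F = np.array([[1.0, 2.0],
              [2.0, 1.0]])       # F[i, j]: coefficient of good j for agent i (assumed positive)
x = np.array([1.0, 1.0])         # a bundle of the two goods
print(leontief_utility(F, x))    # [0.5 0.5]
```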
Theorem: The problem of computing a market equilibrium in a Leontief economy is not stable.
This follows directly from the paper “The complexity of equilibria: Hardness results for economies via a correspondence with games” by Bruno Codenotti, Amin Saberi, Kasturi Varadarajan, and Yinyu Ye.
Open Problems
The min-max value of a zero-sum game is stable, while the Nash equilibria and other problems are not stable under perturbations of the matrix entries. It seems important to carry out a stability analysis for other economic solution concepts. In particular:
- Are market equilibrium prices in the Arrow-Debreu model of markets stable?
- Is the Shapley value stable?
- Pick your favorite other economic problem: is it stable?
One approach to the first question is based on the beautiful result of Nikhil Devanur, Christos Papadimitriou, Amin Saberi, and Vijay Vazirani in the important case of linear utilities. Is their algorithm stable? Another approach would be to use the explicit convex programs that have been discovered for this problem.
Another important question is to investigate whether there are any connections between stability and efficient computation.
In our original manuscript (with Vangelis and Aranyak) we also thought about reductions between economic problems. We had an idea that, perhaps, there could be no reduction from an unstable problem to a stable one, under some reasonable notion of reduction. This also seems like an interesting direction to follow.


Great post! Thanks!
Thanks…
Very interesting exposition.
Hi,
I came across this post a little late in the day, I suppose. Probably what I am going to say is obvious. Brouwer’s fixed point theorem is only an existence theorem and Brouwer disowned it himself precisely because it is not ‘constructive’. But he came up with an intuitionistic version of the fixed point theorem in 1952. For the classical BFT and his intuitionistic version to be equivalent, the validity of the Bolzano-Weierstrass theorem is required, which unfortunately is not intuitionistically valid. Here is the two-page classic: http://rspa.royalsocietypublishing.org/content/213/1112/1.short
In the context of economic theory, Velupillai has written a lot on this topic and on the non-constructivities and uncomputabilities of core areas of economics such as Computable General Equilibrium and Nash equilibrium.
Hi,
Very interesting post. Do you have the results reported in article form?