
An EXPSPACE Lower Bound

April 8, 2009


Vector addition reachability, counters and nondeterministic tricks


Albert Meyer, in the first decade of his career, was one of the top theorists in the world, yet you may not know his work in detail. The cause, most likely, is that years ago he closed down his research program in complexity theory, moving into the area of the logic and semantics of programming languages. While he was a theorist, he helped invent the polynomial time hierarchy, proved lower bounds on real-time Turing machines, showed huge lower bounds on various logical theories, proved the sharpest hierarchy theorem known, and more. Much of this was joint work with some of the best of the best: Mike Fischer, Mike Paterson, Charlie Rackoff, Larry Stockmeyer, and others. I plan future posts on several of their results.

I met Albert when he spent a year at Carnegie Mellon: he taught me basic recursion theory. I remember in class one day he mentioned the infamous Collatz problem on the mapping {x \rightarrow 3x+1}: I spent several weeks trying to prove something about this problem with no success at all. Albert listened to my attempts, but I got nowhere. Albert later moved to MIT, where he has been ever since.

Today’s post is about some joint work that we did on a decision problem called the Vector Addition Reachability Problem. This problem has an interesting solution, an interesting history, and moreover uses techniques similar to those that were used years later to solve the LBA problem. That’s right. I only realized this as I began to write up this post. It is remarkable that “tricks” from very different areas can be used to solve completely unrelated problems. Too bad we did not see the connection years ago. Oh well.

I have to tell you a bit about Albert in order to explain the history of this work. Just a bit. Albert was always intense, he was quick, he was full of ideas, and he was sometimes brisk. Yet, I loved talking with him, listening to his talks, and working with him. The theory community, and I personally, were diminished when he left the field.

Once at a conference I started to tell Albert about my latest result. Right after I had stated the theorem and started to explain the proof, he interrupted me and said “be quiet”. Clearly, he was trying to see if he could prove my result in real-time. Now the result was nice, not earth shattering, but I was pretty miffed that he would try to do this. So I complained to Albert that I should get credit for thinking of the problem, not just for the solution. He thought about that, and agreed. Along with his quick mind, he had a number of sayings that were very helpful. My favorite, which I still use, is his rule of {3}:

Prove the theorem for {3} and then let {3} go to infinity.

The point, of course, is that in mathematics the case of {2} “what-evers” is often special. But if you can prove the result for the case of {3}, then you are often golden. Like any rule there are exceptions, but it is a good one to know.

Vector Addition Systems

For vectors from {\mathbb{Z}^{d}}, we will say that {v \ge 0}, or that {v} is non-negative, provided {v_{i} \ge 0} for {i=1,\dots,d}.

A {d}-dimensional Vector Addition System (VAS) consists of a non-negative start vector {s} and a finite set {A} of arbitrary vectors, where {s} and the vectors of {A} all lie in {\mathbb{Z}^{d}}. We denote it by {\langle s,A \rangle}.

The key notion is that of reachable vectors: the set {R} of reachable vectors of a VAS {\langle s,A \rangle} is defined as follows. The start vector {s} is reachable. If a vector {x} is reachable, then so is {x+a} for any {a \in A}, provided {x+a} is a non-negative vector. Clearly, only non-negative vectors can be reachable.

A VAS can be viewed as a strange kind of counter machine. It has no finite state control, or, if you like, its finite state control has one state. The state of the machine is always a non-negative vector; initially the state is the special vector {s}. At any time the machine is in some state described by a non-negative vector. Suppose it is in the state {v}. Then, the machine makes a nondeterministic move: it nondeterministically selects some vector {a \in A} and tries to add it to {v}. If the vector {v+a} is non-negative, then the machine moves to that state; otherwise, the step is illegal.
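To make the definitions concrete, here is a small Python sketch of reachability. It simply explores the reachable vectors breadth-first up to a step bound; the function name and the bound are only illustrative, since the true reachable set may well be infinite and this is not a decision procedure.

```python
from collections import deque

def reachable_within(s, A, max_steps):
    """Breadth-first exploration of the reachable vectors of the VAS <s, A>,
    truncated after max_steps rounds of additions.  The true reachable set
    may be infinite, so this is an illustration, not a decision procedure."""
    seen = {tuple(s)}
    frontier = deque([tuple(s)])
    for _ in range(max_steps):
        next_frontier = deque()
        while frontier:
            v = frontier.popleft()
            for a in A:
                w = tuple(vi + ai for vi, ai in zip(v, a))
                # A move is legal only if the resulting vector is non-negative.
                if all(wi >= 0 for wi in w) and w not in seen:
                    seen.add(w)
                    next_frontier.append(w)
        frontier = next_frontier
    return seen

# Example: start at (1, 0) with moves (-1, 2) and (1, -1).
print(reachable_within((1, 0), [(-1, 2), (1, -1)], 5))
```

Note that the non-negativity check is exactly what makes the order of additions matter.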

Without the non-negativity constraint the set of reachable vectors is easy to describe. Suppose, as an example, that {A = \{a,b,c\}}. Then, the set of reachable vectors is:

\displaystyle  s + xa+yb+zc

where {x,y,z} are non-negative integers. This is a very simple geometric set, and it is easy to determine which vectors are in it. At worst membership can be written as an integer programming problem; thus, it is clearly decidable. The key to the VAS notion of reachable is that the order in which vectors from {A} are added to the current state is critical. This is what makes the notion of reachable difficult to understand.
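For contrast, here is a sketch of membership in the unconstrained set {s + xa + yb + zc}. The brute-force search over a small box is purely illustrative; the point is only that, without the non-negativity constraint on intermediate vectors, membership depends just on the multiplicities of the added vectors, which is an integer programming question.

```python
from itertools import product

def in_linear_set(s, A, target, bound=20):
    """Is target = s + x_1*A[0] + ... + x_k*A[k-1] for some non-negative
    integers x_i <= bound?  Brute force over a small box, purely to show
    that without the non-negativity constraint only the multiplicities
    matter, not the order of additions."""
    d = len(s)
    for coeffs in product(range(bound + 1), repeat=len(A)):
        v = list(s)
        for c, a in zip(coeffs, A):
            for i in range(d):
                v[i] += c * a[i]
        if tuple(v) == tuple(target):
            return True
    return False

print(in_linear_set((1, 0), [(-1, 2), (1, -1)], (0, 2)))  # True: x = 1, y = 0
```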

Reachable captures a computational process, not just a geometric notion. This is why the behavior of VAS’s is complex, and partly why the study of them is exciting. There are several additional reasons why we studied them. First, they were actually isomorphic to a popular model called Petri nets. Thus, questions about Petri Nets could be converted into questions about VAS’s.

Second, they were close to several other decision problems. Third, there is a close connection between VAS's and abelian semigroups–more on this later. Finally, the reason they were really interesting to me is that very strong theorists–Meyer, Paterson, Rabin–had worked on various aspects of their structure. For me, a junior faculty member, solving a problem related to their work seemed like a good idea.

The Result

The obvious questions about VAS's are: is the set of reachable vectors finite? And can the system reach a particular vector? The first question is relatively easy to solve, although the best known running time is Ackermann-like–the upper bound is astronomical. The second problem is called the Vector Addition Reachability Problem: given a VAS, is a given vector in the set of reachable vectors? I was eventually able to prove:

Theorem 1 The reachability problem for Vector Addition Systems is {\mathsf{EXPSPACE}} hard.

A curiosity is that I proved this before the following was proved:

Theorem 2 The reachability problem for Vector Addition Systems is decidable.

Clearly, lower bounds are a bit more interesting if the problem is decidable. My theorem would have been wiped out, if someone had been able to prove that the reachability problem was undecidable. I had thought hard about the upper bound, but failed completely, so I switched to proving a lower bound. You do what you can.

Getting The Result

The proof of the lower bound result for VAS reachability started with a conversation with Meyer at a STOC conference. He told me that he had just proved, with Paterson, that the {n}-dimensional VAS reachability problem required at least {\mathsf{PSPACE}}. I had been thinking about VAS’s and realized instantly that I could prove the same result with fewer dimensions. I knew a couple of tricks that Albert did not, so I played it cool and said nothing to Albert. But once back at home–Yale at the time–I worked hard until I had a proof of the {\mathsf{EXPSPACE}} lower bound.

I then called Albert to tell him the news. It was a strange phone call. He did not believe that I could prove such a theorem. The conversation consisted of Albert listing a variety of mistakes that I must have made. I kept saying no I did not do that, or no I realize that. Finally, the call ended without me saying anything to him about the proof.

As soon as I hung up, I realized there was a “bug” in my proof. Not anything that Albert had raised, but a bug nevertheless. I still remember taking a deep breath and saying to myself that I can fix this–just stay calm. In less than an hour the bug was gone, and I had the lower bound. Months later I got to explain the details to Albert.

Main Ideas of The Proof

I will now sketch the key ideas of the lower bound proof.

A VAS is not a very friendly type of machine to program. Recall that its state is just a non-negative vector; there is not even a finite state control. We next define a friendlier type of machine, which we call an {n}-dimensional V machine. This is a nondeterministic machine that has {n} V counters and a finite state control. A V counter is a device that contains a natural number and supports the following operations:

  1. Increment {C} by {1};
  2. Decrement {C} by {1}; however, if {C=0}, then the nondeterministic computation dies.

It is easy to prove that an {n}-dimensional V machine can be simulated by an {(n+O(1))}-dimensional VAS. The insight is that the finite state control is encoded into the first few coordinates of the state of the system.
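Here is one way such an encoding can go, written as a Python sketch. The transition format and function name are just illustrative choices, not the original construction, and self-loops are assumed to be split through a fresh intermediate state.

```python
def v_machine_to_vas(num_states, num_counters, start_state, start_counts, transitions):
    """Encode an n-counter V machine as a VAS of dimension num_states + n.

    The first num_states coordinates hold a one-hot encoding of the control
    state; the remaining coordinates hold the counters.  Each transition is
    a tuple (q, q2, counter, delta) with delta in {+1, -1} and q != q2 (a
    self-loop can be split through a fresh intermediate state).  The
    non-negativity rule enforces both that a move fires only in state q and
    that decrementing an empty counter kills the run."""
    d = num_states + num_counters
    s = [0] * d
    s[start_state] = 1
    for j, c in enumerate(start_counts):
        s[num_states + j] = c
    A = []
    for (q, q2, counter, delta) in transitions:
        a = [0] * d
        a[q] -= 1                          # leave state q (illegal unless we are in q)
        a[q2] += 1                         # enter state q2
        a[num_states + counter] += delta   # increment or decrement the chosen counter
        A.append(tuple(a))
    return tuple(s), A

# Two states, one counter: increment and move to state 1, then decrement and return.
s, A = v_machine_to_vas(2, 1, 0, [0], [(0, 1, 0, +1), (1, 0, 0, -1)])
```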

There is good and bad news. The good news is that an {n}-dimensional V machine has a real finite state control, is nondeterministic, and has {n} counters; these counters can hold arbitrarily large numbers. The bad news is that V machines have no way of testing a counter for {0}.

Here are the highlights of how we add zero testing to counters.

{ \bullet } We use an inductive construction to build a series of counters that allow zero testing, and so that each successive one can count to a larger number than the previous one. More precisely, if the {i^{th}} counter can count to {M}, then the next one can count to {M^{2}}. Thus, at each step the size of the counters squares; in {n} steps, one gets to a double-exponential-size counter, i.e. one that counts to

\displaystyle  2^{2^{n}}.

{ \bullet } We use the fact that a fixed number of counters that can hold numbers up to {M} can simulate a Turing tape of length {\log M}. This is really a simple idea: use the counters to simulate two pushdown tapes. Since two pushdowns clearly equal one Turing tape, this suffices. A counter can simulate a binary pushdown, since the pushdown contents viewed as a binary number require only simple operations: pushing a bit is doubling and possibly adding one, and popping is halving.

{ \bullet } During this inductive construction we must figure out how to test larger and larger counters for zero. We do this by having two counters that are always paired, in the sense that the sum of their values is fixed. Thus, we can replace a test for zero by a test that the other counter is large.

This last is the key trick, so let me say a bit more. Imagine that {C_{1} + C_{2} = M^{2}} always. If we want to add {1} to the “counter” we add {1} to {C_{1}} and subtract {1} from {C_{2}}. We reverse this if we want to subtract. Finally, to test if {C_{1} =0}, we convert that into testing whether or not {C_{2} \ge M^{2}}. We can do this by using two {M}-sized counters and a double loop: the loop makes {M \times M = M^{2}} decrements to {C_{2}}. All of these decrements succeed only if {C_{1}} was zero; the same double loop is then run again to restore {C_{2}} and preserve the invariant.
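Here is a small Python sketch of this paired-counter trick, with a DeadRun exception standing in for a nondeterministic branch that dies. In the real construction the {M \times M} double loop is itself driven by two smaller, zero-testable counters, which the sketch glosses over; the names are illustrative only.

```python
M = 4  # the smaller counters count up to M; the paired counters count to M*M

class DeadRun(Exception):
    """Raised when a V counter is decremented at zero: that branch simply dies."""

def dec(counters, i):
    if counters[i] == 0:
        raise DeadRun()
    counters[i] -= 1

def inc(counters, i):
    counters[i] += 1

def test_zero_big(counters, c1, c2):
    """Zero-test the big counter c1, given the invariant c1 + c2 == M*M.
    We decrement c2 exactly M*M times via a double loop; any decrement of an
    empty counter kills the run, so surviving certifies c2 >= M*M, i.e.
    c1 == 0.  The double loop then runs again to restore c2."""
    for _ in range(M):
        for _ in range(M):
            dec(counters, c2)
    for _ in range(M):
        for _ in range(M):
            inc(counters, c2)

counters = [0, M * M]          # big counter is zero: the test passes
test_zero_big(counters, 0, 1)
print("survived:", counters)

counters = [1, M * M - 1]      # big counter is non-zero: this branch dies
try:
    test_zero_big(counters, 0, 1)
except DeadRun:
    print("run died: the big counter was not zero")
```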

Abelian Semigroups

As soon as Albert understood my proof he asked if the V machines I used in the lower bound proof were reversible. I thought for a while and we both worked out that they were. This meant that we could prove the following theorem:

Theorem 3 The word problem for abelian semigroups is {\mathsf{EXPSPACE}} hard.

The idea is this: suppose that the abelian semigroup has three generators. Then, an element of the semigroup is described by

\displaystyle  a^{x}b^{y}c^{z}

where {x,y,z} are natural numbers. A semigroup is defined by equations on the generators such as

\displaystyle  abc = a^{3}c^{2}.

These equations are reversible in the sense that both the following transformations are allowed:

\displaystyle  abc \rightarrow a^{3}c^{2} \text{ and } a^{3}c^{2} \rightarrow abc.

Since the V machines were reversible, the word problem for abelian semigroups could be shown to have the same {\mathsf{EXPSPACE}} lower bound.
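A tiny sketch may help make the correspondence concrete: a commutative word is just its exponent vector, and a reversible relation can be applied in either direction, subject to the same kind of non-negativity constraint as a VAS move. The representation below is only illustrative.

```python
# A commutative word over generators a, b, c is just its exponent vector (x, y, z).
LHS = (1, 1, 1)   # the word abc
RHS = (3, 0, 2)   # the word a^3 c^2

def apply(word, frm, to):
    """Rewrite one occurrence of the commutative word 'frm' into 'to' inside
    'word', or return None if 'frm' does not occur (some exponent would have
    to go negative) -- the same non-negativity constraint as a VAS move."""
    if any(w < f for w, f in zip(word, frm)):
        return None
    return tuple(w - f + t for w, f, t in zip(word, frm, to))

w = (2, 1, 1)                  # a^2 b c
print(apply(w, LHS, RHS))      # (4, 0, 2): rewriting abc -> a^3 c^2 gives a^4 c^2
print(apply(w, RHS, LHS))      # None: a^3 c^2 does not occur in a^2 b c
```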

The details of this were worked out by Meyer and his graduate student E. Cardoza. They published the final version of our paper without me; we planned a joint paper, but I had an illness in my family and could not help with the paper.

Upper Bounds

Recall that the lower bound was proved first; only later was the problem shown to be decidable.

Theorem 4 The reachability problem for Vector Addition Systems is decidable.

This theorem has an interesting story of its own; its history falls into three acts. In act I, as with many hard problems, the theorem was claimed a number of times: each of these claims was found to be incorrect. For example, George Sacerdote and Richard Tenney–two theorists–claimed a proof that eventually was shown to have lemmas that failed even in the {1}-dimensional case. The reachability problem is a tricky problem, and intuition about it was often faulty. I am terrible at geometric reasoning myself, and I can understand how people could easily be mistaken.

In act II, two proofs were found: the first by Ernst Mayr and shortly thereafter a proof by Rao Kosaraju. Many did not believe Mayr’s proof, since it was unclear in many places. Understanding these long complex arguments, with many interlocking definitions and lemmas, is difficult. Kosaraju created his proof because he felt Mayr’s proof was wrong. You can imagine there was an immediate controversy. Which proof was right? Both? Neither? There were claims that Kosaraju’s proof was just Mayr’s proof, but written in a clear way. The controversy got messy very quickly.

Finally, in act III there is now a new proof by Jérôme Leroux that seems much cleaner and more likely to be correct. More on this in the future.

Open Problems

The main open problem is to improve the lower bound on the reachability problem for VAS's. There is currently a huge gap between the lower bound of {\mathsf{EXPSPACE}} and the upper bound, which is not even primitive recursive. The current construction seems to be pushed as far as it will go, and I see no way to improve it. But, perhaps you will see something that we all have missed.

Finally, a lesson here is that theorems from very disparate areas of theory can actually use the same proof methods. I am surprised that the guess-and-count ideas that I used years ago are close to the methods needed to solve the LBA problem. In the VAS and the LBA proofs we need to use counters, and we also need to use nondeterminism. There is no direct link between the proofs, but I think there are some similar ideas being used. Certainly the idea, used in the VAS proof, that to prove something is small we prove something else is large, is related to the use of counters in the LBA proof.

Of course I wish I had seen the connection years ago. One wonders: if more people had understood the VAS lower bound, would the LBA problem have been solved years earlier? What other undiscovered connections are lurking out there?

16 Comments
  1. April 10, 2009 6:11 pm

    Kudos for another lovely post! Apart from being scientifically very interesting, it brought back memories of a discussion I had with Albert as a second-year PhD student during the conference dinner for LICS 1989 in Asilomar. It was an experience that I have fondly remembered ever since.

    However, I wonder why you say “While he was a theorist, he helped invent the polynomial time hierarchy, proved lower bounds on real-time Turing machines, showed huge lower bounds on various logical theories, proved the sharpest hierarchy theorem known, and more.” Wasn’t he also a theorist when he was working on logic and the semantics of programming languages? In my book he definitely was, as he was also a logician when he proved that the weak monadic second order theory of successor is not elementary recursive. I feel that the equation

    theorist = complexity theorist

    gives far too narrow a view of the broad field of TCS.

    Thanks a lot for your very readable and inspiring posts.

    • rjlipton permalink*
      April 10, 2009 7:55 pm

      I agree. Once a theorist, always one.

    I do take a wide view of theory. I was thinking of missing him at STOC/FOCS etc.

  2. April 11, 2009 5:03 am

    In fact, as far as I recall, Albert was the prime mover behind the establishment of the LICS conference series. Perhaps this was his reaction to the lack of coverage that FOCS/STOC was giving to his new fields of research interest. I was a beginning PhD student then, so I do not know the details. Others might want to elaborate on the birth of LICS, which has since then become a very strong (some say the strongest) conference in TCS volume B.

  3. April 16, 2009 10:58 am

    I would like to echo Luca both in thanking you for your truly excellent posts, and agreeing with him that the definition of “theory” is often taken to be rather narrow by the STOC/FOCS community… 🙂 In fact, I hear (from second-hand sources, so this could well be wrong!) that Jerome Leroux’s breakthrough paper was rejected from FOCS for being “out of scope”. It will now appear at this summer’s LICS.

  4. rjlipton permalink*
    April 16, 2009 2:27 pm

    I did not know that. I cannot imagine that his paper would not be an obvious “accept”. But that is just my opinion.

  5. April 17, 2009 12:04 pm

    Thanks for the nice post.

    Joel is right. One of the three FOCS reviews was “I think that the author should find a conference that is better suited for this paper”. 🙂

  6. none permalink
    June 9, 2009 5:59 am

    Jeff Lagarias has a big collection of Collatz-related results. I think that the problem is proved undecidable if you replace 3 with some other possible parameters. You can’t let 3 go to infinity for that problem.
