It’s All Algorithms, Algorithms and Algorithms

September 22, 2009


All results, whether upper or lower bounds, are based on new programming tricks

Donald Knuth has won the Turing Award, and almost every other prize possible. He created the modern field of the analysis of algorithms, and his work in combinatorics and algorithms has had a tremendous impact on the field. He also created TeX, which allows us to write mathematics that looks great. Thanks, Don.

Today I want to discuss one reason why I think that P=NP is at least a possible outcome. It has to do with Knuth’s Dictum.

Knuth’s Dictum is: Computer Science is algorithms. That’s it. Not just theory. All. Everything. If there is not an algorithm lurking around, then it is not Computer Science. I am not sure I have always agreed with this, but I think it is probably a reasonable statement. Back to this in a moment.

Knuth has a rule, not a dictum, that when he cites you in his famous textbooks, The Art of Computer Programming, he must have your full name. He wants your first, middle, and full last name. Nothing less will do.

Today, with Google, getting full names is a bit easier than back in the 1970’s when he was editing his second volume, although it is still not trivial to get even the first and last names of some people. I know because I have a rule of my own: if I mention you in a post, I want your full first and last name. I decided from the beginning that it was courteous to always refer to everyone by at least their first and last name. This rule has not always been easy to follow, since many people do not use their full name. I have sometimes had to work pretty hard to get a full name, but so far I believe I have always gotten it.

Knuth’s rule is in another league—the big leagues. He will not settle for less than the full name, middle name included. In the seventies Knuth sent me a letter—this was before email—asking me for my full name. I love to have fun, I hope that comes through, and I decided to write a letter back. The letter I sent him went like this:

[Image: the letter I sent to Knuth]

I never heard a peep from him about this, but I was correctly cited in the next edition.

Let’s turn now to one reason why I feel the way I do about P=NP.

Algorithms, Algorithms, and Algorithms

I have a thesis that I wish to advance—it is a corollary to Knuth’s Dictum. The thesis is that all the advances in computational complexity theory, all advances, have come from the discovery of new algorithms or programming “tricks”. Every advance has been the result of someone discovering a new way to program. Sometimes the programming was explicit, as when someone discovered a new algorithm; sometimes it was implicit, as when, for example, someone proved a lower bound. But all advances come from new programming tricks.

These tricks may involve programming nondeterministic machines in some new way, using randomness in some novel manner, or programming circuits in some clever way. But, it’s algorithms, always algorithms.

Here are some examples of what I mean. The examples are from famous results that are not stated directly as algorithmic papers.

• Neil Immerman and Róbert Szelepcsényi proved that nondeterministic space is closed under complement. This is a theorem all about space classes, but at the core is a clever algorithm that proves that a path does not exist in a directed graph. They use a counting trick to prove this.

• Alexander Razborov and Roman Smolensky proved that ACC^0[p] cannot compute the mod q function, provided p is a prime different from q. The core of this proof is again clever programming of low-degree polynomials. For example, one of the key steps is to show how to approximately compute the OR function with a low-degree polynomial (see the sketch after this list).

• Omer Reingold proved that undirected reachability can be done in logspace. Essentially he de-randomized the previous randomized algorithm. His method is a very clever use of the zig-zag product of Omer Reingold, Salil Vadhan, and Avi Wigderson. Again, at the core is an algorithm for searching graphs. For example, one simple subroutine is how to search an expander: just do a brute-force search. A much more important part of his proof is how to construct the edges of the result of applying many zig-zag products to a graph. This is an algorithm that must run in logspace for the whole proof to work.

• Mark Braverman recently proved that AC^0 can be fooled by weakly independent random variables. Part of the proof is the construction of certain polynomials that compute various required predicates. This requires that Mark be an expert AC^0 programmer. See Luca Trevisan’s post for a nice summary of this.
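
Here is a minimal sketch, in Python, of the Razborov-Smolensky trick for OR (my illustration; the parameters p and t are mine, not those of the original paper). Pick a few random linear forms over F_p; by Fermat’s Little Theorem, L^{p-1} is 1 exactly when L is nonzero, so a short product of such powers computes OR with small one-sided error:

    import random

    def approx_or(x, p=3, t=5, rng=random):
        # Evaluate a random polynomial of degree t*(p-1) over F_p that
        # agrees with OR(x) except with probability at most p**(-t).
        # On the all-zero input every linear form vanishes, so 0 is
        # always the right answer; on a nonzero input each random
        # linear form is nonzero with probability 1 - 1/p, and Fermat's
        # Little Theorem turns "nonzero" into the value 1.
        prod = 1
        for _ in range(t):
            lin = sum(rng.randrange(p) * xi for xi in x) % p
            prod = (prod * (1 - pow(lin, p - 1, p))) % p
        return (1 - prod) % p

    # The error is one-sided and at most p**(-t), here 3**(-5).
    for x in ([0] * 8, [0] * 7 + [1], [1] * 8):
        bad = sum(approx_or(x) != int(any(x)) for _ in range(2000))
        print(x, "empirical error rate:", bad / 2000)

The whole polynomial has degree only t(p-1), which is what makes the method bite.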

I will not go into details on all possible examples, but they do include many others, for example: IP = PSPACE, Seinosuke Toda’s Theorem, and the PCP Theorem.

P=NP and Knuth’s Dictum

The relationship to P=NP is simple, in my mind. Since we are constantly discovering new ways to program our “machines”, why not a discovery that shows how to factor, or how to solve SAT?

Why are we all so sure that there are no great new programming methods still to be discovered? Methods that could allow us to program a random polynomial time Turing machine to factor, or to solve SAT? I am puzzled that so many are convinced these problems could not fall to new programming tricks, when discovering such tricks is exactly what they do each and every day in their own research. I am confused.

Open Problems

Take a look at some result that you know. See if I am right that at the core it’s based on a programming trick of some kind. I would be surprised if you can suggest any results that do not fall into this category.

22 Comments
  1. September 22, 2009 12:01 pm

    For what it’s worth, I agree completely with this post … and yet there is a quotation that argues so beautifully against it, that I will post that quotation (mainly for students who may never have seen or heard it).

    It is Alexander Grothendieck’s description of mathematical problem-solving: “The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration. … the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far o

  2. September 22, 2009 12:12 pm

    (cont’d after the WordPress-breaking “ff”-ligature … hopefully)
    ——————————–

    … [the water is so far] off

  3. September 22, 2009 12:20 pm

    Oh, blast these invisible characters … anyone who cares to see the full Grothendieck quote and its BibTeX reference can find it here. To everyone else, my apologies.

  4. September 22, 2009 1:39 pm

    I don’t see how Grothendieck’s quote proposes a different view. He talks about the very high-level picture of conquering a problem. At the core, there may still lie an algorithm.

    Algorithms/”programming” tricks lie at the core of many combinatorial results, I think. Bijective proofs give algorithms for encoding objects of one class as objects from another. Some are very clever algorithms, like the bijection that encodes permutations as pairs of Young tableaux and vice versa (incidentally due to Robinson-Schensted-*Knuth*). It might just be a matter of one’s way of thinking, but I can also view generating function proofs in this way. We program a polynomial to compute a number sequence and then we use ready-made “subroutines” from analysis and algebra to compute some interesting property of the sequence. I think this is as nice a programming trick as you can get in pure mathematics.
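
    For instance, here is a minimal sketch of that idea in Python (an illustration only, with SymPy standing in for the ready-made analysis subroutines): we “program” the Fibonacci numbers as the rational function x/(1 - x - x^2), and a series-expansion subroutine reads the sequence back off.

        from sympy import symbols, series

        x = symbols('x')
        # "Program" the sequence as a rational generating function,
        # then call a ready-made analysis "subroutine" to read the
        # terms off as coefficients.
        print(series(x / (1 - x - x**2), x, 0, 10))
        # x + x**2 + 2*x**3 + 3*x**4 + 5*x**5 + 8*x**6 + 13*x**7
        #   + 21*x**8 + 34*x**9 + O(x**10)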

  5. September 22, 2009 1:43 pm

    Nice post. Here are more examples:

    1) At the heart of natural proofs is an algorithm for breaking certain PRFs.
    2) Allender-Gore’s proof that TC0 is properly contained in PP has a crucial algorithm to prove a uniform version of Yao’s and Beigel-Tarui’s characterization of ACC.

    I agree that most of the results are algorithms. But I see this as bad news, because we haven’t explored many “non-constructive” techniques for proving lower bounds.

    I haven’t seen a theorem that states “A certain group theoretic object exists, hence class A is (not) equal to class B”. If you know such theorems please leave a comment.

    Geometric complexity theory explores such techniques. As Prof. Mulmuley mentioned at the Barriers workshop, “exploring GCT requires a massive collaboration between TCS and mathematics”. In the near future, group theory and representation theory should be made part of the curriculum for new complexity theory PhD students. This will hopefully stir some new approaches.

  6. September 22, 2009 1:58 pm

    Thank you, Kevembuangga. Yes, that’s the McLarty article I was quoting.

    Another article by McLarty having a similar theme is “The Last Mathematician from Hilbert’s Göttingen: Saunders Mac Lane as Philosopher of Mathematics”. Leila Schneps has a 2008 review in Mathematical Intelligencer titled “The Grothendieck-Serre Correspondence” that is very good too.

    All three of these articles can be read as in-depth descriptions of the struggle of mathematicians like Mac Lane and Grothendieck to escape from the thrall of algorithms that Lipton’s (wonderfully thought-provoking) post describes—although they do not *have* to be read this way.

    I sincerely hope that the above lines are not contaminated by the non-ASCII characters that seemingly have crept into my BibTeX database, for which I again apologize.

  7. adam o permalink
    September 22, 2009 11:43 pm

    Information-theoretic crypto comes to mind as an area where this may not be the case (same for error-correcting codes, although I guess one could argue that this is traditionally not part of CS). There are some schemes based on the existence of purely combinatorial objects and so on. I can’t recall a specific compelling example now, though.

  8. adam o permalink
    September 22, 2009 11:46 pm

    Oh, I see now that you were just talking about complexity theory.

  9. September 23, 2009 9:02 am

    Well, it’s pretty clear that I read Dick’s post differently from many people … and so if this view is wrong, hopefully someone should correct it.

    From a strictly formal (and therefore, kind of boring) point of view, complexity theory *is* all about algorithms … and so is every other branch of mathematics, and engineering (fundamental physics is quite different; it’s about experiments).

    So perhaps a better way to read Dick’s post is that it is asking “Is complexity theory really the *search* for algorithms?” And one reason Dick’s question is so great, is that it broadens very naturally to “Is every branch of mathematics, engineering, chemistry, and biology really the *search* for algorithms?” (again, fundamental physics is an exception).

    Now, who wants to spend their lives in an NP-hard search for algorithms? Is there any conceivable alternative?

    Mathematicians like Mac Lane and Grothendieck deserve great credit (IMHO) for attempting to conceive alternative ways of doing mathematics.

    Can we extend their approach(es) to *other* disciplines? Are there ways to do engineering, chemistry, and biology *other* than searching for algorithms? That is a question that many people are thinking about.

    Students might ask: “But … wait a second … wasn’t I taught that disciplines like chemistry and biology are experimental, not algorithmic?” To which the emerging reply is “Oh, that point-of-view is *so* twentieth century!” 🙂

  10. Josh permalink
    September 25, 2009 6:29 pm

    Shiva: you are correct about Geometric Complexity Theory, but you didn’t mention a big point that’s very relevant to this post: the heart of Mulmuley’s proof proposal is to find an algorithm that will show that the aforementioned group-theoretic objects exist! (In fact, the algorithm is just linear programming; proving that it always answers “yes, the object exists” is the hard part.)

    On another note: perhaps it should not be so surprising that all our results are algorithms, given the close connection between “proof” and “algorithm” which has existed since Turing/Church/et al. and only been strengthened in recent decades through things like descriptive complexity (which, incidentally, is the route which Immerman took to NL=coNL).

    Finally, another very commonplace example: nearly all NP-completeness proofs are just programming. To show that Hamiltonian Circuit is NP-complete, for example, a standard proof simply programs 3SAT into the “Hamilton Circuit programming language.”
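
    As a concrete illustration, here is the reduction-as-compiler idea in Python (a sketch only, and using the shorter textbook reduction from 3SAT to Clique rather than the Hamiltonian Circuit gadgets):

        from itertools import combinations

        def sat_to_clique(clauses):
            # "Compile" a 3SAT formula into a graph: one vertex per
            # literal occurrence; connect occurrences from different
            # clauses unless they contradict (x and NOT x).  The
            # formula is satisfiable iff the graph has a clique of
            # size len(clauses).
            vertices = [(i, lit) for i, cl in enumerate(clauses) for lit in cl]
            edges = [(u, v) for u, v in combinations(vertices, 2)
                     if u[0] != v[0]
                     and not (u[1][0] == v[1][0] and u[1][1] != v[1][1])]
            return vertices, edges, len(clauses)

        # Literals are (variable, is_positive) pairs.
        phi = [(("x1", True), ("x2", True), ("x3", True)),
               (("x1", False), ("x2", True), ("x3", False)),
               (("x1", True), ("x2", False), ("x3", False))]
        V, E, k = sat_to_clique(phi)
        print(len(V), "vertices,", len(E), "edges, target clique size", k)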

  11. September 26, 2009 11:47 am

    It has been surprising (to me) that Dick’s provocative assertion “It’s all algorithms” has elicited rather few responses, given that it asserts what Niels Bohr called a “Great Truth” (meaning: a true statement whose opposite is also a true statement) … I had expected that many folks would find much to agree with in it … and much to disagree with too.

    So here is an engineering-oriented essay that takes *both* sides.

    We will start by noting that various mathematical disciplines differ in their emphasis on algorithms, and we will simply assert (although it is debatable) that the mathematical discipline that traditionally focuses *least* upon algorithms is geometry, together with its modern heir, category theory.

    For authorization, we can cite Colin McLarty’s article The last mathematician from Hilbert’s Göttingen: Saunders Mac Lane as philosopher of mathematics, which includes the lovely quotation: “[Geometric approaches / category theory] does not itself solve hard problems in topology or algebra. It clears away tangled multitudes of individually trivial problems. It puts the hard problems in clear relief and makes their solution possible.”

    Now, it is *not* our job as engineers to solve hard problems … rather it is our job to “clear away tangled multitudes of individually trivial problems” … and for this algorithmic brush-clearing the main engineering tool nowadays is the mighty lawnmower of computer science.

    This attitude is very much in the spirit of the (wonderful) book A=B by Petkovšek, Wilf, and Zeilberger, whose introduction by Donald Knuth is itself a (justly famous) essay on algorithms and computational complexity.

    The upshot is that systems engineers nowadays try to have it both ways regarding Lipton’s maxim “It’s all algorithms”. On the one hand, we fight against it by using geometric ideas and category theory to gain a non-algorithmic perspective on system complexity; this helps us to isolate the genuinely hard (meaning, unsolved) problems in systems analysis and simulation.

    But then we pivot our attitude 180 degrees, and *embrace* the idea “it’s all algorithms” … so that we can set to work at brush-clearing “individually trivial problems” using the powerful algorithms (and hardware) of modern computer science … and thus solve real-world problems.

    It is true, however, that systems engineers very seldom—really never, for all practical purposes—regard practical problems as being “individually trivial” … for the very good reason that in fields like synthetic and conformational biology (for example), the “trivial” dynamics of (say) individual chemical bond angles adds up to the highly nontrivial behavior of organisms as a whole … namely us. This is summed up in the systems engineering maxim that “quantity has a quality all its own.”

    In summary, the investigation of what McLarty calls “tangled multitudes of individually trivial problems”—which some mathematical purists disdain—is nothing more than the modern discipline “systems engineering” … of which some folks are very fond!

    An example of a discipline in which systems engineers are nowadays seeking to have it both ways is the simulation of quantum system dynamics. A wonderful textbook that presents an algorithmic view of quantum dynamics is Nielsen and Chuang’s Quantum Computation and Quantum Information.

    What do these *same* quantum dynamic problems look like from a geometric/category theory perspective? What (unsolved) “hard problems” are cast into “clear relief” by adopting this perspective? What “tangled multitudes of individually trivial problems” are ready for the A=B lawnmower of computational analysis and simulation? These are much the same questions—and asked in much the same spirit—that Ketan Mulmuley’s geometric complexity theory program is tackling.

    These are Great Questions … “great” because their answers are bound to be wonderful, no matter what the answers are.

    Unsurprisingly, that mighty brain-box Gary Larson has drawn a cartoon on this subject … not “The One About the Flaming Arrows”, but “The One About the Midvale School for the Gifted” (reference below).

    The preceding has been a systems engineering view of the assertion “It’s all algorithms” … needless to say, other views are perfectly reasonable … this being an assertion regarding which it is neither necessary nor desirable that everyone think alike.

    This is why Dick Lipton’s blog (and complexity theory blogs in general) are so provocative … and so much fun! We engineers are very grateful for this! 🙂

    — BibTeX and figures relating to (non?)algorithmic views of computational complexity —

    @article{McLarty:2007, Author = {McLarty, Colin}, Journal = {British J. Philos. Sci.}, Number = {1}, Pages = {77--112}, Title = {The last mathematician from {H}ilbert’s {G}\”ottingen: {S}aunders {M}ac {L}ane as philosopher of mathematics}, Volume = {58}, Year = {2007}}

    @book{Petkovsek:1996, Author = {M. Petkov\v{s}ek and H. S. Wilf and D. Zeilberger}, Publisher = {A.~K.~Peters, Ltd.}, Title = {$A=B$}, Year = {1996}}

    figure: informatic cartoon based upon “The FarSide: Midvale School for the Gifted”
    “faculty.washington.edu/sidles/QSEPACK/Kavli/Midvale_informatic.png”

    figure: pullback-theoretic cartoon of Nielsen and Chuang’s “Quantum Computation and Quantum Information”
    “faculty.washington.edu/sidles/QSEPACK/Kavli/Litotica_ski_diagram.pdf”

  12. Jonathan Katz permalink
    September 26, 2009 10:39 pm

    I figured someone should mention a counterexample, even if it’s an old and obvious one: Shannon’s proof by counting argument that most functions have (almost) exponential circuit complexity. Surely there must be other non-constructive proofs like this?

  13. September 26, 2009 11:39 pm

    Jonathan Katz’ post is entirely correct that Claude Shannon’s 1948-9 derivation of the fundamental limits to channel capacity was (a) largely nonconstructive, and (b) explicitly geometric (as contrasted with algorithmic).

    For example, key sections of Shannon’s classic article Communication in the presence of noise are titled “III. Geometrical Representation of the Signals”, “IV. Geometrical Representation Of Messages”, and “V. Geometrical Representation of the Transmitter And Receiver.”

    Not everyone was enamored of Shannon’s geometric approach. See, for example, Doob’s 1949 AMS review of Shannon’s work (MR0026286 (10,133e) 60.0X), whose (snarky?) conclusion was “[Shannon’s] discussion is suggestive throughout, rather than mathematical, and it is not always clear that the author’s mathematical intentions are honorable.”

    Subsequent developments in communication theory and practice have validated Shannon’s framework; it is a mainstay (for example) of design work in atomic-resolution quantum spin biomicroscopy—microscopy being viewed as a (two-way!) communication channel between the sample and the observer; and our articles take care to cite Shannon’s (exceedingly useful) channel capacity theorems.

    So yes, Shannon’s work is (seemingly) a good example of a non-algorithmic (or minimally algorithmic) framework yielding solidly useful results.

    — BibTeX follows —

    @article{Shannon:1949, Author = {Shannon, C. E.}, Journal = {Proceedings of the Institute of Radio Engineers}, Pages = {10--21}, Title = {Communication in the presence of noise}, Volume = {37}, Year = {1949}}

    @article{Sidles:2009, Author = {J. A. Sidles}, Journal = {Proc. Nat. Acad. Sci. USA}, Number = {8}, Pages = {2477--2478}, Title = {Spin microscopy’s heritage, achievements, and prospects}, Volume = {106}, Year = {2009}}

  14. September 27, 2009 1:47 am

    The proof that parity has formula complexity at least n^{3/2} that considers random restrictions (I may have misremembered the details but they don’t affect the general point) doesn’t seem to me to be algorithmic in any interesting way. I’d say the same about the switching lemma and constant depth circuits. And if you interpret “algorithmic” in a suitably broad sense, why does it give any evidence towards P=NP? Surely it is just as much evidence that one day we’ll find an algorithm clever enough to demonstrate that P does not equal NP.

    • rjlipton permalink*
      September 27, 2009 3:34 pm

      You make two points. The latter is correct: if I had to bet I would bet that if P is proved to be different from NP, a new algorithm will play a role. So I agree with that.

      On the first point, I think one can argue for my thesis. For example, constant depth circuit lower bounds use an algorithm. The algorithm is a method to simulate an OR gate with small error by a low degree polynomial. I think that is a real algorithm: it samples the bits, computes the parity, and then takes a majority.
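
      Roughly, in Python (a sketch only; the parameter t is illustrative, and the sampled parities are combined with an OR, which keeps the error one-sided):

          import random

          def approx_or_by_parities(x, t=10, rng=random):
              # Each round samples a random subset of the bits and
              # computes its parity: the parity is always 0 when x is
              # all zeros, and is 1 with probability 1/2 whenever some
              # bit of x is 1.
              votes = [sum(b for b in x if rng.random() < 0.5) % 2
                       for _ in range(t)]
              # Combining the t sampled parities leaves one-sided error
              # at most 2**(-t) on inputs with OR(x) = 1.
              return int(any(votes))

          x = [0, 0, 1, 0, 0, 0]
          print(approx_or_by_parities(x), "should equal", int(any(x)))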

      • September 28, 2009 10:45 am

        I’m not completely sure, but I think we’re talking about different proofs here. The one I’m talking about uses Hastad’s switching lemma to prove that if you randomly restrict then with high probability you reduce the depth, or something along those lines. And I’m not sure in what interesting sense that is algorithmic.

      • Josh permalink
        September 28, 2009 12:45 pm

        Interestingly, whether or not the original proof by random restrictions is algorithmic, it can be derandomized (Manindra Agrawal, Eric Allender, Russell Impagliazzo, Toniann Pitassi, Steven Rudich, Reducing the complexity of reductions. Computational Complexity 10(2): 117-138, 2001). In particular, this implies that there is a deterministic algorithm which, given a constant depth circuit, outputs a string whose parity the circuit computes incorrectly.

        Furthermore, if you look at Fortnow-Laplante’s proof of the switching lemma, which is based on Razborov’s proof, then you see that in fact even the random restrictions proof can be thought of directly as being algorithmic. Fortnow-Laplante argue by Kolmogorov complexity, where the arguments proceed by showing that there is an algorithmic description of certain combinatorial entities and then deriving bounds on Kolmogorov complexity from these descriptions. Although not quite the same as “a new algorithm,” I’d say it’s similar in spirit.

  15. October 5, 2009 1:30 pm

    Another example that we could add to the list would be Nisan’s proof that RL is a subset of SC^2. At the heart of his proof is the approximation of a probability transition matrix by an efficient randomized algorithm that uses very few bits of randomness, which lets him simulate any algorithm in the former class by one in the latter. This result has been considered one of the better results of the 90’s.
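
    As a toy illustration of the matrix-powering idea (a sketch only, not Nisan’s actual construction): approximate many steps of a random walk by repeated squaring, truncating entries to limited precision after each squaring, which is the kind of space-frugal approximation the proof needs.

        import numpy as np

        def approx_power(P, t, precision_bits=20):
            # Approximate P**t (t a power of two) by repeated squaring,
            # truncating entries after each squaring; a toy stand-in for
            # the low-precision matrix powering in such proofs.
            scale = 2.0 ** precision_bits
            M = P.copy()
            for _ in range(int(np.log2(t))):
                M = np.floor((M @ M) * scale) / scale
            return M

        # Lazy random walk on a 4-cycle: after many steps the walk is
        # nearly uniform, and the truncated powering still sees that.
        P = np.array([[0.50, 0.25, 0.00, 0.25],
                      [0.25, 0.50, 0.25, 0.00],
                      [0.00, 0.25, 0.50, 0.25],
                      [0.25, 0.00, 0.25, 0.50]])
        print(approx_power(P, 1024).round(3))   # entries near 0.25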

Trackbacks

  1. Highlights of FOCS Theory Day « Gödel’s Lost Letter and P=NP
  2. Web Roundup: Links for May - My blog
