
Complexity as Enabler

September 8, 2011


How to be a good cop on the problem-solving beat

Guy Blelloch is best known for his seminal work on parallel computing; if you need advice on parallelism he is the one to ask. Not surprisingly, he is a strong advocate of parallel programming. See his essay, “Is Parallel Programming Hard?” Spoiler alert: he argues that it is not.

Today we wish to talk about complexity theory as an enabler of problem solving, rather than a disabler when a problem is proved hard for {\mathsf{NP}} or some other complexity class of difficult problems.

Blelloch’s approach emphasizes how parallel structure emerges from specific problems, rather than focusing on control in parallel machine systems. The algorithms realizing this structure are informed by computational complexity theory. For instance, the “scan programs” in the book based on his Ph.D. thesis are influenced by theorems relating vector machine models to the circuit classes of the {\mathsf{NC}} hierarchy.

Blelloch has taught a course at Carnegie Mellon titled “Algorithms in the Real World” for many years. This covers classic algorithms in text compression, string searching, computational biology, high-dimensional geometry, linear versus integer programming, cryptography, and others. Some of these areas are informed by complexity, such as {\mathsf{NP}}-hardness, reduction to and from factoring, and hardness of approximation.

However, even with immediately practical tasks like computing Nash equilibria of games, which as we noted in the last post is complete for a class called {\mathsf{PPAD}}, or pricing financial derivatives as discussed and shown hard here, complexity is playing the “bad cop,” policing attempts to find feasible exact algorithms where ostensibly there aren’t any. Ken’s colleagues Hung Ngo and Atri Rudra are developing a seminar to showcase complexity as the “good cop,” enabling applications. The rest of this post is by Atri Rudra.

An Early Example

The earliest example I, Atri, know is the now-famous pattern matching algorithm by Donald Knuth, James Morris, and Vaughan Pratt, which is included in Blelloch’s course. The point is that automata theory was used to design a linear-time pattern matching algorithm. The following quote is from the “Historical Remarks” section of the original paper:

“This was the first time in Knuth’s experience that automata theory had taught him how to solve a real programming problem better than he could solve it before.”

This whole section is worth reading to see how Knuth’s version of the algorithm was implicit in Stephen Cook’s linear time simulation of deterministic two-way pushdown automata (see this reference), which is clearly a complexity result. What could a “deterministic two-way pushdown automaton” have to do with a practical algorithm? Plenty.
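
To make the payoff concrete, here is a minimal sketch in Python of the resulting idea—my own rendering, not code from the paper: precompute a failure table from the pattern (in effect, the transitions of a small automaton) and then scan the text without ever backing up in it, which is exactly where the linear running time comes from.

    # Sketch of the Knuth-Morris-Pratt idea: fail[i] is the length of the
    # longest proper border of pattern[:i+1]; the scan never moves backwards
    # in the text, so the total work is O(|text| + |pattern|).
    def failure(pattern):
        fail = [0] * len(pattern)
        k = 0
        for i in range(1, len(pattern)):
            while k > 0 and pattern[i] != pattern[k]:
                k = fail[k - 1]
            if pattern[i] == pattern[k]:
                k += 1
            fail[i] = k
        return fail

    def kmp_search(text, pattern):
        # Return the starting indices of all occurrences of a non-empty pattern.
        fail = failure(pattern)
        hits, k = [], 0
        for i, c in enumerate(text):
            while k > 0 and c != pattern[k]:
                k = fail[k - 1]
            if c == pattern[k]:
                k += 1
            if k == len(pattern):
                hits.append(i - k + 1)
                k = fail[k - 1]
        return hits

    print(kmp_search("abracadabra", "abra"))  # [0, 7]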

Automata theory in itself is a treasure-trove of practical applications. For instance, the ubiquitous regular expressions and parsers in compilers, as they exist today, would not be around without automata theory.

Making Lower Bounds Your Friend

Probably the best known use of complexity as an enabler is to change the rules of the game: use hardness results to foil the adversary. Cryptography has done this to great effect. There are other works in a similar spirit; for example, there has recently been work on designing elections that cannot be manipulated under complexity assumptions: see the survey by Piotr Faliszewski, Edith Hemaspaandra, and Lane Hemaspaandra.

For the rest of the post, I’ll go back to the good cop role of complexity. A small bit of warning though: when I talk about practical applications below, I will not be talking about results that can find themselves in the complexity equivalent of the Algorithms in The Field Workshop—rather I will be content to talk about applications where one uses complexity to solve problems that have a clear and direct practical motivation. Given that I have ruled out cryptography, I hope you will allow me this luxury.

Applications of Complexity-Based Tools

Complexity theory has developed many tools that can and should find applications in practical areas where, on the surface, the two do not seem to have anything in common. Below the surface are beautiful connections that are being exploited and can be developed further. Let me try to substantiate this with the following examples:

{\bullet} Unbalanced bipartite expanders. An expander is a sparse graph in which any subset of vertices that isn’t too big has many edges leaving it, or alternatively has many neighboring vertices. These objects have many applications both in complexity theory and in other practical areas—see for instance this great survey by Shlomo Hoory, Nathan Linial, and Avi Wigderson. I picked unbalanced bipartite expanders for my seminar, as they have been developed almost solely by complexity theorists. Unbalanced means that one side of the bipartite graph has many more vertices than the other. These objects have been used to construct compressed sensing matrices, as noted in this survey by Anna Gilbert and Piotr Indyk.

The particular version of the compressed sensing problem is to design a matrix {\Phi} that is short and fat such that, for any real-valued vector {x}, given {\Phi x} one can recover a vector {\hat{x}} such that

\displaystyle \|x-\hat{x}\|_1 \le C\cdot\|x-x_k\|_1.

Here {x_k} is the vector obtained from {x} by zeroing out all but its {k} largest components, and {C\ge 1} is the approximation factor. When {\Phi} is the adjacency matrix of a lossless unbalanced bipartite expander, one can obtain {C} arbitrarily close to {1}, and one can efficiently reconstruct {\hat{x}} from {\Phi x}. A general resource page on compressed sensing shows other applications.
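
For concreteness, the standard notion behind “lossless” here (my paraphrase, not a quote from the survey) is that a bipartite graph with {N} left vertices, {M \ll N} right vertices, and left-degree {d} is a {(k,\epsilon)}-lossless expander if every left set {S} with {|S|\le k} satisfies

\displaystyle |\Gamma(S)| \ge (1-\epsilon)\, d\, |S|,

and {\Phi} is then taken to be its {M\times N} adjacency matrix, with one row per right vertex and one column per left vertex.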

{\bullet} Randomness extractors. Pseudo-randomness provides another great place to look for applications besides expanders, namely extractors, which refine the output of partially random sources into nearly uniform bits. Indeed, the best known construction of extractors, due to Venkat Guruswami, Chris Umans, and Salil Vadhan, uses unbalanced bipartite expanders. Michael Mitzenmacher and Vadhan also used these techniques to explain why hash functions with only limited independence seem to work as well in practical applications as theory predicts for fully independent hash functions: the data itself often has enough inherent partial randomness.
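
As a tiny illustration of the kind of “weak” hash family involved, here is a standard pairwise-independent construction in Python; the prime and table size below are my own illustrative choices, not parameters from the Mitzenmacher-Vadhan paper.

    # A pairwise-independent (2-universal) family h_{a,b}(x) = ((a*x + b) mod p) mod m.
    # With data that carries enough entropy of its own, such a family behaves in
    # practice much like a fully random hash function.
    import random

    P = (1 << 61) - 1   # a Mersenne prime larger than any key we will hash

    def make_hash(m):
        # Draw one function h: [P] -> [m] from the family.
        a = random.randrange(1, P)
        b = random.randrange(0, P)
        return lambda x: ((a * x + b) % P) % m

    h = make_hash(1024)
    keys = random.sample(range(10**9), 5000)   # stand-in for "real" data with entropy
    load = [0] * 1024
    for x in keys:
        load[h(x)] += 1
    print("max bucket load:", max(load))       # comparable to a truly random function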

{\bullet} List decoding. Oded Goldreich and Leonid Levin found the first applications of list decoding in complexity theory itself, and this opened the way to much algorithmic progress in list decoding being made by complexity theorists, as surveyed here. List decoding has the potential to correct more errors in practice than traditional decoding, as hailed in this August 2004 NSF item.

List decoding has also been applied to IP traceback of denial-of-service attacks, to games of guessing secrets, and to verifying truthful storage of client data. The last paper also applies Kolmogorov complexity—disclaimer: this paper is joint work with (my colleague Ram Sridhar’s student) Mohammad “Ifte” Husain, my colleague Steve Ko, and my student Steve Uurtamo.

{\bullet} Locally decodable codes. These are codes where one can compute an arbitrary message symbol, with high probability, by querying a very small number of (random) positions in a corrupted codeword. Complexity theorists can take complete credit for this coding primitive. Along with locally testable codes, these coding objects came out of the work on probabilistically checkable proofs (PCPs). Unlike for list decoding, the theoretical limits of locally decodable codes are still not known, though there has been some recent progress starting with this breakthrough work of Sergey Yekhanin. Typically these codes are studied in the regime of a constant fraction of errors, where the amount of redundancy needed in the code has to be super-linear. Recently, locally decodable codes for a single error have been studied by Yekhanin with Parikshit Gopalan, Cheng Huang, and Huseyin Simitci. This work is tailored for distributed remote storage systems where the number of errors is small and local decoding translates into lower communication between servers when recreating a server that is down.
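
To see what “local decoding” buys in its simplest form, here is a toy Python sketch using the classic 2-query Hadamard code—a textbook example, not the codes from the storage work just mentioned: each message bit can be recovered from a mildly corrupted codeword by reading only two random positions per trial.

    # Hadamard code: the codeword lists <m, x> mod 2 for every x in {0,1}^k.
    # To decode bit i, read positions x and x XOR e_i and add them mod 2;
    # if a delta fraction of the codeword is corrupted, each trial is correct
    # with probability at least 1 - 2*delta, so a small majority vote suffices.
    import random

    def hadamard_encode(m):
        k = len(m)
        return [sum(m[j] & (x >> j) for j in range(k)) % 2 for x in range(1 << k)]

    def local_decode_bit(word, k, i, trials=31):
        votes = 0
        for _ in range(trials):
            x = random.randrange(1 << k)
            votes += word[x] ^ word[x ^ (1 << i)]
        return int(votes > trials // 2)

    m = [1, 0, 1, 1]
    word = hadamard_encode(m)
    for pos in random.sample(range(len(word)), 2):   # corrupt two positions
        word[pos] ^= 1
    print([local_decode_bit(word, len(m), i) for i in range(len(m))])  # very likely [1, 0, 1, 1]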

{\bullet} Superconcentrators. Superconcentrators are sparse graphs with designated input and output nodes in which, for every {k}, any {k} inputs can be connected to any {k} outputs by {k} node-disjoint paths. (One can construct superconcentrators using expanders.) These objects were originally studied by Leslie Valiant and others in the context of complexity lower bounds. This work was also inspired by connections to switching networks, namely routing boxes that switch traffic from one fiber to another; for work showing that the connections go both ways, see here and here and here. See also this post on a problem about routing network traffic on expanders.

Models

Complexity theory studies computation through the lens of various machine and program models. I believe that these models themselves can be a valuable export to practice. Probably the most widely exported model is that of communication complexity, which has found applications in diverse areas such as data stream algorithms and game theory. However, these are again used to prove lower bounds, on the bad-cop’s beat. Below are some examples I am aware of that prove positive results:

{\bullet} Natural Computational Models. The idea here is to model natural phenomena using computational ideas, and then to use complexity tools to prove interesting results. One recent example is Valiant’s new research on quantitative aspects of evolution. This and follow-up work bring in tools developed in computational learning theory. Another example is this work by Nikhil Devanur and Lance Fortnow developing a computational model of awareness and decision making, which uses Leonid Levin’s foundational complexity concept of universal enumeration.

{\bullet} Restricted Models. The idea is to take restricted computational models that have already been studied in complexity, and adapt them to model practical applications where upper bounds have practical value. For one of probably many examples, James Aspnes, Eric Blais, and Ryan O’Donnell have a project with me, my colleague Murat Demirbas, and my student Uurtamo that adapts decision trees to model delicate issues in single-hop wireless sensor networks. The conventional wisdom in wireless networks is that packet collisions are bad.

We created a model in which collisions become a cost akin to decision-tree queries as wireless devices detect them, thus measuring the efficiency of computing certain functions of data distributed over sensor nodes. Our first paper characterized the complexity of different functions in this new model. Then we ran experiments and showed in a follow-up paper that our algorithms improve over existing ones. So collisions are bad, but not always; sometimes you can make “collision-ade” out of them.
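
Here is a toy Python illustration of the flavor of this idea—an illustration in the spirit of the work, not the actual model from our papers: the only feedback the base station gets from a slot is silence, a single packet, or a collision, and even that coarse signal computes the OR of all the nodes’ bits in a single “query.”

    # Toy single-hop model: nodes holding a 1 transmit in a slot, and the
    # base station hears only SILENCE, SINGLE, or COLLISION.
    SILENCE, SINGLE, COLLISION = range(3)

    def slot(num_transmitters):
        if num_transmitters == 0:
            return SILENCE
        return SINGLE if num_transmitters == 1 else COLLISION

    def network_or(bits):
        # One slot suffices: the OR is 0 exactly when the channel stays silent.
        return 0 if slot(sum(bits)) == SILENCE else 1

    print(network_or([0, 0, 1, 0, 1]))  # 1, learned from a single slot
    print(network_or([0, 0, 0]))        # 0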

{\bullet} PCP was a Model. The PCP Theorem has been so important in hardness results that we seem to have forgotten that it originated in the positive-minded model of probabilistically checkable proofs. Madhu Sudan’s recent long survey restores attention to the PCP Theorem as a potential enabler of applications. Until the challenge of fully realizing them is met, however, complexity theorists may have to regard this as a proverbial “fish that got away” with its own success.

There is some encouraging recent news on this front. Eli Ben-Sasson and Eran Tromer are leading a project with Ohad Barta, Alessandro Chiesa, Daniel Genkin, and Arnon Yogev, funded by the European Community, that is aimed at implementing PCPs for practical applications in security and verifiable computation. The project is still in its early stages, but hopefully it will allow us to think of PCPs as the fish that almost got away.

Open Problems

What other categories of complexity as an enabler can we collect? What are other applications of the above examples?

Ken and I (Dick) also wonder whether complexity theory got carried away with its emphasis on classes—per our last post—and did not give enough attention to problems. Note that this parallels Guy Blelloch’s point insofar as most complexity classes were originally defined in terms of machines, rather than emerging directly from features of problems. Can complexity profit from such a shift in focus?

Ken and I, finally, thank Atri for this contribution. We invite others who have something they wish to say to contact us for a potential “guest” post. We do not pay much to the author of the post—actually about $2.56 less than Knuth would—but we hope that some of you may wish to do one.

6 Comments
  1. September 9, 2011 9:30 am

    Parallel computing turns out to be a natural way of constructing fast serial algorithms for parametric search problems, as was discovered by Megiddo. (In parametric search you take a problem like shortest path, but instead of constant lengths you have lengths which are affine in some parameter T.) Somehow each parallel step enables the serial algorithm to make a “batch” out of several steps. Here is a link to a more detailed description: http://daveagp.wordpress.com/2011/07/01/parametric-search/

  2. September 10, 2011 7:04 pm

    Didn’t complexity theory give us SAT, the queen of NP-complete problems? While NP-completeness seems like a “bad cop” aspect of complexity at first, today many NP problems are solved in practice by reducing them to SAT. Integer Linear Programming plays a similar role.

    • atri
      September 11, 2011 9:13 pm

      Hi Amir,

      Great examples.

      When I listed the examples, I was thinking about my planned seminar with Hung, where we’re looking for specific applications, preferably with theorems, so I’m probably missing other relevant examples like yours.

      BTW integer linear programming is covered in the Algorithms in the Real World course.

  3. September 12, 2011 11:42 am

    When we view thermal noise as algorithmically incompressible complexity, we can appreciate one of the many great lessons owed to quantum information theory: the intuition that thermal noise is an enabler of efficient dynamical simulation.

    That thermal noise (and its accompanying dissipation) makes simulation exponentially easier is true both classically (e.g., von Neumann viscosity) and quantum mechanically (see, e.g., Plenio and Virmani’s arXiv:0810.4340). Moreover this principle is robust, in the sense that it is both formally provable for idealized dynamical systems and works well empirically for dynamical systems having real-world complexity.

    Systems engineers especially are exceedingly fond of this ubiquitous complexity-enabling principle, since real-world systems invariably are bathed in thermal noise. As David Deutsch writes entertainingly in his new book The Beginning of Infinity: Explanations That Transform the World:

    It may well be that the interiors of refrigerators constructed by physicists are by far the coldest and darkest places in the universe. [These refrigerators are] Far from typical.

    Broadly conceived, systems engineering seeks to predict and control the noisy universe that is outside of the physicists’ ultra-cold ultra-dark refrigerators … and recent advances in QIT now have given us fundamental reason to hope that the predictive aspect of this goal is efficiently achievable.

    A key point is that the complexity-theoretic noise-enabling principle can be exploited both forwards and backwards, in the sense that separative transport mechanisms that increase entropy globally can be reversed to create separative transport engines that diminish entropy locally; a classic textbook that entertainingly surveys this branch of engineering is J. Calvin Giddings’ Unified Separation Science (1991).

