
Constructive Sets

July 16, 2014


An approach to complexity lower bounds?


Kurt Gödel did it all, succinctly. His famous 1938 paper “The Consistency of the Axiom of Choice and of the Generalized Continuum-Hypothesis” was {1\frac{1}{3}} pages long. Since the Proceedings of the National Academy of Sciences was printed on single-column pages in fairly large type, it would have been under one page in FOCS or STOC format. Only Leonid Levin has famously been that succinct.

Today Ken and I wish to advance another approach on complexity questions based on succinctness and Gödel’s brilliant idea. We have covered succinctness before, but now our angle is more arithmetical.

What Gödel showed is that if ZF is consistent then so is ZF + AC, where AC is the axiom of choice. He threw in GCH, the generalized continuum hypothesis, and two statements about analytic sets, but AC drew the most attention. Many were and remain worried about the Axiom of Choice. Gödel showed that AC is safe to use in the sense that a result employing it cannot reach a contradiction in set theory, unless set theory already is inconsistent without AC.

The proof method Gödel used is based on a notion of a set being constructible. Roughly, very roughly, a constructible set is one that can be built up inductively by simple operations. He then proved that if ZF has any model, then these sets are embedded inside it and form a model by themselves, in which AC, GCH, and the other two statements also hold.

Well, he didn’t exactly prove it. His paper begins with “THEOREM” but there is no “Proof” anywhere. Once he states his inductive definition, the rest is left to the reader as evident. Nor did Gödel feel a need to give us any more detail when he laid out his entire version of set theory on one little porta-board during the interview we published last November—see the “One Choice” section toward the end. When a definition accomplishes a proof by itself, that’s brilliance.

Constructible Sets

The ingredients of Gödel’s construction are ordinals {\alpha} and first-order formulas {\phi}. Once you know these ingredients, the definition is easy to figure out. For {\alpha = 0}, {L_\alpha = \emptyset}. Then a set {X} belongs to {L_{\alpha + 1}} if and only if there is a first-order formula {\phi} with one free variable {y}, and with quantified variables and constants ranging over {L_{\alpha}}, such that

\displaystyle  X = \{y \in L_{\alpha} : \phi(y)\}.

And of course, if {\beta} is a limit ordinal, then

\displaystyle  L_\beta = \bigcup_{\alpha < \beta} L_\alpha.

This definition is so natural and nice that it does not appear in Gödel’s paper. It does not need to. The idea is clear enough. That is true brilliance. To be fair, the above undoubtedly came up in interaction with John von Neumann and others in the 1930s. Note, incidentally, that when {\alpha = 0} we get {X = \emptyset}, which gives {L_1 = \{\emptyset\}}, which is not the same as {L_0}. Thus Gödel’s sequence gets off the ground.
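For intuition, the finite stages can be computed outright. At a finite level every subset of the finite set {L_\alpha} is definable by a formula with parameters (a finite disjunction of equalities {y = a}), so {L_{\alpha+1}} is simply the full power set of {L_\alpha}. Here is a minimal sketch of ours, not anything from Gödel's paper:

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of a finite set, as frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))}

def L(alpha):
    """Finite stages of the constructible hierarchy.  At finite
    levels every subset of L_alpha is first-order definable with
    parameters, so L_{alpha+1} = P(L_alpha)."""
    level = frozenset()            # L_0 is the empty set
    for _ in range(alpha):
        level = frozenset(powerset(level))
    return level

# L_1 = {0} and L_2 = {0, 1}, writing 0 = {} and 1 = {{}} --
# the sequence "gets off the ground" exactly as noted above.
assert L(0) == frozenset()
assert L(1) == frozenset({frozenset()})
assert len(L(2)) == 2
```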

Today the notion of constructible sets is a huge part of logic and set theory. Ken and I ask, is there some way to make this notion useful for complexity theory? This involves a third ingredient: a set can serve as an element, even as a number. Thus {\emptyset = 0}, {\{\emptyset\} = 1}, and {2} can be represented by {\{\emptyset,\{\emptyset\}\}} or by various other 2-element sets. There are also various ways to represent binary strings, ways that are naturally more compact than the representation {[n] = \{n-1\} \cup [n-1]} for numbers, but we will try to treat them all as numbers.

Constructive Integers, and Sets of Them

Our idea is to define a constructible number {x}, and then look at sets of such numbers. To distinguish from Gödel we now say “constructive.”

Let {x} be an integer of {n} bits, where {n = 2^k} for some {k}. In a natural way we can view {x} as a Boolean function from {\{0,1\}^{k}} to {\{0,1\}}, where the value {x(i)} is {x_{i}}. Say {x} is {s(n)}-constructive provided the Boolean function {x} has a circuit of size at most {s(n)}.

Clearly, this is only interesting if {s(n)} grows much slower than {n}, such as {s(n)} being polynomial—or at least sub-exponential—in {k}. We will discuss later trying to imitate Gödel’s definition further by using logical formulas in place of circuits. We keep it simple with circuits and numbers for now. Note that {k} increases only when {n} doubles, so if {s(n)} is really a function of {k}, then we get nested finite sets {{\cal L}_k}. The following definitions are asymptotic, but have this concrete basis.
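To make the correspondence concrete, here is a small sketch of ours. We assume a little-endian bit order—which, as a commenter notes below, is itself a choice among the “various ways” of encoding:

```python
def bit(x, i):
    """View the n-bit integer x (n = 2**k) as a Boolean function:
    bit(x, i) is its value on the index i in {0,1}^k, with i read
    little-endian as an integer (the bit order is our assumption)."""
    return (x >> i) & 1

# n = 8 (k = 3): for x = 0b10101010, bit i of x is just the parity
# of i, so this x is computed by reading one wire of the k-bit
# index -- x is O(1)-constructive.
k = 3
n = 1 << k
x = 0b10101010
assert all(bit(x, i) == (i & 1) for i in range(n))
```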

Definition 1 A set {A} of natural numbers is {s(n)}-constructive provided each {x} in {A} is {s(|x|)}-constructive.

Definition 2 A set {A} is poly-constructive provided there is a constant {c} so that each {x} in {A} is {s(n)}-constructive, where {n = |x|} and {s(n)} is bounded by {O(k^c) = O(\log^{c} n)}.

Thus a set {A} is poly-constructive if the members of the set each have a small circuit description. It does not require that {A} has a uniform circuit description, nor that the elements of {A} have circuits that are related to each other in any manner.

Let the universe of all poly-constructive sets be denoted by {\cal {PC}}.

Set Properties Of {\cal {PC}}

Let {A} and {B} be sets in {\cal {PC}}. Then we claim that the following are true:

{\bullet } The sets {A \cup B} and {A \cap B} are in {\cal {PC}}.

{\bullet } The set {A - B} is in {\cal {PC}}.

Thus the sets in {\cal {PC}} form a Boolean algebra. Even more, suppose that {S} is a subset of {A}, which is a set in {\cal {PC}}. Then {S} is also in {\cal {PC}}. This is just because for each {k}, we are talking about subsets of {{\cal L}_k}.

Next, let us consider bitwise operations. Regarding numbers {x,y} as {n}-bit strings, let {z = x \oplus y} be their bitwise-XOR. Now if {x,y} have circuits of size {s(n)} then {z} may need size {2s(n) + O(1)} including an XOR gate or gadget at the output. But asymptotically we still have poly-constructiveness:

{\bullet} The set {A \oplus B = \{x \oplus y: x \in A \,\wedge\, y \in B\}} is in {\cal {PC}}.
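The size bound behind this bullet can be sketched by modeling a circuit abstractly as a pair of an evaluator and a gate count—a toy representation of ours, not a real circuit data structure:

```python
# A "circuit" here is a pair (f, size): f maps a k-bit index to a
# bit, and size is the gate count.  This representation is our own
# illustration.

def xor_combine(cx, cy):
    """Circuit for z = x XOR y from circuits for x and y: the two
    sub-circuits plus one XOR gate at the output, giving size
    2s + O(1) as in the text."""
    fx, sx = cx
    fy, sy = cy
    return (lambda i: fx(i) ^ fy(i), sx + sy + 1)

# Two toy one-gate circuits with k = 3: x(i) = low bit of the
# index, y(i) = high bit of the index.
cx = (lambda i: i & 1, 1)
cy = (lambda i: (i >> 2) & 1, 1)
f, size = xor_combine(cx, cy)
assert size == 3                 # 1 + 1 + 1 gates
assert f(0b101) == 0             # low bit 1, high bit 1 -> XOR is 0
```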

Additive and Multiplicative Closure?

Now we turn to addition: {A + B = \{x + y: x \in A \wedge y \in B\}}. Note that unlike bitwise XOR, addition can increase the length of numbers by one, though this is not as important as with multiplication whereby lengths can double. The main issue is that the bits {z(i)} can depend on non-local parts of {x} and {y} via carries.

In the descriptive logic approach to complexity this is no problem, because the Boolean function {z(i)} can be defined by a first-order formula {\phi} in terms of the small indices {i,j,\dots} and fixed predicates denoting {x} and {y}. The formula is independent of {n}, so it is a single finite characterization of a set over all lengths {n}.

The problem is that the circuitry for simulating a first-order quantifier {\forall i} ranging over most of the indices {i} has size proportional to {n}, not to {k} or poly in {k}. So it might not stay succinct. We believe, however, that we can simulate it in poly({k}) size if parity counting can be done in polynomial time:

Theorem 3 (?) If {\oplus \mathsf{P} = \mathsf{P}}, then the universe of poly-constructive sets is closed under addition.

Contrapositively, if {{\cal{PC}}} is not closed under addition, then {\oplus \mathsf{P} \neq \mathsf{P}}, which is close to {\mathsf{NP} \neq \mathsf{P}}. Our idea for a proof is related to parity-based prediction in carry-lookahead adders (see this for instance).
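The carry dependence can be made explicit. Writing {g_j = x_j \wedge y_j} (carry generated) and {p_j = x_j \oplus y_j} (carry propagated), bit {i} of {x+y} is {p_i \oplus c_i}, where the carry {c_i} ORs a term over every lower position. The sketch below is our own illustration of this standard carry-lookahead identity, showing the non-locality concretely:

```python
def add_bit(x, y, i):
    """Bit i of x + y via generate/propagate carries:
    g_j = x_j AND y_j, p_j = x_j XOR y_j, and the carry into
    position i is c_i = OR over j < i of (g_j AND p_{j+1} AND
    ... AND p_{i-1}).  Bit i thus depends on ALL lower positions
    -- the non-locality discussed above."""
    g = [((x >> j) & 1) & ((y >> j) & 1) for j in range(i + 1)]
    p = [((x >> j) & 1) ^ ((y >> j) & 1) for j in range(i + 1)]
    c = 0
    for j in range(i):
        term = g[j]
        for m in range(j + 1, i):
            term &= p[m]
        c |= term
    return p[i] ^ c

x, y = 0b0110, 0b0011            # 6 + 3 = 9 = 0b1001
assert all(add_bit(x, y, i) == ((x + y) >> i) & 1 for i in range(5))
```

The question in Theorem 3 is whether this OR-of-ANDs over all {n} lower positions can be compressed to poly({k}) size under the stated hypothesis.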

Closure under multiplication, {A * B = \{x\cdot y: x \in A \,\wedge\, y \in B\}}, is even thornier. We wonder if the correspondence in descriptive complexity between multiplication and {\mathsf{TC}^0}, which is kind of a scaled-down analogue of {\mathsf{PP}} (unbounded error probabilistic polynomial time), carries through here:

Theorem 4 (??) If {\mathsf{PP} = \mathsf{P}}, then the universe of poly-constructive sets is closed under multiplication.

We suspect this but do not have a proof idea. Since {{\cal L}_k * {\cal L}_k} is contained in length-{2^{k+1}} strings, we can also ask concretely whether for appropriately-chosen {s(n)} it is contained in {{\cal L}_{k+1}}.

A Meta-Conjecture

What we actually want to establish is this:

Theorem 5 (???) There is a natural numerical operation {\circ} such that the universe of poly-constructive sets is closed under {\circ} (that is, on going from {A,B} to {A \circ B = \{x \circ y: x \in A \,\wedge\, y \in B\}}) if and only if {\mathsf{P}=\mathsf{NP}}.

A result like this would finally bring our Gödel-inspired idea to full force as a potentially useful re-scaling of the issues involved in {\mathsf{P=NP}}.

We can also consider using the intuitively looser condition of having a first-order formula to define members {z} of {A \circ B} in terms of members {x} of {A} and {y} of {B}, which immediately works for addition. This is close in form to Gödel’s definition itself. We suspect that this leads to involvement with the rudimentary relations and functions, which were introduced by Raymond Smullyan and studied in complexity theory by Celia Wrathall and others. Again it may yield an interesting re-scaling of the linear-time hierarchies involved.

Open Problems

Is the notion of {\cal {PC}} interesting? Can we prove the above theorems?

10 Comments
  1. Bonnie
    July 16, 2014 10:34 pm

    There is always one question that puzzles me about this blog: who is Pip? Seriously, I really don’t have any idea who that is.

    • July 17, 2014 7:57 am

      It’s under “About”: Pip is Charles Dickens’s narrator in Great Expectations, and this is Dick-Ken’s blog. A “pip” is like a “bit” but different, so it works for CS. WordPress seems not to allow listing multiple authors at the top, so we created the separate handle. Thanks for the interest :-).

  2. July 17, 2014 1:02 am

    I think Theorem 3 (and the later ones), or at least their proofs, might even depend on your “natural way” of encoding Boolean functions as numbers. I mean, e.g., a number and its reverse serve just as well to encode the same function, but addition is a different operation for them. Of course for both {\oplus \mathsf{P} = \mathsf{P}} might be sufficient.

    ps. What do you have against calling a conjecture a conjecture instead of a theorem with a question mark? I guess more ? means less certain.

    • July 17, 2014 8:09 am

      This could be a post topic in itself: degree of confidence in ideas. Some posts are on things we’ve researched for some time, and then we feel confident we’re not missing an easy answer and can say “conjecture” if we wish. In this case it’s been just a week, and I’m coming off travel and occupied with developments in the chess world. Plus our dog is having surgery today. We thought the addition case was a theorem, but setting up the predicates for parity based on carry-lookahead still seemed to involve the same issue as we note for first-order formulas. Noting the nearness to descriptive complexity and to the rudimentary relations was our “due diligence”—but maybe someone will find a closer connection that resolves the status of the theorem(?) statements. In some ways that makes the post “fair game” for participation, which is what we desire.

  3. July 17, 2014 6:15 am

    This multi-scale approach to complexity theory incorporates attractive elements of Ken Wilson’s multi-scale approach to condensed matter theory; see (for example) Wilson’s lively exposition “Renormalization group methods: to Stanislaw Ulam on the occasion of his 65th birthday” (1975; a Google search finds it). Moreover, its explicit constructivism is compatible with the MathOverflow question

    Q  For which Millennium Problems does undecidable -> true?

    for which a GLL-compatible “burning arrow” answer is

    A  For all of the Millennium Problems … as we might have posed them.

    Thanks for (yet another) wonderfully enjoyable Gödel’s Lost Letter and P=NP essay.

  4. July 17, 2014 10:13 am

    I guess you really want this mystery operation to be numerical, right? If not, I could imagine an operation like {x \circ y = xy1} if x is a k-SAT formula and y encodes a satisfying assignment for x, and {x \circ y = xy0} otherwise. (Is there a polylog-sized circuit for checking if a string encodes a k-SAT formula?)

  5. July 17, 2014 4:32 pm

    {+} can serve as {\circ}; in other words, I think that being closed under addition is already equivalent to (the non-uniform version of) P=NP. On the one hand, if this holds, then also P=co-NP and we can simulate the first-order formulas by small circuits.

    On the other hand, let A be a singleton that consists of only the number 1, and let the circuits describing the members y of B be such that for every k they encode some CNF such that y(i)=1 if and only if i is NOT a satisfying assignment. Now y+1 is either a (k+1)-digit or a k-digit number, depending on whether the CNF is in SAT or not.

    Well, unfortunately I have cheated a bit in the previous proof, as B does not have to be given through these circuits. E.g., for every all-1s number we can have a simple circuit that outputs all 1s, and for every other number we can have the circuit encoding the CNF and another circuit encoding the smallest non-1 digit, making addition by 1 trivial.

    To fix this, we need to change the definitions of the sets. Set B would be defined by dividing k into two halves: the first half would describe a CNF of length k/2, and the second half would be its inputs. Set A would not be a singleton, but rather a set that contains, for every k, an x of length k such that x(i)=1 if and only if the last k/2 coordinates of i are 0s. Now from a poly-size description of x+y we could decide SAT membership for any CNF, which would imply P=NP.

