
Levin’s Great Discoveries

March 14, 2011


Levin’s discovery of universal problems and more

Leonid Levin is one of the creators of the central question of computer science—the {\mathsf{P}} versus {\mathsf{NP}} question. He published this work back in 1972–1973.

Today I want to discuss Levin’s original paper.

The reason I thought it would be interesting to look at his original paper is that I believe many have not actually studied it. I could be wrong, but my informal poll, with a tiny sample, suggests that this is the case. While there are beautiful modern treatments of all that is contained in his paper, I think we can learn quite a bit from seeing the original masterpiece.

A book treatment of a topic, like the {\mathsf{P} \neq \mathsf{NP}} question, is likely to be perfect, completely clear, filled with details, and easy to follow. But the original papers often yield insights into how hard it was to get the ideas right. I have another prized reason for introducing his work for discussion today, but that will come in a future post.

The Original Paper

In his famous paper he studied universal problems. The paper is dated June 7, 1972, although it appeared in 1973. It has many beautiful ideas, several key definitions, six important examples, and not a single proof. I tried to get a copy of the paper in English in TeX, but the best I could do was a scanned version. Thanks to Farbod Shokrieh for helping to find even this version. It is from a survey paper by the eminent Boris Trakhtenbrot; note that in this version some of Levin's words are changed with a strikeout.

Levin’s Problems

The following are the six problems that Levin lists as his examples of universal search problems.

Problem 1. A list determines a finite set and a covering of that set by 500-element subsets. Find a subcovering having a prescribed cardinality (determine whether such a subcovering exists).

Problem 2. A table generates a partial Boolean function. Find a disjunctive normal form of the prescribed dimensions realizing that function in its domain (determine whether such a DNF exists).

Problem 3. Determine whether a given formula of the propositional calculus is deducible or refutable (or, equivalently, whether a given Boolean formula is equal to a constant).

Problem 4. Two graphs are given. Find a homomorphism of one onto the other (determine whether such a homomorphism exists).

Problem 5. Two graphs are given. Find an isomorphism of one into the other (onto part thereof).

Problem 6. Consider matrices composed of integers from 1 to 100 and a certain stipulation as to which integers can be vertically adjacent and which can be horizontally adjacent. When the outermost integers are given, continue over the entire matrix, observing the given stipulation.

Here are some comments on these problems.

  • Problem 1 is: Given {S_1,\dots,S_n}, each a 500-element set, find a subcollection {S_{k_1},\dots,S_{k_m}} so that

    \displaystyle  S_1 \cup \cdots \cup S_n = S_{k_1} \cup \cdots \cup S_{k_m}.

    He makes no comment on why he made the sets have cardinality {500}. A brute-force sketch of this problem appears just after this list.

  • Problem 2 is: Let {(x_k,y_k)} be pairs for {k=1,\dots,m} such that {x_k} is an {n}-bit number and {y_k} is a single bit. Is there a Boolean function {f} with a DNF of size {s} so that

    \displaystyle  f(x_k) = y_k,

    for all {k}? If so, find one. A brute-force sketch of this problem, too, follows the list.

  • Problem 3 is what we would call formula {\mathsf{SAT}}: is a Boolean formula satisfiable, or not? Of course we usually are strict in distinguishing between the problem and its complement, but this is essentially {\mathsf{SAT}}.
  • Problems 4 and 5 are graph homomorphism (onto) and subgraph isomorphism, respectively.
  • Problem 6 is quite interesting; it is essentially a finite tiling problem. Again Levin does not explain why there are 100 types of tiles. I assume that he had worked out that these were enough to encode a universal Turing machine.

Levin’s Theorems and Gems

Levin’s main theorem is:

Theorem: If there exists a search (quasi search) problem unsolvable in a time less than {f(n)} for an argument of length comparable with {n}, then Problems 1-6 also have this property.

This essentially shows that all the problems are the same in their deterministic computational cost—up to a polynomial. It is curious that he states the theorem in the negative: if any are hard, then all are hard. Perhaps that is a translation issue. Indeed the eminent Trakhtenbrot says: “Although Levin’s 1973 paper is laconic (as are Levin’s publications in general), the formulations are absolutely crisp in the Russian original. Unfortunately, the English translation is awkward and contains misrepresentations and confusing formulations.”

Levin does not give a conventional proof. Instead, he evidences his ability to prove it by giving a gem of a definition. Edited down from Trakhtenbrot’s version and rearranged, it reads:

Search problem {A(x,y)} reduces to {B(x',y')} if there are three polynomial-time algorithms {r(x)}, {p(y')}, and {s(y)} such that {A(x,y) \equiv B(r(x),s(y))} and {B(r(x),y') \equiv A(x,p(y'))}.

Here {r(x)} is what we know as a Karp-reduction from the language {L_A = \{x : (\exists y)\;A(x,y)\}} to the corresponding language {L_B}, and {s(y)} merely gives the transformed witness in a typical Karp-reduction proof. The distinctive part of Levin’s definition is the {p(y')} function. This says that any solution to the target search problem can be transformed back to get a solution to the source problem {A}. Some sources relax the definition to functions {p(x,y)} and {s(x,y)} that depend on {x}, but this does not change the essence. This is now called a Levin-reduction. These slides by Tal Kramer and Gillat Kol put Levin in the context of the whole 20th-century history of {\mathsf{NP}}-completeness.

Levin simply gave without proof a statement equivalent to saying the above six problems are complete under Levin reductions.

Lemma: Problems 1–6 are universal search problems.

Thus his great counterpart to Cook’s Theorem was “just a lemma.” His reductions give the extra information that {A} is equivalent to the sub-problem of {B} defined by the range of {r}. If {r} is onto {L_B}, then {B} is equivalent to {A} in turn.

Levin’s paper has one further gem, one that Trakhtenbrot claims was missed because of the excitement of the rest of the paper. This is Levin’s insight, his theorem 2:

Theorem: For any search problem, there exists an algorithm that solves it in time that is optimal up to multiplication by a constant and the addition of a number polynomial in the length of the input.

Levin gave no indication of how to do this, and Trakhtenbrot points out that Vladimir Sazonov eventually worked out a proof.

Some Quotes By Levin

Besides his original work on {\mathsf{P}} versus {\mathsf{NP}}, Levin has done many other wonderful things. His work on average complexity is very important, as are his many other papers on various areas of complexity.

His homepage includes his list of publications, but also a number of essays and position papers. To start, he is not a fan of quantum computing. In person he has told me so, many times. Here is a quote of his:

Archimedes made a great discovery that digital representation of numbers is exponentially more efficient than analog ones (sand pile sizes). Many subsequent analog devices yielded unimpressive results. It is not clear why QC’s should be an exception.

By QC’s he refers, of course, to quantum computers. This does not sound positive to my ears.

Here is another quote from a paper on incompleteness theorems:

Gödel’s Incompleteness Theorem leaves open a way around it, vaguely perceived for a long time but not clearly identified. (Thus, Gödel believed informal arguments can answer any math question.)

I would like to discuss this paper in the future, since it is related to a problem that a few of us are thinking about right now.

Open Problems

To me, one of the continuing mysteries of mathematics and theory is why results are often discovered independently at almost the same time. This seems to happen repeatedly, here with Cook and Levin, and I still have no completely satisfying explanation for why it happens.

47 Comments
  1. David
    March 14, 2011 8:23 am

    Why didn’t Levin get the Turing Award, or even any other major prize?

  2. March 14, 2011 10:53 am

    The open problem may be solved in this way: “when the time is ripe for certain things, these things appear in different places in the manner of violets coming to light in early spring” (Wolfgang Bolyai to his son Johann, urging him to claim the invention of non-Euclidean geometry without delay [Herbert Meschkowski, Noneuclidean Geometry, Academic Press, New York, 1964, p. 33]).

    This is cited from An Introduction to Kolmogorov Complexity and Its Applications.

    • March 14, 2011 10:55 am

      An Introduction to Kolmogorov Complexity and Its Applications is by Ming Li (with Paul Vitányi); Li is at the Department of Computer Science of the University of Waterloo.

  3. lucas
    March 14, 2011 11:00 am

    A related question: both Cook and Karp received a well-deserved Turing award – the “Nobel of Computing” – for their groundbreaking contributions to NP-completeness. When a great discovery is made by several people independently, the same prize usually goes to all independent discoverers. Yet Levin did not receive the big prize. I guess most computer scientists recognize him anyway as one of the great contributors to theoretical CS, with or without a prize.

  4. History
    March 14, 2011 12:19 pm

    “When a great discovery is made by several people independently the same prize usually goes to all independent discoverers. ”

    Not really.

    People like to claim that time of publication is used for co-discovery claims. Because of that Stallings, who was a few weeks late with his proof of Poincaré’s conjecture for high dimensions, got no credit.

    Except that using that measure Levin was a year late and his claim should be disallowed (or should it?).

    On the other hand Erdos co-proved the prime number theorem simultaneously, with the proof appearing in the same issue of the journal, and was passed over for the Fields Medal.

    If you look at the actual record, there is no established practice. Sometimes credit is shared, sometimes it is not.

    The rule seems to be: we use publication date except when we don’t.

    • March 14, 2011 8:41 pm

      Different approaches to the same problem should be allowed since they add to our knowledge about the statement that is proven.

      However, I am confident that not only publication dates but also later contributions and “adopting” a problem or a field are important (rather than making a single discovery, unless we are talking breakthroughs).

      I believe Levin should have shared the Turing award, but I don’t know why he did not. Perhaps it was the politics of the time? I am way too young to know, but I hope that researchers active back then can enlighten us.

    • Anonymous
      March 14, 2011 8:47 pm

      Because of that Stallings, who was a few weeks late with his proof of Poincaré’s conjecture for high dimensions, got no credit.

      This is a little misleading, since Stallings knew about Smale’s proof at the time, although he had not yet read the details. That’s an enormously different psychological position than knowing nothing, and there’s no reason to think Stallings would have completed his proof nearly as quickly (or possibly at all) if he hadn’t had the confidence of knowing it could definitely be done.

      By contrast, Levin knew nothing about Cook’s work when he wrote his paper. If he had known about it, then his accomplishment would have been substantially less impressive, even if he had proved everything from scratch after merely hearing about Cook’s achievement.

      On the other hand Erdos co-proved the prime number theorem simultaneously, with the proof appearing in the same issue of the journal, and was passed over for the Fields Medal.

      This is another tricky situation. Selberg definitely deserved 95% of the credit for the proof, although Erdos contributed an important step. Basically, Selberg told Turan he could almost give an elementary proof of the prime number theorem, except for one lemma he was stuck on, and he showed him the proof and the missing lemma. Turan told Erdos (without Selberg’s permission) and Erdos quickly proved the missing lemma and announced to the world that he and Selberg had jointly proved the theorem. Meanwhile, Selberg found a different proof of the lemma without Erdos. The facts are undisputed: Erdos was the first to complete the proof, almost all of the overall proof was due to Selberg, and Selberg would probably have completed the proof himself if Turan hadn’t spilled the beans to Erdos.

      This led to a bitter dispute. Selberg proposed two separate papers, one in which Erdos proved the lemma and a second in which he explained how it implied the prime number theorem, and he offered to delay publication of his paper until Erdos’s had appeared. Erdos rejected this option and insisted instead on a single, jointly authored paper. Many mathematicians of the time (such as Weyl) thought Erdos was being unreasonable, and I agree with them. In the end, they each wrote separate papers giving full proofs, Erdos in PNAS and Selberg in the Annals of Mathematics.

      As for the Fields medal, it would have been nice if Erdos had received a Fields medal, and he certainly deserved it, but I don’t think it was ever plausible. Among the things he did before he was 40, some of the ones that seem most important in hindsight (like applying the probabilistic method to bound Ramsey numbers) were not fully appreciated at the time, and his work wasn’t really considered fashionable or central to pure mathematics. By contrast, Selberg was an equally brilliant but more mainstream choice, so unfortunately I think Erdos never really had a chance.

      • History
        March 15, 2011 3:32 pm

        “That’s an enormously different psychological position than knowing nothing, and there’s no reason to think Stallings would have completed his proof nearly as quickly (or possibly at all) if he hadn’t had the confidence of knowing it could definitely be done.”

        I think there’s quite a bit of hyperbole there.

        “This is another tricky situation. Selberg definitely deserved 95% of the credit for the proof, although Erdos contributed an important step. ”

        This was the subject of a series of recent articles in the math press, and neither side described the situation quite like that. In fact several contemporary mathematicians said that joint authorship was called for.

        We are straying from the original subject, but certainly not giving credit to Leo Levin for a result published a year later is closer to standard practice than it would be doing so.

      • Anonymous
        March 15, 2011 11:29 pm

        I think there’s quite a bit of hyperbole there.

        I disagree – knowing that something has been done, and even having a rough idea of how, makes a huge difference. Smale’s article “The story of the higher dimensional Poincaré conjecture (what actually happened on the beaches of Rio)” quotes Stallings’ write-up:

        “When I heard that Smale had proved the Generalized Poincare Conjecture for manifolds of dimension 5 or more, I began to look for a proof of this fact myself.”

        (And Kirby’s review of Batterson’s biography of Smale says that Stallings had furthermore read Smale’s proof sketch in Bulletin of the AMS, although I can’t say how reliable this is.)

        This can’t be considered fully independent. It definitely demonstrates that Stallings was smart enough to find his own proof based on limited hints, but that’s not the issue. To share in the credit, it’s not enough to say “Hey, I think I could prove that myself” and then give your own proof after hearing about someone else’s proof.

        In fact several contemporary mathematicians said that joint authorship was called for.

        Are you talking about the 2009 Mathematical Intelligencer article by Spencer and Graham? I don’t think they made any explicit statement of their position on joint authorship, but they definitely are sympathetic towards Erdos (as well as close friends and collaborators of his). My impression is that one reason they published this article so many years later was because the general historical consensus is in Selberg’s favor, so they wanted to make the case for why Erdos wasn’t actually behaving so unreasonably. However, maybe I am misinterpreting things.

        We are straying from the original subject, but certainly not giving credit to Leo Levin for a result published a year later is closer to standard practice than it would be doing so.

        I agree, with two caveats. One is that the big communication barriers between the US and USSR are relevant, and the other is that the relevant question isn’t when Levin’s paper appeared, but rather when the work was done, when he told people in the USSR, and when he submitted it for publication. (It’s not clear to me what those dates are, or which of them is most relevant, but they are each at least as relevant as the actual date of publication.) Incidentally, Trakhtenbrot says in his 1984 survey article that Levin spoke on his work in 1971 in Moscow and Leningrad, but that’s all I know.

  5. Anonymous
    March 14, 2011 1:00 pm

    I like his objection to quantum computers, because it works in both directions! Classical physics is an analog theory, whereas quantum physics is inherently discrete; the essential property of quantum mechanical systems is that they can only carry an integer number of energy quanta.

    The moral may be that shallow arguments have limited usefulness, but they remain amusing.

  6. Paul
    March 14, 2011 1:04 pm

    Thanks for the link to the slides. The title of slide 17 should have been ‘The World According to Karp.’

  7. Yuly Shipilevsky
    March 14, 2011 5:41 pm

    Hi everybody.

    Please see my paper regarding P vs. NP problem at:

    http://www.optimization-online.org/DB_HTML/2011/02/2920.html

    Thanks,

    Yuly

    • Rune
      November 11, 2011 6:30 am

      Yuly, the Digest for your publication is not peer reviewed. As stated: “… They do not usually evaluate the report for quality or mathematical correctness. Consequently, we make no claim about quality or correctness of the reports on the site. ”

      I am not convinced by your algorithm. Can you show that your Integer Factorisation algorithm will work in polynomial time on some RSA challenge numbers that have not yet been factored?


  9. March 14, 2011 6:08 pm

    Hi there Prof. Lipton,

    Thanks for this interesting post

    An answer to your final question. I don’t think it’s a mystery at all. The maximum Conceptual Jump Size (CJS) and computational speed of humans are more or less the same. Therefore, when they start induction (Levin Search) from similar reference machines (similar knowledge bases), they arrive at the solutions at about the same time, since Levin search takes 2^CJS time at most. Naturally more disciplined minds carry out the search more effectively (disregarding incorrect candidates and thinking mechanically), but there is a computational limit to that. That’s why many solutions are invented simultaneously. Consequently, if we freed all research papers from the tyranny of scientific publishers, and encouraged many more people to work on significant theoretical problems (better allocation of computational resources), we would accelerate science greatly, as we could avoid redundant searches in a better fashion. Hopefully, we won’t need to do that anymore, as we’re about to create human-level AI which will always work in a systematic fashion and will always have access to the complete set of papers.

    Cheers,

    Eray


  10. Anonymous
    March 14, 2011 7:54 pm

    Levin’s other work surely suffices for a Turing Award, but the Turing Award is a rare resource with too many deserving it; Les [Valiant] was one example that everyone agreed on. Hopefully Levin will be on the list.

    Q: When did researchers in the western hemisphere learn about Levin’s work? Was it after the USSR’s fall or before it?

    • March 14, 2011 8:49 pm

      According to Wikipedia he emigrated to the U.S. in 1978, so I guess his work was well known before the fall.

  11. March 15, 2011 12:33 am

    On Levin’s Reduction Method:

    I have to provide a counter-proof on Levin’s Statement:

    1. The problem is represented in the source language as A(x,y).

    2. The reduction includes a composite transformation (relation transformations, F-homotopic and homographic transformations, and kernel transformations)
    from A(x,y) to the target problem.
    2.1 The reduction in 2 may not be polynomials as Levin assumed. In real life, the exponential reductions are the usual cases due to register/resource allocations,
    instruction/scheduling, and optional optimizations such as common expression elimination, parallelism, etc. These factors contribute to the results of the reduction
    in the partial orders instead of the sequential forms. These cases match what was changed with a strikeout in Levin’s paper.
    2.2 The reduction keeps the mappings (N-to-N) as continuation in the usual cases or the mappings (1-to-N) in the simpler cases when the optimization methods are not applied.
    Levin’s p(y) is limited to the simpler cases, which establish the inverse relations from the point in the target to the point in the source (1-to-1). These special cases make Levin’s results in polynomials. However, the forward relations are the 1-N mappings in the simpler cases, which contain continuation for the non-polynomial reduction.

    3. The target problem is presented in the host language B(r(x),s(y)).

    4. There is an option to relax variable x during the reduction in 2.
    4.1 Keeping x gives more information about the source problem. However, the mappings contain continuation which is the mathematical object establishing the relationships between the two problems because the reduction usually goes exponentially in real life (non-polynomials).
    4.2 Without keeping the source problem during the reduction, we can monitor and evaluate the execution stacks of the target problem (non-polynomials).
    4.3 By keeping the source problem after the reduction, we can set check points to step-into (reduce the N-to-N mappings into the 1-to-1 mappings) the solution to the source problem. Hence, we can obtain the polynomial verification paths.

    Here is the generalized results from my proof:

    (1) We can perform non-polynomial reduction from the source problem to the target problem.
    (2) We can perform polynomial verifications by configuring the check points.

    This is what I wish for Levin’s corrections:
    (1) Levin should generalize r(x) as a non-polynomial instead of a polynomial.
    (2) Levin should generalize the reduction problem as a non-polynomial instead of a specific reduction function or series (polynomial).
    (3) Levin should generalize the inverse relations as a mapping (continuation of a non-polynomial) instead of a 1-to-1 inverse function or series (polynomial).
    (4) Levin should generalize the verification paths from the 1-to-1 inverse function or series (polynomial) to the configurations of the check points to construct polynomial verification paths.

    Since I have provided a generalized proof, there is no need to go through his six problems.

    My response to Levin’s quote on quantum computers:

    Efficiency, accuracy, and continuation are important for computation:

    1. Archimedes’s discovery in the real domain has influenced the generations of minds including Levin. The common and significant limitation is the missed link to the imaginary world.

    2. As I discussed in my proofs on “P vs NP”, polynomials exist in the real domain and the complex domain so does continuation. If the problems or solutions require both
    modulus and phase completeness, we need the missed link. If the problems and solutions cannot be reduced to the discrete series, we need the missed link to study continuation.

    3. In the mathematical universe, efficiency is not the only concern but also continuation such as flows, structures, relations, incompleteness, etc. In practice, continuation such as availability of energy supplies, computation, and communications also plays a significant role as efficiency.

    4. Even if it is possible to start from an arbitrary pair of real quantities to construct polynomials in the complex domain, mathematical relations, objects, and operations require the missed link. In addition, real measurements result in non-polynomials, which can be errors, noises, etc.

    The above brief analysis illustrates the requirements of complete understanding of the relationship between polynomials and non-polynomials, which are fully discussed
    in my proofs on “P vs NP”. For details, please read my proofs on the internals and the externals of the subject matter.

    Hence, I disagree with Levin. Quantum computers do not terminate continuation but it is possible for quantum computers to provide accuracy, resolvability, and continuation.

    My Response to Levin’s quote on Gödel’s incompleteness theorems:

    1. Mathematics promises all solutions of a computational procedure at any ordinary point (Strong consistency). This includes the ordinary solutions, which are entire and complete non-polynomials.

    2. Mathematics promises convergent inference or deduction and even polynomials to reach a simple closure (Regular consistency).

    We can understand three and four levels of complexity to obtain polynomials. In addition, it is possible to solve N+1 level of complexity to obtain polynomials.

    We may obtain non-polynomials with two levels of complexity or a single greedy computation.

    3. Mathematics promises asymptotic behaviors in the cases of conflicts (Inconsistency).
    These results hold for automorphisms of relations and solutions. Hence, strong consistency is not required for a theory to be solutions homographic.
    A contradiction with strong consistency is possible.

    If there is strong consistency, then it is incomplete. It is true as stated in 1.
    If there is consistency, then it is possible to obtain polynomials or convergent non-polynomials in (2). We can obtain both completeness and incompleteness.

    In conclusion, Gödel’s incompleteness theorems do not provide what is not promised by mathematics but only keeps what is promised. Hence, I disagree with Levin. The “open end” commented by Levin should be closed.

    My Response to the Open Problem:

    1. Orthogonal polynomials exist on the unit circle. Hence, independent discoveries exist in the real domain (results) and the imaginary domain (scientific reasoning).

    2. There are recurrence relations of the orthogonal polynomials in 1. This is why “continuing mysteries of mathematics and theory of results” happened.

    2.1 The recurrence relations are polynomials.

    2.2 If one of the contributors is well known or connected, this contributor can be accepted and judged in a single polynomial term.

    2.3 If another contributor is not known, his results may experience misunderstanding, reluctance of acceptance, public assessment, politics, and even an early rejection.
    It requires a polynomial of N terms to accept the results in this case. But the final result holds as a recurred polynomial.

    3. Orthogonal polynomials exist on the real line. Hence, independent discoveries can occur at the same time.

    4. There is a connection between the orthogonal polynomials on the unit circle in 1 and the orthogonal polynomials on the line in 3. Hence, the independent discoveries can exist with respect to space, time, and their weight functions.

    5. Since these are what mathematics promised, everyone has observed the above relations.

    Hence, correctness determines the results of proofs but not the forms of the publications.

  12. March 15, 2011 12:42 am

    My Proofs on “P vs NP” with “Mass Gap”
    is located at: http://leibnizengine.files.wordpress.com/2011/02/np-arxiv.pdf

    1. If you are curious about the mystery of the Universe with Space and Time, it is written for you, please unlock it now !!!
    2. If you cannot appreciate the proofs, you may not solve the problems now. Do not worry, keep your wonders to solve the problems !!!

  13. gender bender
    March 15, 2011 7:15 am

    If you have a group of people thinking about a problem and they are of the same intelligence and ability, then they will arrive at the same conclusion at about the same time. How else could it be? Why is that so mysterious?

  14. Paul Beame
    March 15, 2011 7:37 am

    Note that an English version of Levin’s 1971 PhD thesis was just published in Annals of Pure and Applied Logic in 2010.

  15. March 15, 2011 8:27 am

    That’s a terrific slide presentation by Tal Kramer and Gillat Kol (titled “The Tale of NP Completeness”); for which, thank you very much!

    The presentation’s reference to the early work of Juris Hartmanis (slide 4) is well-justified (IMHO). In particular, Hartmanis’ early monograph Feasible computations and provable complexity properties (1978) and his follow-up article Observations about the development of theoretical computer science (1981) are well worth reading in conjunction with (say) chapters 1,2, and 6 of Sanjeev Arora and Boaz Barak’s Computational Complexity: a Modern Approach.

    Just as Dick and Ken say:

    A book treatment … is likely to be perfect, completely clear, filled with details, and easy to follow. But the original papers often yield insights into how hard it was to get the ideas right.

    My experience has been that the early writings of Hartmanis wonderfully complement the polished presentation of Arora and Barak in precisely this respect … it’s a deeply thrilling experience to read them in parallel, with each complementing the other.

    Tal Kramer and Gillat Kol’s slide 24 (which is titled “Is it [P vs NP] Really THE Question?”) is excellent too. I have often thought that section 1.5.2 of Arora and Barak’s textbook (the section titled “Criticisms of P and some efforts to address them”) would be even better if it were longer … very much longer … and in particular better if it followed up more of the considerations that are laid out in Hartmanis’s earlier monograph and articles, as viewed in the light of subsequent theorems, and with the clarity lent by polished modern notation and proof technologies.

  16. March 15, 2011 8:34 am

    I fully understand the value of the “open problem”: (1) stress the roles of verifications and checking, (2) further test “P vs. NP”:

    1. Each independent discovery represents an orthogonal polynomial.
    2. Independent discoveries construct independent verification paths in the lower plane, which are polynomials.
    3. Independent verification paths (Polynomials) perform multiple checking with the correctness of the results.
    4. With a group of collaborations, it is possible to construct a polynomial with regular consistency. Hence they may arrive at the same conclusion at about the same time (this is not the usual case in real life). But it does not provide multiple independent verification paths to confirm the correctness of the proofs. The group only performs self-checking. It is critical to understand the importance of polynomial verifications for “P vs. NP” proofs.
    5. With verification, a discovery can be transformed from a polynomial to a non-polynomial (see my proof for details).

    Verification has been fully discussed in my proofs on “P vs. NP”. It is not so mysterious now since I have proved it.

    Please read through “Verifications” section of my proofs on the subject matter.

  17. leibnizengine
    March 15, 2011 9:14 am

    It is clear that publications cannot serve the needs of advanced topics like “P vs. NP”.
    Publications can spread awareness of science but cannot replace independent verifications.

  18. Edward A. Hirsch
    March 15, 2011 10:03 am

    We don’t really need to restrict s to be *computable* in polynomial-time (though Levin defines it so). Do we have any candidate example where it is important? (I think no.)

  19. Markus S
    March 15, 2011 10:08 am

    ad Open Problem: Was there ever a case where n > 2 persons discovered the same (important) result independently at (nearly) the same time?

    ad Levin: For me, this situation is reminiscent of the situation with the Church-Turing Thesis. (Nearly) everybody in Computer Science has heard of Turing, Turing machines, etc., but I do not think that many people know about Church’s approach.

    • March 16, 2011 1:41 am

      Turing, Post and Church came up with models of computation and proofs of non-computability at around the same time.

    • Vadim P.
      March 16, 2011 11:49 am

      Murray Gell-Mann and George Zweig proposed the quark model in ’64, while Feynman came up with the parton model in ’69, but as I understand it he had different motivations and it was only later that people realized they were describing the same particle. I’m not sure to what extent Feynman was aware of the quark model before creating his own.


  21. Anonymous
    March 15, 2011 12:17 pm

    Many a time the Turing Award went to people from the Databases and Systems areas who are probably non-deserving. Well, by no means do I want to show disrespect to those giants in those fields, but their work(s), although “influential” in the sense that someone laid out a standard for doing things, certainly aren’t comparable to discoveries of abstract truth in terms of creative penetration.

  22. leibnizengine
    March 15, 2011 5:37 pm

    I do not know what level of abstract truth you are pondering. If you are curious about the universe with space and time, P vs. NP should win your respect. If you are interested in theoretical mathematical abstractions, then structures, internals, and externals are deep to know. If you are interested in physical abstractions, mass gap is worth studying. You should appreciate what P vs. NP can convey.

  23. abelmolina
    March 15, 2011 8:00 pm

    No disrespect for Levin intended, but in general it would be nice if Russian people wrote papers trying to properly explain things to the reader.

  24. March 16, 2011 2:46 am

    On independent discovery:

    For N+1 results in the complex domain, two independent groups (each group has n > 2 persons) may be assigned within an open interval. The two groups can obtain the independent discovery along the same time indexes with regular consistency or even the same results. But these are not general verification paths which are under special conditions. Sometimes it is unknown to allocate each person to the target location.
    Hence, it is just a special case.

    On turning machine, hope this can address the abstraction concerns of “Anonymous”:

    The turning machine runs to a state or a condition at which is prescribed to halt by a given input and a set of policies. For the details on delta functions, weight functions, transformations, and policy functions for continuation, please read externals of
    continuation in my article.

  25. March 16, 2011 9:14 am

    Luca & Markus:

    For n=3 person case, independent verification paths can be established in two dimensions and three dimensions. These have been discussed in the verification section of my proofs on “P vs. NP”. The independent discovery from Turing, Post, and Church is a valid example.

  26. March 16, 2011 9:20 am

    Edward:

    For real life examples, systems and applications in real-time do require polynomial decisions. In addition, mission critical ones do require polynomial computation too.

  27. March 16, 2011 1:32 pm

    A critical comment on the proofs in the complex domain for the problems:

    It is critical to understand the importance of the proofs in the complex domain because the structures, relations, and analysis in the real domain provide orthogonality but cannot prove the existence. In addition, real quantities construct non-polynomials. Hence, physical intuitions such as geometrical properties are not the correct tools for the problems. However, geometrical properties and physical quantities can provide verifications such as virtual checking.

    This is why we need the missed link to complete the proofs for the problems.

    One more note: I need to correct the typo on “Turing Machine” in the previous comments.

  28. March 16, 2011 2:15 pm

    I can feel that the great minds are calling intellectuals to rethink the values and true meaning of the Poincaré conjecture.

  29. March 17, 2011 1:57 pm

    1. Polynomials and continuation exist in the real and the imaginary domain. So does the relation between two models. Science discoveries and engineering practices introduce a complex kernel. The complex kernel can terminate the continuation between the two models to construct a polynomial, which is a particle structural representation. This complex polynomial represents what we “know” now, which contains both the facts in the real domain and the knowledge or experience in the imaginary domain.

    2. The relations among two models and the particle can be a confluent structure as our knowledge in the history of science and engineering.

    3. Two models can be the composite transformations of the particle with a substitution of variables and parameters.

    • william hird
      February 10, 2014 5:48 am

      @leibnizengine: What does anything you have said here have to do with Mr. Levin?

Trackbacks

  1. Second Xamuel.com Linkfest
  2. Tại sao vấn đề P vs NP khó vậy? [Why is the P vs NP problem so hard?] « ZetaMu
  3. A World Without Randomness | Gödel's Lost Letter and P=NP
  4. “Who are you today, Doc? Einstein?” Lord John Whorfin: “Lord John Whorfin. If there’s one thing I hate, it’s to be mistaken for somebody else.” « Pink Iguana
