
P vs NP Proof Claims

August 10, 2021


Ken Ribet was once sent a freebie book that he looked at and decided he didn’t want, so he took it to a second-hand bookstore on his lunch break, sold it, and bought lunch with the proceeds. On the way back to the math department he realized he’d turned theorems into coffee.

IAS page

Robbert Dijkgraaf is a mathematical physicist and the current Director of the Institute for Advanced Study in Princeton. He is also a down-to-earth communicator of mathematics and a spacy surrealist artist. He wrote a guest column for Quanta two years ago titled, “The Subtle Art of the Mathematical Conjecture.”

Today I talk again about what I feel is a meta-error in all the claims to resolve our favorite conjecture, {\mathsf{P < NP}}. Or {\mathsf{P = NP}} if you are so inclined.

I have an issue with them that goes beyond the well-noted advice by Scott Aaronson on how to tell that claimed proofs of {\mathsf{P < NP}} are wrong. It is not about whether they are incorrect. They all are incorrect. And I believe that they will continue to be so into the future.

My problem with these claims is: they are always all or nothing.

The claims never merely improve what we know about {\mathsf{P}} versus {\mathsf{NP}}. They always resolve the entire question. There is no partial result, no incremental advance. They always solve the conjecture outright and are ready to collect the $1,000,000 prize. Of course the prize money is not in any jeopardy.

Climbing a Mountain

This is where Dijkgraaf’s article comes in. He talks first about mountain climbing (as we have also done) rather than art:

Mountain climbing is a beloved metaphor for mathematical research. … [T]he role of these highest peaks is played by the great conjectures—sharply formulated statements that are most likely true but for which no conclusive proof has yet been found.

The highest summits are not conquered in a single effort. Climbing expeditions carefully lay out base camps and fixed ropes, then slowly work their way to the peak. Similarly, in mathematics one often needs to erect elaborate structures to attack a major problem. A direct assault is seen as foolish and naive. These auxiliary mathematical constructions can sometimes take centuries to build and in the end often prove to be more valuable than the conquered theorem itself. The scaffold then becomes a permanent addition to the architecture of mathematics.

What strikes me and Ken about the {\mathsf{P}} versus {\mathsf{NP}} claims we see so often is that they try to reach the summit in one bound. There is no scaffolding, no rope, no new handhold. The reader learns little.

The term that most needs expounding is base camp. A base camp is not at the bottom. The main base camps for Mount Everest are halfway up to the summit. Getting to a base camp takes substantial work by itself. Building a base camp certainly does. But getting to one is essential. This is what is missing for {\mathsf{P}} versus {\mathsf{NP}}.

P<NP Base Camps

Let’s look at {\mathsf{P < NP}}. Suppose you claim that you can show that CLIQUE requires super-polynomial time. This is what you need to do to prove {\mathsf{P < NP}}. It is way beyond anything we can imagine proving.

Suppose instead you claimed that CLIQUE requires {m^2} deterministic time, where {m} is the number of edges. This would be the best result ever on the difficulty of CLIQUE. It would easily be the best paper in complexity theory in decades. It would win prizes of all kinds.

It is even worse. If one could prove that CLIQUE is not in deterministic time

\displaystyle  m\log\log\log m

that would also be a major result. Forget about proving a lower bound of

\displaystyle  m^{1000}

or more, which is what solving {\mathsf{P < NP}} requires. Just the bound above would be major.
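To see how lopsided the gap is, contrast the missing lower bounds with the trivial upper bound, which fits in a dozen lines. Here is a minimal brute-force sketch in Python (our illustration; the function name and graph encoding are ours, not from any claimed proof) that decides k-CLIQUE in roughly {n^k k^2} time:

    from itertools import combinations

    def has_k_clique(adj, k):
        # Try every k-subset of vertices; accept if all pairs are adjacent.
        # About C(n,k) * C(k,2) adjacency checks, i.e., n^k * k^2 time.
        # Nobody can rule out something vastly faster, which is exactly
        # why even an m^2 lower bound for CLIQUE would be historic.
        vertices = list(adj)
        for subset in combinations(vertices, k):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return True
        return False

    # A triangle with one pendant vertex: has a 3-clique but no 4-clique.
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    assert has_k_clique(adj, 3)
    assert not has_k_clique(adj, 4)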

If we skirt around some technical issues with time on Turing machines, we can pitch our camp right at going beyond linear time. Or certainly at linear-size circuits. Nobody can exhibit a language in {\mathsf{NP}} that provably does not have linear-size circuits. Proving one exists would bring enough renown for anyone.
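In symbols, this campsite (our paraphrase of the standard statement) is:

\displaystyle  \text{exhibit } L \in \mathsf{NP} \text{ such that } L \notin \mathsf{SIZE}(O(n)),

where {\mathsf{SIZE}(s(n))} denotes languages decided by Boolean circuit families with {O(s(n))} gates. For explicit functions, the general circuit lower bounds known to date are only small constant multiples of {n}.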

P=NP Base Camps

Let’s look at {\mathsf{P = NP}}. Most regard this as the far side of the mountain, but please bear with us to the end. Suppose you claim that CLIQUE is in polynomial time.

The usual paper of this type claims it is doable by some straightforward polynomial-time algorithm. The method might use linear programming in some standard manner. This might lead to a practical algorithm, but none of those has ever been observed in the wild. Even worse, any proof that CLIQUE can be solved in time

\displaystyle  n^{C}

would also be the best result ever on this problem. This applies even if {C} is an unknown constant, or is equal to some astronomically large value. Think Graham’s number or Skewes’s number:

\displaystyle 10^{10^{10^{502}}}

Nor do we need {C} to be constant. Say it is {O(\log n)} or some polynomial in {\log n}. This would be a quasi-polynomial time algorithm. Here a really big campsite was built by László Babai, who proved that Graph Isomorphism is in quasi-polynomial time. His paper has a wealth of ideas that might be extended.
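For concreteness, quasi-polynomial time means a bound of the form

\displaystyle  2^{(\log n)^{O(1)}}, \qquad \text{for example } n^{O(\log n)} = 2^{O(\log^2 n)},

which sits strictly between polynomial and exponential growth.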

But what really might be attractive about this side is that you can make progress without “shaking the Earth.” These results would be in the direction of {\mathsf{P = NP}} but are not opposed by as much conventional wisdom:

  • Refute the Strong Exponential Time Hypothesis (SETH). That is, find an algorithm for CNF-SAT running in time {2^{cn}} for some constant {c < 1}; a formal statement appears below. We have written about SETH here.

  • Refute the Unique Games Conjecture (UGC). Unlike SETH, this technically does not imply {\mathsf{P < NP}}. But refuting it does reduce the reach of {\mathsf{NP}}-hardness. A large special case of UGC was, however, proved three years ago.

Or prove that they cannot both be true simultaneously. My old post on UGC covered a sense in which there is no “SETH for UGC.”
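As promised above, here is the formal contrast, in our rendering of the standard definitions, where {n} is the number of variables and {m} the number of clauses:

\displaystyle  \text{SETH:}\quad \forall \epsilon > 0\ \exists k:\ k\text{-SAT} \notin \mathsf{TIME}\big(2^{(1-\epsilon)n}\big).

\displaystyle  \text{Refuting it:}\quad \exists c < 1:\ \text{CNF-SAT} \in \mathsf{TIME}\big(2^{cn} \cdot \mathrm{poly}(m)\big).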

But …

The trouble with our insight is that in the past, sometimes a full conjecture has been solved. That is, partial progress did not happen first—the mountain was scaled in one go. Or at least a lot of it, from a relatively low base camp. For {\mathsf{P < NP}} there is an essence of “polynomial” versus “exponential” that is sharply defined in other ways, for instance Mikhail Gromov’s theorem about growth rates in groups, which we wrote about here.

Ken and I have differed on how long and steep and sudden the final ascent was for the Four Color Theorem and Fermat’s Last Theorem (FLT). The proof of the former by Kenneth Appel and Wolfgang Haken was a watershed for its use of computers, but drew on ideas that had gone through the non-computer proofs-and-refutations process. Andrew Wiles’s announcement of FLT was a shock even with a three-day buildup of his lectures at a conference in 1993 having hinted it to several attendees. But he drew on partial progress that had been ramped up by Ribet and others since the mid-1980s.

Maybe if Évariste Galois had beaten Niels Abel to showing the unsolvability of the quintic, his invention of group theory, which underlies the proof used today, would have been a single bound. But Abel got a big lift from Paolo Ruffini’s 500 pages of work in 1799. (Évariste is the same name as Everest, go figure.)

The proof of the Boolean Sensitivity Conjecture two years ago by Hao Huang was short and sudden. But along lines remarked also in Dijkgraaf’s article, perhaps it was “more of a foothill.” Or maybe a base camp for harder problems, such as improving the upper bound from quartic to cubic or quadratic, as we discussed here and here.

This leads Ken into a historical daydream, taking over from here.

A Fermat Fantasy

Pierre de Fermat famously wrote the following in the margin of his copy of the famous book on arithmetic by Diophantus (his original note was in Latin; we give a French rendering):

Un cube n’est jamais la somme de deux cubes, une puissance quatrième n’est jamais la somme de deux puissances quatrièmes, et plus généralement aucune puissance supérieure à 2 n’est la somme de deux puissances analogues. J’ai trouvé une merveilleuse démonstration de cette proposition, mais la marge est trop étroite pour la contenir.

(A cube is never the sum of two cubes, a fourth power is never the sum of two fourth powers, and more generally no power higher than 2 is the sum of two like powers. I have found a marvelous proof of this proposition, but the margin is too narrow to contain it.)

I (Ken) think he could just as easily have written the following—and in place of the margin being too narrow, he could have given a more reasonable excuse, one I know all too well:

Un cube n’est jamais la somme de moins que trois cubes, une puissance quatrième n’est jamais la somme de moins que quatre puissances quatrièmes, et plus généralement aucune puissance n’est la somme d’un moindre nombre de puissances analogues. J’ai trouvé une merveilleuse démonstration de cette proposition, que je rédigerai après avoir traité onze nouveaux cas de triche aux échecs en ligne.

(A cube is never the sum of fewer than three cubes, a fourth power is never the sum of fewer than four fourth powers, and more generally no power is the sum of a smaller number of like powers. I have found a marvelous proof of this proposition, which I will write up after dealing with eleven new cases of cheating at online chess.)

The stronger statement here is that no cube can be a nontrivial sum of fewer than three cubes (such as {6^3 = 3^3 + 4^3 + 5^3}), no fourth power a sum of fewer than four other like powers, and so on. This also was a hallowed conjecture that stood for centuries, named for the giant Leonhard Euler no less. Well, if Pierre had just changed a few of his words, then this is what we would have known as FLT. Call it EFLT. It could have been just as worthy. That there were reservations known to Euler, even as he lent his name to it, might have made it all the more Olympian.
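In modern notation, EFLT, that is Euler’s conjecture, reads:

\displaystyle  \text{for } n \geq 3 \text{ and } 2 \leq k < n:\quad a_1^n + a_2^n + \cdots + a_k^n = b^n \text{ has no solution in positive integers.}

FLT is the case {k = 2}.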

Then what we actually know as FLT would have been a hard-work base camp for EFLT. Would we have seen the vast number of unsuccessful FLT proofs directed at EFLT instead? By people claiming to climb this peak in one bound—without first trying to prove that a sum of just two {n}-th powers cannot itself be an {n}-th power, for all {n \geq 3}? Well, there would have been a problem with that, one we discussed in connection with solutions that might be easy after all:

\displaystyle  \begin{array}{rcl}  144^5 &=& 27^5 + 84^5 + 110^5 + 133^5.\\ 20615673^4 &=& 2682440^4 + 15365639^4 + 18796760^4. \end{array}
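Both identities are easy to check with exact integer arithmetic; here is a two-line sanity check (ours) in Python:

    # Classic counterexamples to Euler's conjecture: Lander-Parkin (1966)
    # for fifth powers and Elkies (1986) for fourth powers.
    assert 144**5 == 27**5 + 84**5 + 110**5 + 133**5
    assert 20615673**4 == 2682440**4 + 15365639**4 + 18796760**4
    print("Both counterexamples verify.")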

Open Problems

Do you have your own issues with these claimed proofs? Or, do you see other cases of people having suddenly scaled a mountain in one stride?

38 Comments
  1. Wolfgang Keller permalink
    August 10, 2021 4:15 pm

    A consideration that I often have is:

Why do complexity theorists concentrate so much on resolving P vs. NP instead of P vs. PSPACE?

If we could show {\mathsf{P = PSPACE}}, then {\mathsf{P = NP}} would be a trivial corollary. On the other hand, I am very sure that a proof of {\mathsf{P \subsetneq PSPACE}} would give a wealth of ideas that would turn out to be very useful for resolving the P vs. NP question.

The problem class PSPACE is also, to my gut feeling, better “understood” than NP; in particular, we know that {\mathsf{PSPACE = NPSPACE = IP = QIP}}.

    • rjlipton permalink*
      August 11, 2021 10:22 am

      Dear Wolfgang:

This is right on the mark. If {\mathsf{P < PSPACE}} is hard, then of course {\mathsf{P < NP}} is worse. I like this point.

      Best

      Dick

  2. August 10, 2021 8:45 pm

I still think my proofs are correct, since parameter “m” is originally defined as non-binary; that is why it cannot be arbitrarily “binarized” and must be considered in non-binary format ONLY.

    https://vixra.org/abs/1809.0204

    • Yuly Shipilevsky permalink
      August 14, 2021 9:11 am

My proof is the only one which is based on the latest fundamental papers of world-class mathematicians.

    • Abrahim Ladha permalink
      January 19, 2022 8:11 pm

      If you think you can factor integers in polytime, you don’t need to convince anyone through proof. You can just implement it and factor the RSA challenges: https://en.wikipedia.org/wiki/RSA_Factoring_Challenge

If you can do that, then anyone can be convinced. If you can’t, maybe you could be convinced it’s incorrect.

      • Wolfgang Keller permalink
        January 20, 2022 10:47 am

        @Abrahim Ladha

        If you think you can factor integers in polytime, you don’t need to convince anyone through proof.

“Polynomial time” does not necessarily mean “fast”.

What about a running time of {O(n^{64^{32}})} steps? This is clearly polynomial in {n}.

        Is this an artificial example?

No: the fact that USTCON is in L (Reingold’s theorem that SL = L) implies that there is a polynomial-time algorithm that detects whether there exists an s-t path in an undirected graph while using only {O(\log n)} additional space.

But according to Wikipedia (https://en.wikipedia.org/w/index.php?title=SL_(complexity)&oldid=1041330865#Important_results), if you analyze Omer Reingold’s algorithm, you realize that it needs

        • {64^{32} \log n} memory and
        • {O(n^{64^{32}})} runtime.

        This is far too much to be practical.

  3. August 10, 2021 10:02 pm

As a footnote, indeed no language in deterministic time {2^{O(n)}}, even with an NP-oracle that can answer {2^{O(n)}}-sized queries, is currently known to lack linear-size circuits.

  4. August 10, 2021 10:21 pm

How about Gödel’s incompleteness theorem as an example of “scaling a mountain in one leap”? Base camp there was pretty much Principia Mathematica, as far as I know.

  5. kamouna permalink
    August 11, 2021 2:09 am

    Dear Richard Lipton,

Stating the Cook-Levin theorem:

    \forall w\in L\in NP \exists A(w)=SAT iff M accepts w

    Let w be any paradox, then there is no such SAT. Hence SAT is (NOT) NP-complete.

I confronted Stephen Cook with this argument. He told me then, “You are the world’s greatest mathematician.”

    Tell me how to reduce a paradox to SAT.

    https://kamouna.wordpress.com/

    Here you find my appeal to the ACM president.

    best,

    Rafee Kamouna.

  6. Sam permalink
    August 11, 2021 7:01 am

But who referees such proofs? NOT me. I refuse to handle such submissions.

    • rjlipton permalink*
      August 11, 2021 10:24 am

      Dear Sam:

I understand. I think what we are saying here is a test: If you claim P<NP then show that SAT is not in linear time.

      Best

      Dick

      • Sam permalink
        August 17, 2021 5:16 am

Dear Richard, Unfinished message? Your message just says “Dear Sam: I understand. I think what we are saying here is a test: If you claim P”.

  7. kamouna permalink
    August 11, 2021 10:26 am

Dear Sam,

To prove the conjecture “P=NP iff P!=NP” you never need a proof. Just state the Cook-Levin theorem for a paradoxical input; you immediately get the result. That’s why Stephen Cook called me the greatest mathematician, sarcastically.

    Start with any paradox, say, “This sentence is false”, tell us how you reduce it to SAT.

    Cook-Levin:

\forall w\in L\in NP \exists A(w)=SAT iff M accepts w

So, tell me: how will you derive SAT from a paradox, by pencil and paper?

If you are Sam Buss, you reviewed versions of my work and you were sure of its correctness, together with the EiC of ACM ToCL, Orna Kupferman.

    Best,

    Rafee Kamouna.

  8. August 11, 2021 10:27 am

How about Gödel’s incompleteness theorem as an example of “scaling a mountain in one leap”? The base camp seems to have been Principia Mathematica and some very general ideas of Hilbert’s.
    Shannon’s creation of information theory scaled a mountain that had barely been spotted, from a pretty low base camp established by Nyquist and Hartley.

    • August 11, 2021 4:18 pm

      Ah, putting Gödel’s theorems through that lens is an excellent idea!

  9. Timothy Chow permalink
    August 12, 2021 8:09 pm

    Regarding examples of suddenly scaling a mountain in one stride, perhaps Paul Cohen’s proof of the independence of the continuum hypothesis counts. Maybe this isn’t quite the sort of thing you have in mind, because Cohen did study carefully Goedel’s proof that the continuum hypothesis is consistent with ZFC. The typical claimant for P vs. NP has not bothered to learn much, if any, of the work that has been done by previous researchers. But Cohen did pretty much invent the machinery of forcing out of whole cloth, and solved the continuum problem in one fell swoop, without a long sequence of incremental improvements.

    With hindsight, some precursors of forcing can be discerned in the work of previous researchers, but I believe Cohen was unaware of those precursors. Also, Cohen’s key idea did not even rely essentially on Goedel’s consistency proof (which proceeded by defining the constructible universe L). Cohen’s original proof does talk about L a lot, but people eventually figured out that that wasn’t necessary to make forcing work.

  10. William Gasarch permalink
    August 13, 2021 10:02 am

When I was a naive grad student and I came upon a “proof” that P=NP or P<NP, my thought was: this is surely wrong, but maybe it has some interesting ideas in it. It never did.

    Why is that? Because the people who thought they solved it were not very good.

    Contrast: When Mike Fellows told me about the Graph Minor Theorem and FPT he never said “hence P=NP” or anything of the sort. He knew better.

    Contrast: When Mike Sipser told me that Clique requires exponential-size monotone circuits he was excited that this might be an inroad to showing P < NP, but he never said “hence P<NP” or anything of that sort. He knew better.

    The only case where something interesting came out of an alleged proof was when Swart thought he proved P=NP and Yannakakis showed that the approach could not work. The proof that the approach could not work is interesting.

  11. kamouna permalink
    August 14, 2021 5:41 am

    Dear Gasarch

You forgot when you were an editor of the Journal of Computer and System Sciences in 2010. The editors [Karp, Aho, Hopcroft, Ullman & Rabin] agreed to review my papers, then declined, then agreed again, then declined again.

    Where were you then?

    What does that mean?

    What is the position of Stephen Cook today? Do you know?

    Best,

    Rafee Kamouna.

  12. Ivan Gusev permalink
    August 14, 2021 2:14 pm

When I was a kid, I was thinking about what success is, not as an achievement but by its nature. And I realized that it is a formula: SUCCESS = MATURITY OF KNOWLEDGE * (INTELLIGENCE + PERSONAL QUALITIES) * POSSESSION (OR SKILL OF FORMULATING) OF A CLEAR GOAL. As this formula demonstrates, if one of the factors is zero, so is the product. In the case of P!=NP, each is somewhat zero. One does not know what can be considered matured knowledge, because nobody has solved such problems before. One may never possess the level of intelligence needed for solving such a problem. And the clear goal is not quite clear: are they equal or not? What is one supposed to aim for? Not even speaking about the subgoals that can be present in climbing such a mountain.

    I, personally, abandoned P!=NP. I know that they are not equal, and now I have a focus on my new development of the Logic of Mutual Determination (LoMD) that I call Abstract Gradient Dynamics (AGD). The idea is straightforward; let’s take TSP as an example. Let’s assume that one can find, for any instance of TSP, a suggestion that helps it to be solved faster. And let every instance have its own unique signature (a binary string) on which all the states assigned to it are written. Let’s skip some cities in the sequence that we need to parse and replace them with a state in an abstract space of states, modifying our signature accordingly. This state will be our assumption of the existence of such a suggestion. Now we need to do this for all instances and repeat it infinitely, going more and more abstract on states that already “worked” for solving particular instances. And now we have to start matching “certain” states from every signature (infinite in length) to all possible algorithms; here LoMD comes to our help, along with special tools from AGD.

    But how do they work? Recall the natural barrier: we have a problem that, in a certain sense, we “cannot be smarter than a random number generator,” and it means that we hit a ceiling that prevents us from knowing certain properties of the function that we are trying to cherry-pick. And it turns out that with the right approach this ceiling can be turned into our floor. We can merge all signatures into just one using differential forms, infinite-dimensional complex analysis, number theory, random matrix theory, quantum chaos, and Hamiltonian mechanics. At the end we will yield a holomorphic/meromorphic direct analytic continuation, as happens with the RZF in RH. The analytic continuation will be quite legitimate because our floor tells us that we cannot find more of such abstract states, and connecting them into one state space gives us a sequence-based function in C with Hamiltonian-related manifolds and a new Operator of Choice. Approaching such a floor actually looks like some sort of convergence; this is why it is natural to imagine some sort of Cauchy sequence based on it. Also, it will contain all possible ways we can transform this abstract space to make it converge on a solution. This is why we need Hamiltonians. And our suggestions will simply be strings (like numbers constructed from primes) that will have a signature on the complex plane. And in the case of P we will have no poles to the right; in the NP case we will have many more. It simply measures the rate of non-periodicity of inner abstract states in any suggestion, which is all related to the Hamiltonian’s energy and Brownian motion. It is like min-maxing to the simplest algorithm, reducing the complexity of algorithms and trying to pick the best encoding without perturbing the Hamiltonian or going beyond energy limits. This is what Abstract Gradient Dynamics is.

    I’m going to post a simple proof of Collatz using it soon, but I will hold RH and abandon P!=NP because I’m afraid of the world’s oligarchy that would use my math for unbeatable AI control and suppression of the population. The nature of Virtue is irrational and can only be yielded through highly abstract parameters and the growth of individuals on them with qualities beneficial for the whole community; technocrats suppress this irrationality for a minor set of parameters they believe are beneficial, depriving people’s contact with those high-end abstract states, killing the virtue and its evolution. This is how civilizations end: by losing their ability to react to changes in the environment, simply by being suppressed. Sorry for the off-topic and for bringing politics in, but AGD is all about super abstract states and their evolution; knowing it gives an understanding of AI at such a scale that you start to understand that everything you believe in is also such a simple abstract state. The resolution of P vs NP will be a civilization-changing event in my opinion; this is why we have to think carefully about what kind of tools we are creating for other people. I will try to deliver the simplest and barely penetrable form of my new math to somewhat slow down the process of its implementation and give people time to understand the consequences of their actions. Maybe it will happen toward the end of this year or a bit later. If you are interested I can give more explanation of what is going on with LoMD and AGD.

  13. August 15, 2021 2:01 pm

Refute the Strong Exponential Time Hypothesis (SETH). That is, find an algorithm for CNF-SAT running in time {2^{cn}} for some constant {c < 1}.

    It is wrong. I have an algorithm https://github.com/davidmather/3SatSolver that I am too lazy to finish, but it should be O(n^2).

  14. javaid aslam permalink
    August 15, 2021 4:50 pm

And then there is another problem with the base camp: it may not be viewed as significant at all.
    Would anybody consider a graph-theoretic technique for enumerating a well-known set as something worth paying attention to?

  15. August 17, 2021 10:04 pm

    Maybe we could find the answer looking at the past:

    https://core.ac.uk/download/pdf/211817623.pdf

  16. August 19, 2021 9:19 pm

    Dear Dr. Lipton and Dr. Regan,

I agree with you that three bases, at least, are needed to climb the NP Mountain. The first base is to modify the current axioms to a new set of axioms (see http://fx.wm3dao.com.) The new set of axioms needs to be consistent with the axioms of physics. It needs to provide an inverse-function capacity, not only one-way functions. A set of operations in this new set of axioms has a countable set domain D and a countable set range R, where D = R = Σ* = 8n^3. You and Karp have indicated the same.
    The second base is to apply the new set of axioms to solve one of the NP problems, such as 3SAT (see http://tsat.wm3dao.com/sat04.) This solver provides a set of COMPLETE solutions, not one or partial solutions, to each 3SAT instance within O(8n^3).
    The last base is to use the new axioms, not the current axioms, to prove the “Gödel conjecture”. For this stage, we are looking for someone like you, who can lead us to reach the peak of the mountain.
    Looking forward to hearing from you.
    
  17. Richard Harris permalink
    August 19, 2021 9:32 pm

    I think it makes a whole lot of sense to avoid the meta-error.

    I am all for it!!!

    For example, if we want to prove something huge, we must first establish some
    tiny little thing, like the following:

    Lemma

    For homomorphic f: (U, *) \longrightarrow (G, +) and isomorphic \phi: (U, *) \longleftrightarrow (H, +), where (H, +) \leq (G, +),
    f(u_{i}) = f(u_{j}) \Longrightarrow f(\phi(u_{i})) = f(\phi(u_{j})).

    Proof

    Let subscripted u and e (the identity) be members of U.
    Since f(u_{i}) = f(u_{j}) \Longleftrightarrow u_{i} \equiv u_{j}, so
    u_{i} = u_{i} * e \equiv u_{j} = u_{i} * u_{k} \Longrightarrow e \equiv u_{k} (for some u_{k}, relevant to u_{i}u_{j} pair).

    Therefore:

f(\phi(u_{j}))
    = f(\phi(u_{i} * u_{k}))
    = f(\phi(u_{i} * e))
    = f(\phi(u_{i}) + \phi(e))
    = f(\phi(u_{i}) + 0)
    = f(\phi(u_{i}))

    Note: * in (U, *) is not necessarily ordinary multiplication.

We shouldn’t try to have a grand solution, hoping to settle the whole gamut.

    — RH

  18. Richard Harris permalink
    August 21, 2021 7:51 am

Again, maybe we shouldn’t try to have a grand solution, hoping to have the entire problem with all the extensions settled in one sweep. Of course, advocating such a view could be due to my inability to do otherwise. I am just incapable of dealing with multiple MPPs in a single proof. I have to do them separately. Even for just one, I force myself to affectedly do the one-fell-swoop deal in steps just to cater. So the Lemma in the previous piece is really a settlement of the Riemann Hypothesis.

    Well, no. It is just a lemma so that the proof luckily has no hope of getting the issue settled in one shot.

    It goes without saying that we have to instantiate f, \phi, as well as G, U and H (especially U) to have the proof complete.

    One step at a time.
    One step at a time.
    One step at a time …

    — RH

  19. August 22, 2021 11:07 am

It is better to be focused on less outstanding math problems, instead of spending time in vain on P vs NP. 😉

  20. Douglas Felix permalink
    August 22, 2021 10:24 pm

Royen’s proof of the Gaussian correlation inequality is an example of how to scale a mountain in one stroll (with no one noticing it).

  21. ohtoya permalink
    August 25, 2021 1:46 am

When we attempt to solve an NP-complete problem constructively in polynomial time, the amount of computation becomes large. And we have to adopt knowledge or techniques from other fields that have not been incorporated so far. In order to solve the maximum clique problem, which is one of the NP-complete problems, in polynomial time, we adopted the graph spectrum technique, which no one has used in this kind of approach. The graph spectrum is a field of study that explains the characteristics of a graph from the eigenvalues of its adjacency matrix or Laplacian matrix. So, we developed a method to identify, in polynomial time, vertices whose deletion does not change the maximum clique size, using the graph spectrum. This method has the characteristic of not involving backtracking, unlike the tree search that is often used in graph problems. Our research results consist of many proofs and pseudo-code based on them. The complexity of our algorithm is O(n^9). This is large compared to familiar polynomial-time algorithms. If someone is interested in our attempt, please read our research.
    https://www.researchgate.net/publication/347522479_Polynomial-time_Extraction_of_the_Maximum_Clique_Using_Eigenvalue_Relation

  22. Frank permalink
    September 2, 2021 3:30 pm

The language of all unprovable (relative to ZFC) sentences is not even in REC, let alone NP!

    Cook’s Theorem states that every language in NP is polytime reducible to SAT.

  23. September 13, 2021 10:32 am

Another proof from a crackpot, who claims that P!=NP is proven:

    https://arxiv.org/pdf/2108.09269.pdf

    • rjlipton permalink*
      September 14, 2021 11:35 am

      Dear Reiner:

I did look at his “proof”. I believe it tries incorrectly to use the halting problem among other ideas. For example, he writes:

      For any x ∈ W, it is undecidable, according to the halting problem, whether x not in L(M).

This incorrectly states the halting problem. That result says there is no single algorithm that works in general to tell whether an arbitrary x is in some set S. But for a specific x and S it could be easy. Is 17 in the set of primes? Is 167671772 in the set of primes? (The latter is even, so trivially no.)

      This makes the argument wrong.

      Best

      Dick

      • Reiner permalink
        September 23, 2021 2:44 pm

        Dear Dick.

I agree, ‘any’ is the wrong word. Maybe the bug can be fixed with the word ‘arbitrary’.

        This looks better:
        For an arbitrary x (and an arbitrary TM M) it is undecidable,
        according to the halting problem, whether x is not in L(M).

        Thank you,
        Reiner

  24. Shuxue Jiaoshou permalink
    September 27, 2021 5:23 pm

I think retired people should “go wild” and put all their efforts into solving the P-NP mystery. Reasons: (a) They don’t have to worry about their careers any more; (b) They would like to know the answer to the puzzle before they die; and (c) It keeps their brains active.

  25. Shuxue Jiaoshou permalink
    September 27, 2021 8:39 pm

I think retirees should go “wild” and spend all their efforts on resolving the P/NP issue! Why? (a) They don’t have a career to worry about; (b) They have lots of time; (c) They would like to know the answer to the P-NP puzzle before they die; and (d) It keeps their brains active.

  26. November 2, 2021 9:19 pm

    test

    • November 2, 2021 11:21 pm

      Blog rule is: first comment is modded, then they come in freely.

      • Richard Harris permalink
        December 7, 2021 8:00 am

        Ken,

        Thanks very much for the clarification that may need a bit of clarification.

We all know that the rule was established years ago. For quite a few years at the beginning stretch, however, there wasn’t the rule. It was established, if memory still serves, way after the day Dick excitedly discovered (and his office neighbor also somehow did) that hits on the site had reached 7000.

        Anyway, my confusion is, does “first” mean:

        1. first comment made to the blog, or
        2. first comment made each month/day, or
        3. 'first' defined by PIP as seen fit?
        

        Now, somewhat more jokingly: do you abide by your rule:

        1. first time you establish it, or
        2. first time each year/month, or
        3. first time you freshly define it to be 'first time'?
        

        I use light tone sometimes, but the issue addressed may almost always be serious.

        Thanks again, seriously.

        — RH

Trackbacks

  1. Baby Steps | Gödel's Lost Letter and P=NP
