Our Thoughts on P=NP
The Clay prize anniversary is soon.
SME keynote lecture source
Evelyn Lamb is a mathematician who is also a journalist. She has a blog called Roots of Unity on the Scientific American blog network. Also impressive is that she designed the AMS Page-a-Day Calendar on behalf of the American Mathematical Society. It is available from the AMS with member discounts and is filled with wonderful tidbits on math.
The other day she interviewed Ken and me on P=NP.
Ken just happened to be visiting me that afternoon in New York and we spoke to Evelyn about P=NP. The call was delayed because of equipment issues on both ends. Perhaps the fundamental problem is not P=NP after all, but making computerized equipment work. Oh well.
Evelyn is writing an article for the New Scientist magazine about P=NP. She said it was driven by the near 20th anniversary of the Clay Prizes. Recall there were seven of these, each with a million dollar prize. One, the Poincaré conjecture, was already solved. The others are still open—the million dollars is still there waiting for a brilliant researcher.
Questions and Answers
Here is our own rendition of some questions that came up. We did not keep a recording or notes on our end, and we have paraphrased and expanded some things that we said:
What is the update on P=NP?
Ken: The update is that there is no update. There is no recent progress on resolving P=NP; seemingly none this decade, I’d say. We are still light-years away from it, and you could even say the difficulty of even yea-much progress is discouraging. There are some conjectures that elaborate on P ≠ NP, including Unique Games and (S)ETH, but those two have become less clear.
Does the Clay prize help researchers?
Dick: I do not see that the prize gets anyone to work on P=NP.
Ken: I disagree. The prize does help people explain quickly what they are working on to others and why. This is quite valuable.
Could P=NP be undecidable?
Dick: Who knows. I note that known proofs that some combinatorial problem is unprovable in Peano arithmetic somehow rely on a function that grows too fast. The famous Ramsey problem is a perfect example. I do not see any way to get such a function in the P=NP area. Of course I could easily be wrong.
Ken: This is the subject by which Dick and I first came to know each other in the 1980s, for instance this paper of Dick’s with Rich DeMillo vis-à-vis one of mine. I now believe it is resolvable but will need deep and complex techniques.
Here I, Ken, went into the topic of “Natural Proofs,” as Dick did again later.
When will P=NP be solved?
Ken: We just discussed this on our blog. The upshot is that the conjecture has been open long enough—coming to its 50th anniversary if you date it by Steve Cook’s famous 1970–71 paper, 64 years if you date it by Kurt Gödel’s 1956 letter to John von Neumann—that it is going outside the range where data on other conjectures has any predictive value.
Dick: There is a related issue with P=NP. Perhaps we have guessed the wrong direction. Most believe that P is not equal to NP. But many conjectures were finally resolved when someone worked on the right direction.
Who believes P=NP vs. P ≠ NP?
Ken: Our friend Bill Gasarch has polled this three times since 2002. His article finds about 80% support for P ≠ NP, which he says runs higher among those who have worked more on the problem. I believe unequal, but Dick’s opinion is fluid, and Don Knuth recently said he takes the “equal” possibility seriously.
I (Ken) started hunting for how we’ve covered Knuth’s opinion on GLL—it seems only a brief mention here and in comments here—but Dick related hearing it in person from Knuth.
Why so Hard?
Ken: If you believe P ≠ NP, then it is hard because Nature—mathematical nature—does a bang-up job of making it seem like P = NP. Most instances of NP-complete problems are easy; so-called SAT-solvers have had much practical success. The larger issue is that nature has frustrated us from proving any nontrivial general lower bounds at all. You can allow an exponential-time algorithm to make exponentially long queries to NP, and yet no one has proved that the resulting language cannot be decided by linear-sized Boolean circuits. Ryan Williams needed a lot of effort to prove that this class does not have constant-depth poly-size modular-counting circuits, but those could be weaker in relevant ways than general linear-size circuits. And this class is a lot bigger than NP.
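The check-vs-search asymmetry behind these remarks can be sketched in a few lines. This is only an illustrative toy; the tiny CNF formula is made up for the example:

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is +v or -v (variables 1..n).
CNF = [[1, 2], [-1, 3], [-2, -3]]

def verify(cnf, assignment):
    """Checking a proposed solution is fast: linear in the formula size."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in cnf)

def brute_force(cnf, n):
    """Finding a solution naively takes up to 2^n tries."""
    for bits in product([False, True], repeat=n):
        a = {i + 1: b for i, b in enumerate(bits)}
        if verify(cnf, a):
            return a
    return None

solution = brute_force(CNF, 3)
assert solution is not None and verify(CNF, solution)
```

Verification is polynomial in the formula size; the naive search loops over all 2^n assignments, and that gap is exactly where the conjectured hardness lives.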
I then said another factor is that sometimes algorithms seem to make no visible progress until at the very end when they suddenly come up with a correct answer. Dick and I had tried to quantify a notion of progress. I then started talking about the “hardness versus randomness” phenomenon and the “Natural Proofs” barrier again (for which this 2016 paper by Ryan is a good reference) but Dick cut in with a nub of all these matters.
Dick: A key issue is what I call “Bait-and-Switch” (indeed, in a post on the first day of GLL). Suppose an output x is believed to be hard. Add a random string r to it. The result x ⊕ r is also random. One branch of an algorithm computing r and another working on x ⊕ r seem to have nothing to do with x. Yet when you XOR them bitwise you have x. This destroys any lower-bound argument that would be based on metering progress toward x.
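A toy sketch of the bait-and-switch. The function names and the use of XOR shares are just for illustration:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def bait_and_switch(x: bytes):
    """Split a supposedly hard output x into two random-looking shares."""
    r = secrets.token_bytes(len(x))   # branch 1: a uniformly random string r
    masked = xor_bytes(x, r)          # branch 2: x XOR r, also uniformly random
    # Neither share alone says anything about x, yet XORing them bitwise
    # recovers x exactly -- no "progress toward x" is visible in either branch.
    return r, masked

r, masked = bait_and_switch(b"hard output")
assert xor_bytes(r, masked) == b"hard output"
```

Any proposed progress measure would have to assign essentially zero progress to each branch separately, yet full progress to their trivial combination.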
Guessing wrong way?
Dick continued saying that this issue only affects the “not equal” position and maybe it’s a hint of people guessing the wrong way. This went back into some things that were said before, and then the call started winding up.
Things We Didn’t Say
We had made mental notes while walking back from lunch across the street in time for the call, but forgot some of them. To recycle an old saying, a mental note isn’t worth the paper it’s written on.
One of them was to remark on Gerhard Woeginger’s P Versus NP claims page and the relative balance of claims of “Equal” and “Not equal.” As of its last update in September 2016, the 116 listed claims are (by Ken’s count) divided 62 for “Equal,” 49 for “Not equal,” 3 for unprovable/undecidable, 1 claiming no claim, and 1 for both “Equal” and “Not equal.” It may be that “Equal” predominates because its logical formula begins with ∃ and it seems easier to imagine one has found a single algorithm that works rather than to exclude all algorithms, an infinite number of them.
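The asymmetry shows in the quantifier structure of the two claims. This is a schematic rendering, with A ranging over algorithms and k over constants:

```latex
\mathsf{P}=\mathsf{NP} \;\equiv\; \exists A\,\exists k\;\forall x:\;
  A \text{ decides SAT on } x \text{ in time } |x|^k + k,
\qquad
\mathsf{P}\neq\mathsf{NP} \;\equiv\; \forall A\,\forall k\;\exists x:\;
  A \text{ errs on } x \text{ or runs longer than } |x|^k + k.
```

Imagining one witness for the leading ∃ feels easier than ruling out every possible A.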
I (Ken) had intended to connect this and the P=NP poll results to our post two months ago about cases of open questions where one direction seems overwhelmingly supported both by evidence and sentiment. Whatever one thinks about the value of all the P-vs.-NP claims, they witness that P-vs.-NP is certainly not one of those cases.
Last, I had intended to mention the deepest evidence in favor of “not equal.” This is the sinuous thin line between hard and easy cases of problems. Going back at least to Thomas Schaefer’s great 1978 work classifying cases of SAT, we’ve been aware of a surprisingly sharp dichotomy between “hard” and “in P” with seemingly no buffer zone. And the line sometimes seems to fall capriciously. In the second half of this post we mentioned Jin-Yi Cai’s work on dichotomy and a case relevant also to quantum where counting solutions to a fixed family of quadratic mod-4 polynomials over ℤ₄ is easy, but counting the 0-1 solutions to the same polynomial is hard. For another instance of this widely-appreciated point, see Scott Aaronson’s discussion of 3SAT here.
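The razor-thin line can be made concrete on the easy side. Horn-SAT, where each clause has at most one positive literal, sits in P via unit propagation, while the visually similar 3-SAT is NP-complete. A minimal sketch; the clause encoding here is an illustrative choice:

```python
# One "easy" side of Schaefer's dichotomy: Horn-SAT is solvable in
# polynomial time by unit propagation.
def horn_sat(clauses):
    """Each clause is (positive_literal_or_None, set_of_negated_variables).
    Returns the minimal model as a set of true variables, or None if
    the formula is unsatisfiable."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for pos, negs in clauses:
            if negs <= true_vars:          # all negated vars are forced true
                if pos is None:
                    return None            # purely negative clause violated
                if pos not in true_vars:
                    true_vars.add(pos)     # unit propagation step
                    changed = True
    return true_vars

assert horn_sat([("x", set()), ("y", {"x"})]) == {"x", "y"}
```

Each pass either adds a variable or halts, so the run time is polynomial; drop the one-positive-literal restriction and no such argument is known.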
The point is that if P=NP (or counting is in P) then all of this finely filigreed structure would vanish—poof—an illusion all along. Like if the Mandelbrot Set became a smeary blob. But then we’d be left with: why did we have this illusion in the first place?
Open Problems
We suggested that she speak to some others—people more expert than we are. For this and other reasons the article may be quite different. We hope giving our own summary here is a help. What points would you add to it?
We also forgot to ask her about her own work in Teichmüller theory. Here are a couple of her articles on the ABC Conjecture. But that is not a Clay problem and is a subject for another decade.
[Edited 50 to 20]



I still believe the answer is here:
https://zenodo.org/record/3604362
https://www.academia.edu/39973754/P_versus_NP
https://www.preprints.org/manuscript/201908.0037
See the answer here:
https://zenodo.org/record/3605338
What points would you add to it?
I would add the problem presented by the Institute to win the prize: “Suppose that you are organizing housing accommodations for a group of four hundred university students. Space is limited and only one hundred of the students will receive places in the dormitory. To complicate matters, the Dean has provided you with a list of pairs of incompatible students, and requested that no pair from this list appear in your final choice.”
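Checking any proposed roster against the incompatibility list is the easy direction; the hard part is finding one among the astronomically many 100-student subsets. A minimal sketch in Python, where the names, pairs, and quota are all hypothetical:

```python
# Verifying a proposed roster is quick (quadratic in its size); searching
# through C(400, 100) subsets is the conjectured-hard direction.
incompatible = {("alice", "bob"), ("carol", "dave")}   # hypothetical list

def roster_ok(roster, incompatible, quota=100):
    """True iff the roster fills the quota and contains no forbidden pair."""
    s = set(roster)
    return len(s) == quota and not any(
        (a, b) in incompatible or (b, a) in incompatible
        for a in s for b in s if a != b
    )

assert roster_ok(["alice", "carol"], incompatible, quota=2)
assert not roster_ok(["alice", "bob"], incompatible, quota=2)
```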
1. How do I put together a list of 50 pairs of compatible students?
2. Are all the students on the list of incompatible pairs?
If this is not the case, I make a distinction between problematic-unproblematic students, where the problematic ones are those who are on the list of incompatible pairs and the unproblematic ones are those who are not on this list. Note that I have created a new list of unproblematic students and renamed the list of incompatible pairs as problematic.
3. Who are the unproblematic students? They are those who have no problems with anyone and no one has a problem with them. There must be at least one student who is not on the list of incompatible pairs.
4. What to do now? Add students from the other list (problematic) to the list of unproblematic students until 50 pairs of compatible students are formed. When they move to the new list they lose their “problematic” character because the pairs of incompatible students they were part of are broken.
5. So: If I have 100 unproblematic students, then I have the list of 50 compatible pairs.
If I have 90, then I only need 10 students from the other list to complete the pairs.
If I have 80, then I only need 20.
If 70, then 30.
If 60, then 40.
If 50, then 50.
If 40, then 60.
If 30, then 70.
If 20, then 80.
If 10, then 90.
If 1, then 99.
Thanks for the space and attention.
Important: The third set created by the distinction can also be an empty set. This does not prevent the operation of addition for forming 50 pairs of compatible students.
And a speculative question: What strange force would attract certain elements to an empty set and order them? The opposite seems simpler…
See https://twitter.com/joyce1
A minor comment: the reply to “Could P=NP be undecidable?” answers a different question, namely “Could P=NP be proven undecidable?” This is a common substitution, but it seems treacherous when reasoning about the undecidability of statements to replace “true” with “provable”.
Great point. This is a bit of a thicket. If P!=NP were like Riemann or Goldbach in terms of being provably refutable if false, then proving it undecidable would prove it true. There once was a serious claim of this. This was a last-minute edit from Dick’s original “unprovable” to “undecidable” because the sentence then seems to vote on P=NP versus P!=NP, whereas the post means “P=NP” as the problem. Dick’s reply is addressing one part of this thicket. My quoted reply is instead taking “undecidable” in the human modal sense: are we never going to resolve it?
Dear Regan, in line with this point, I think that: perhaps the way we see mathematical spaces and subspaces needs an update, I propose that they be considered possibilities, just as set theory organizes these same possibilities.
> She said it was driven by the near 50th anniversary of the Clay Prizes.
I think you rather mean 20th anniversary.
Dear Wolfgang Keller:
Oh my…I think you are right. I tried to see if we could argue we meant 20 in another radix, but even in radix 6 we are way too high. Thanks for the catch.
Best
Dick
This was also a last-minute edit—missed by both of us while conflating with the 50th anniversary of Cook’s paper.
If it is indeed P≠NP, it means that NP is completely different from P; in other words, the P vs NP problem may involve some important conceptual issues.
Maybe we can question the two current definitions of NP (NP is verifiable; NP is decidable): do the two definitions capture the essence of NP?
No, people have failed to grasp the essence of NP: NP=P=NPhard=NPcomplete. See my simplest supportive proof of it, on this site, this page.
My above-mentioned proof won’t appear on this site; pls see it on Twitter @koitiluv1842.
In short, all the P≠NP-proof-tries use DM (= diagonal methods), which I have nullified/annihilated @koitiluv1842: P=NP.
P = NP: the non-existence of one-way functions is true.
This might be the way to do it. Can we directly prove non-existence of OWF?
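A standard candidate illustrating the connection: multiplication is easy, while inverting it means factoring, which is believed hard. A toy sketch only, using naive trial division and making no security claim:

```python
def f(p: int, q: int) -> int:
    """The easy direction: multiplying two primes is fast."""
    return p * q

def invert(n: int):
    """Naive inversion by trial division: time grows exponentially in the
    bit-length of n, which is why f is a one-way-function candidate."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime (or 1): no nontrivial factorization

assert f(101, 103) == 10403
assert invert(10403) == (101, 103)
```

If P = NP, every such candidate would be efficiently invertible, so proving that no one-way function exists is indeed one route toward "equal."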
P = NP: non-existence of one-way functions is true.
P = NP if and only if the non-existence of one-way functions is true!!! …
Here, a self-reflective and hetero-reflective process occurs at the same time with non-problematic students. Some of this is noted in a letter from Gotthard Günther to Gödel on October 2, 1954: “Three-valued logic, however, requires the introduction of a second negation.” (See Kurt Gödel, Collected Works, Volume IV, 2014)
A brief summary for those who do not have access to these letters:
-In the classical tradition the thinking subject and the process of reflection do not count at all. That is, all categories of logic must, if they are to be true, be absolutely objectively definable. Everything “subjective” is simply to be eliminated.
-There are two fundamentally different forms of thought-independence. And accordingly two in principle different identities of the object with itself: irreflexive and reflexive identity.
-But the opposition between “irreflexive” and “reflexive” is itself the object of an (iterated) reflection. We thus obtain a third value, which designates the thought process that distinguishes between “irreflexive” and “reflexive”. (Gotthard Günther, Letters to Kurt Gödel, 1954)
Thanks for the patience.
Dear Héctor,
the subjective resides within the subspace of brain simulation, which is why our belief system today needs a validator so that we can find the correct mapping so that it is possible to achieve the objectivities of the physical world. Join our discussion at: https://www.quantamagazine.org/new-map-of-meaning-in-the-brain-changes-ideas-about-memory-20220208/ Thanks.
Hi Dick,
I invite you and the readers of this blog to join into a discussion on P vs NP:
https://www.academia.edu/s/8cfdfa6126?source=link
Feel free to comment, criticize or maybe share this result with others.
Summary: This forum presents another tool to prove that P is not equal to NP.
Awaiting your comments…
Frank
I was wondering if you could possibly sum up your “P ≠ NP proof,” please. Does it, too, use some DM (= diagonal method[s]) or any statement(s) equivalent with DM? If so, I’m afraid it fails to work; please see my other posts at this page and at other pages. (Google and Facebook WON’T somehow let me log in. Please see also Twitter @koitiluv1842.)
Mr. Koiti Kimura.
The site Academia requires an account, or you can just log in with a Facebook or Gmail account. Academia is a site similar to ResearchGate (I actually use both). My work presents a hypothesis which implies, if true, that some well-known NEXP-complete problem is in NP when P = NP. Since this is a contradiction (EXP is not equal to P), the proof shows that P is not equal to NP when the hypothesis is true. The good thing is that it may be easy to deduce that the hypothesis is true: see the comments of the forum.
I fail to understand what you have actually been meaning to say so far; you had written that you yourself had proved that P=NP in this document:
https://zenodo.org/record/3605338#.YhxX4KvP2yI,
but do you now intend to say that the above-mentioned “Forum” proved that P≠ (not equal) NP?
Mr. Koiti Kimura,
I have tried in both directions. This is a very hard problem, and I haven’t had real progress since 2007 when I started with naive attempts. I have gained experience all these years, but that is not a guarantee that the work is correct this time: you have to see it for yourself.
Let me repeat my entreaty to look over my own P=NP proof plus other two authors’ P=NP proofs that I posted on this website (plus Twitter @koitiluv1842 for other results of mine).
Reinaldo: I will try to read (and understand) that arduous article. I brought up Günther regarding the problem proposed by the Clay Institute. How do I create the list (or set) of 50 pairs of compatible students? As data I have the list of 400 students and the list (or finite set) of incompatible pairs of students. So I ask: are all 400 students in the list of incompatible student pairs? It seems that we have no data on this, or we do not know. Here the need to distinguish (Günther’s third value) with its negation (non-problematic students) makes its entrance. In this third list (or finite set) the “subjective” element comes into play to determine the extension of the set and to allow the operation of addition between the lists (problematic and non-problematic students). How does the Dean determine this hypothetical list of non-problematic students? By appealing to the subjective element: this student has no problems with all the other students, and all the other students have no problems with him. So, if I have 100 non-problematic students then I have the list of 50 pairs of compatible students, and so on…
P.S: I don’t know if this solution based in part on set theory will later be translated into a universal algorithm.
“P.S: I don’t know if this solution based in part on set theory will later be translated into a universal algorithm.”
The technical question would be the following: Given a data set S (400 students) and its subset IS (incompatible pairs of students): Can a universal algorithm (establishing a P-NP students distinction) determine in polynomial time the NP subset of non-problematic students by putting it in relation and function with the P (or IS) subset of problematic students? If this is possible the machine would also determine the CS subset of 50 pairs of compatible students in polynomial time (recall that they have to occupy 100 rooms for two persons each).
Clarification: Here, P-NP are the names of the subsets, and 50 is the number of rooms for a total of 100 students. Thanks
For your convenience, let me repeat this proof “summary”: P=NP .. ⇔ .. Profs. Lance Fortnow’s and Michael Sipser’s P≠NP “proofs”(-tries) ⇔ DM (= diagonal methods) . & . my DMd (= diagonal methods disproofs).
Dr./Prof. Frank Vega:
Also: DM ⇔ EXP ≠ P (in your assumption). Thus, after all, P=NP. Pls search for my DMd on this website.
As you all already know:
https://en.wikipedia.org/wiki/EXPTIME
also shows that: DM ⇔ EXP ≠ P.
“P.S: I don’t know if this solution based in part on set theory will later be translated into a universal algorithm.”
In 1963 (after the missile crisis and before the death of Kennedy), the University of Chicago published a book by the Russian mathematician Boris Trakhtenbrot, Algorithms and Automatic Computing Machines, where it is asked to what extent the concepts of Turing machine and functional scheme are general, and whether the procedure of presenting algorithms by means of functional schemes is universal in the sense that any algorithm can be given in this way. He then states the basic hypothesis of the theory of algorithms: “All algorithms can be given in the form of functional matrices and executed by the corresponding Turing machines.” (The Spanish version says: any algorithm can be presented by means of a Turing functional scheme and realized on the corresponding Turing machine.) If this is correct, then the same would be applicable to the implementation of a universal algorithm.
P.S.: I wish Mr. Lipton a speedy recovery.
I would like to add something more from this author in relation to the basic hypothesis of the theory of algorithms:
“All known algorithms which have been developed over the thousands of years of the history of mathematics can be written in the form of Turing functional matrices. But the hypothesis is not just a statement about algorithms which have been found in the past. It also has the quite different nature of a prediction concerning the future; it says that whenever any instructions are given as an algorithm, no matter what their form or elementary operations, it will be possible to give them as a Turing functional matrix.” (bold and italics are mine)
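The “functional matrix” picture is easy to make concrete: a table of (state, symbol) → (write, move, next-state) rows drives a Turing machine. A minimal sketch; this particular bit-flipping table is an invented example:

```python
# A minimal Turing-machine "functional matrix": each (state, symbol) row
# gives (write, move, next_state). This machine flips bits left to right
# and halts at the first blank.
TABLE = {
    ("s", "0"): ("1", +1, "s"),
    ("s", "1"): ("0", +1, "s"),
    ("s", "_"): ("_", 0, "halt"),
}

def run(tape_str):
    """Execute the table on the given input tape; return the final tape."""
    tape = dict(enumerate(tape_str))   # sparse tape, blank symbol "_"
    head, state = 0, "s"
    while state != "halt":
        write, move, state = TABLE[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

assert run("0110") == "1001"
```

Any finite table of this shape is one of Trakhtenbrot’s functional matrices; the hypothesis is that every algorithm, however presented, can be recast this way.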
I’m afraid that all present-math-based computers, including neuro-computers like brains, fail to solve the Frame Problem ⇔ The Ugly Duckling Theorem: they fail to judge/classify any info pieces either as relevant or irrelevant to the problem-solving at hand.
Implications:
1. Non-brain-function, God-given souls exist.
2. No all-powerful anti-malware vaccines can be made within decades to come, against my-Esst(= expressions-systems satisfaction theories)-aided (post-)Ransack (= [post-]RSA-codes-cracking hackings), until the perfection of a pattern-matching theory based on a dualist-physics including a neo-psychophysics that Descartes and Bernhard Riemann envisioned. = The proof of The Chapter 18 of St. John’s “Revelation”: the last Scripture in The New Testament.
Here, the first reference I find about a distinction algorithm (B. Trakhtenbrot, Chicago University, 1963). I take it from the Spanish translation (Los algoritmos y la resolución automática de problemas, 1977). Nowhere in the current scientific literature does the concept of “distinction” appear.
“The deducibility problem may be formulated as follows:
For any two words (formulas) R and S in a logical calculus, determine whether or not there exists a deductive chain from R to S.
The solution is understood in the sense of an algorithm giving an answer to any question of this type (with whatever R and S). Now it is not difficult to see that the creation of such an algorithm would make it possible to obtain a general solution method for the automatic treatment of the most varied problems of all mathematical theories constructed on the basis of axioms. Indeed, the validity of this or that proposition S (for example, the statement of some theorem) is understood in this theory as the possibility of logical deduction from a system of axioms taken as premise R. Then, by applying the algorithm of distinction of the possibility of deduction, one could establish whether or not the statement S is valid in the theory under consideration.”
P.S.: The English version translates this as “The problem of deducibility.”
The Spanish version: “The problem of the distinction of the possibility of deduction.”
Remember that the Cook–Levin theorem says, in essence, that satisfiability is NP-complete: some problems are easy to formulate and easy to check, yet may require so many computations that no feasible machine can solve all their instances.
We know that no student can be a pair with himself, and it is excluded that all students have problems with everyone. Therefore, all students in the IS subset have at least one student (or more) with whom to form a compatible pair. If we use distinction (a set-formation process) as a method of reduction and deduction, we reach the following conclusion:
We will call Problematic (PS) the subset of students who belong to the subset of Incompatible pairs (IS), and Non-problematic (NPS) the subset of students who do not belong to IS. Then, if I have x students in PS, I have 400 − x students in NPS, and then, if necessary, I add students from IS to NPS to form the 50 pairs in CS.
The solution up to 350 problematic students is easy. Only from 351 to 400 problematic students do you have to select (in IS) successively (one by one) x students, each with their respective “at least one,” to form 50 compatible pairs. Then…
PS → (NPS + PS + IS) → CS (in pairs)
If 100 → 300 → 0 → 0 → 50
If 200 → 200 → 0 → 0 → 50
If 300 → 100 → 0 → 0 → 50
If 310 → 90 + 10 → 0 → 50
If 320 → 80 + 20 → 0 → 50
If 330 → 70 + 30 → 0 → 50
If 340 → 60 + 40 → 0 → 50
If 350 → 50 + 50 → 0 → 50
If 351 → 49 + 49 + (1+1Alo) → 50
If 352 → 48 + 48 + (2+2Alo) → 50
If 353 → 47 + 47 + (3+3Alo) → 50
If 354 → 46 + 46 + (4+4Alo) → 50
If 355 → 45 + 45 + (5+5Alo) → 50
If 356 → 44 + 44 + (6+6Alo) → 50
If 357 → 43 + 43 + (7+7Alo) → 50
If 358 → 42 + 42 + (8+8Alo) → 50
If 359 → 41 + 41 + (9+9Alo) → 50
If 360 → 40 + 40 + (10+10Alo) → 50
If 370 → 30 + 30 + (20+20Alo) → 50
If 380 → 20 + 20 + (30+30Alo) → 50
If 390 → 10 + 10 + (40+40Alo) → 50
If 400 → 0 + 0 + (50+50Alo) → 50
IS: Incompatible Students
PS: Problematic Students
NPS: Non-Problematic Students
CS: Compatible Students
Alo: At least one
Thanks for the attention.
A better visualization of the picture here: “P=NP-Proof”
Gregory Chaitin says that there are extreme cases where mathematical truth has no structure at all, where it is completely accidental. Hence he deduces that randomness lacks structure. Here, on the other hand, randomness (which is only at the beginning, when we choose the first student with his at least one) has structure.
Thank you so much for your proof.
In connection with your above comment, as for Gregory Chaitin’s chaos/randomness theory (and its corollary macro-evolution theory) using his Ω-number, too, I have nullified them with my DMd (= diagonal methods disproofs) ⇔ Amir Alexander’s and my own criticisms against the notion of actual infinitesimals (= the self-contradictory end-/edge-digits of actually-infinite/endless/edgeless fractions) ⇔ the non-existence of any fractions with the “DM-proved” un-computability (= randomness which is said to be no computable quasi-randomness).
This concrete perfection/enrichment of the details of Shor’s algorithm proves P=NP:
https://twitter.com/republicofmath/status/1522220263856816128
The criticisms against the assertion that quantum-computing and P=NP-proving Shor’s algo is infeasible:
https://twitter.com/koitiluv1842/status/1541337919272538112