
Make A Trillion Dollars

December 31, 2021


For every problem, there is one solution which is simple, neat, and wrong — H. L. Mencken

2019 announcement

Lance Fortnow was a colleague of mine for many years at Georgia Tech and is now the Dean of the College of Computing at Illinois Institute of Technology. He started one of the top blogs in the theory of computing, Computational Complexity, which he writes in partnership with Bill Gasarch of the University of Maryland at College Park.

Today, New Year’s Eve, we discuss his angles on the status of P versus NP.

Lance has written a new survey on the current status of the P versus NP problem. It is titled, “Fifty Years of P vs. NP and the Possibility of the Impossible.” By this he means two things:

  • The believed impossibility of solving NP-hard problems is a valuable source of “computational friction.” The analogy is that without physical friction, movement would be too easy—we would be unable to control it.

  • The seeming impossibility of resolving the P=NP question has redistributed focus to coping with NP-hardness and finding ways around it. Indeed, this impossibility may be technically bound to the ability of heuristic approaches to achieve practical success.

Well, as theorists, Ken and I would like to know about the status of resolving the question.

2022 and Beyond

I am old enough that I am unlikely to see the resolution of P=NP. But that does not stop me from thinking about what could be coming down the pike. Lance describes the importance of this great problem—it is roughly fifty years from its first statement by Stephen Cook, and then Leonid Levin and Dick Karp—and reviews what can be called progress on it. But as the illustration in his article shows, we are still at a crossroads about what kind of computational world we live in.

We are just about to enter the new year—2022. This could be the year that resolves P=NP, or the resolution could be another fifty years away. Another measure: the Clay Millennium Prize of a million dollars for solving it is now entering its third decade. Yet there seems to be no hint of whether we are close to a resolution or not.

Why P=NP Is So Hard

There is a fundamental issue with the P=NP problem. Yes, it is hard. Yes, we seem to have no clear path to resolving it. We do not even have a clear path to a partial result. But that does not really explain why the problem is so hard. Let’s try and look at why the problem is so impossible.

The reason I believe the problem is so difficult is the inherent nature of algorithms for NP problems. This nature makes the resolution especially difficult, and especially interesting. Consider the problem of finding some {X} so that

\displaystyle  f(X) = b

Here {f(X)} is an easy-to-compute function. The difficulty is that finding {X} given {b} is hard. Think of {f(X)} as some crypto-type problem, or if you wish some search-type problem.
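To make this concrete, here is a minimal Python sketch of the asymmetry being described. It is a toy illustration of ours, not anything from Lance’s survey: the scrambling function below is a hypothetical placeholder, not a real cryptographic map. Evaluating {f} is cheap, but the only obvious way to recover an {X} from {b} is to try all {2^n} candidates.

    from itertools import product

    def f(x):
        # An easy-to-compute function on n-bit strings: a toy stand-in
        # for a crypto-type or search-type map (placeholder, not secure).
        y = 0
        for bit in x:
            y = ((y * 31) + bit) % (1 << len(x))
        return y

    def invert_by_search(b, n):
        # Find some X with f(X) = b by brute force. Evaluating f is
        # polynomial; this search is 2^n evaluations in the worst case.
        for candidate in product([0, 1], repeat=n):
            if f(candidate) == b:
                return candidate
        return None  # no preimage exists

    n = 20
    x = (1, 0) * 10                # some secret input
    b = f(x)                       # the forward direction is easy
    print(invert_by_search(b, n))  # the backward direction is the hard one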

Why is this such a nasty problem? The obvious issue is how hard it is to map a {b} to an {X} so that

\displaystyle  f(X) = b

But what exactly does this mean? The first issue is when must we be able to do this? The second is how must we be able to do this? Let’s look at these in turn.

When? The obvious answer is always. We would love to be able to find {X} for each {b} that has a solution. This corresponds to being able to solve all problems. Clearly the best case. But in practice we might be quite happy with:

  • For most {b} find an {X};

  • Or even, for many {b} find an {X}.

The latter could be: at least {50\%} of the time we can find the {X}. In practice this could be more than enough. Think of breaking a secret cryptosystem: if we could break even a positive fraction of the messages, that could easily be enough.
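As a hedged sketch of this relaxation, reusing the toy {f} from above: the heuristic below tries only inputs of low Hamming weight, so it examines polynomially many candidates instead of {2^n}, and we simply measure what fraction of random targets it inverts. For this toy the rate is tiny, but the shape of the trade is the point: even a modest success rate against a real cryptosystem would be devastating.

    import random
    from itertools import combinations

    def f(x):
        # same toy placeholder function as above
        y = 0
        for bit in x:
            y = ((y * 31) + bit) % (1 << len(x))
        return y

    def heuristic_invert(b, n, max_weight=3):
        # Try only inputs with at most max_weight ones: roughly n^3
        # candidates rather than 2^n, so it succeeds only on some b.
        for w in range(max_weight + 1):
            for ones in combinations(range(n), w):
                x = tuple(1 if i in ones else 0 for i in range(n))
                if f(x) == b:
                    return x
        return None

    n, trials, hits = 16, 200, 0
    for _ in range(trials):
        secret = tuple(random.randint(0, 1) for _ in range(n))
        if heuristic_invert(f(secret), n) is not None:
            hits += 1
    print(f"heuristic success rate: {hits/trials:.0%}")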

How? The obvious answer is with an efficient algorithm. We would love to be able to run a standard polynomial algorithm that finds {X} from {b}. But in practice there are at least two issues:

  • The standard algorithm could be replaced by algorithms that break the standard definition of algorithms. They could, for example, be quantum algorithms or DNA algorithms.

  • The algorithms could be standard conventional algorithms. But they might run in time bounds that are exponential but tricky. For example, suppose that there is a solution that runs in time

    \displaystyle  2^{n/C}

    If {C} is of modest size, then the solution is exponential in practice. But if {C} is large, then the algorithm would be efficient for real problems.

The latter is interesting. If {C} is any constant, no matter the size, you would not collect the million-dollar prize for resolving P=NP, since the time bound is still exponential. But if {C} is large enough, you could still make a trillion dollars. This is because you could use the method to solve realistic-sized problems.
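Some back-of-envelope arithmetic, with numbers of our own choosing, shows how sharply the constant {C} matters at a realistic size such as {n = 10,000}:

    # How the constant C changes 2^(n/C) at a realistic size n = 10,000.
    # The bound stays exponential for every constant C, yet for large C
    # the actual step count becomes trivially small.
    n = 10_000
    for C in [10, 100, 1_000, 5_000]:
        e = n / C
        print(f"C = {C:>5}: 2^(n/C) = 2^{e:g} = about {2.0 ** e:.3g} steps")

    # C =    10: 2^1000 -- about 1e301 steps, hopeless
    # C =   100: 2^100  -- about 1e30 steps, still hopeless
    # C =  1000: 2^10   -- 1024 steps, trivial
    # C =  5000: 2^2    -- 4 steps, instant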

Enter A Trillion Dollars

The following image hopefully helps set the stage for what a trillion dollars looks like, compared to a million.

2011 source, origin untraceable?

The reason for showing this is that if P=NP has better “when” and “how” solutions, some could yield attacks that make immense amounts of money. For example, an algorithm that breaks {50\%} of the problems in a practical amount of time would be worth a fortune.

So if you could show P=NP, even in some practical way that is better than what we can do now, you could be richer than Auric Goldfinger or some other James Bond mastermind. We might all, however, end up in the poorer world that these villains would have created.

What has grown over the past fifty years—and the past decade in particular—is the number of ways to make a trillion dollars by not resolving P versus NP. Discussion of these occupies the latter sections of Lance’s survey, where a “payoff” would ordinarily come in a review article.

Open Problems

What do you think about P=NP? Could the solvers win much more than the one million dollar prize? What would you do if you made progress? Is the real win the way the question acts as a ridge framing a valley that guides the flow of progress?

13 Comments
  1. Frank Vega
    December 31, 2021 7:04 pm

    This problem is very, very hard. I suspect that feasible algorithms have already been found. For example, this one (number 51 on The P-versus-NP page):

    1. [Equal]: In March 2009, an ICIMAF technical report called “P=NP” was published, with ISSN 0138-8916. ICIMAF is the Institute of Cybernetics, Mathematics and Physics in Cuba, and belongs to CITMA (the Cuban Ministry of Science, Technology and Environment). This report was announced in March 2009 in the usenet newsgroup comp.theory, but without providing any link to an electronic version. The author also mentions that he had actually already resolved the P versus NP problem in October 1985. The result was published in the proceedings of the ININTEF (Institute of Fundamental Technical Research, Cuban Academy of Science) Scientific Conference that took place around that time at the Capitol (Havana, Cuba). The paper is “Método de solución para sistemas de ecuaciones simultáneas sobre un Campo de Galois y aplicaciones en Inteligencia Artificial” (Solution method for systems of simultaneous equations over a Galois Field and Artificial Intelligence applications), 1985 Annual Report, Vol. II, S2-25, p. 274, Cuban Academy of Science Edition. As part of that paper, a polynomial-time algorithm is given that resolves an NP-complete problem.

    Note that ICIMAF and the Cuban Academy of Science are very prestigious organizations in Latin America. There are other similar claims in other countries such as Russia, etc. None of them can be read on the internet.

    • CVS
      January 13, 2022 2:52 pm

      P vs. NP has not been solved, Frank.

  2. kamouna
    December 31, 2021 8:34 pm

    Dear Richard Lipton,

    Dear Lance,
    The following was posted to Lance and Bill. It certainly relates to you.
    You should have mentioned that the Clay Mathematics Institute has voted unanimously: “No Confidence to ALL mathematics journals”. This is one of the remarkable decisions in the history of mathematics. Why did they revamp their rules entirely? What is the rationale? It’s worth discussing, isn’t it?

    The social constructivist view of the philosophy of mathematics is that it is like “English Common Law”. Below is an illustration comparing SAT and Factoring. Obviously they are incomparable, incompatible. The former is about “True” and “False”, while the latter is about numbers. The EXISTENCE of both is a different EXISTENCE.

    If you ask me what the difference between the two existences is, the answer is: “A dress which is tailored of 26 letters (the English language) must be lacking.” Or, as Billy Ocean said in his song (Suddenly): “When a thousand words are not enough to say what I feel inside”.

    Even René Descartes, the founder of Modern Thought, “Le Fondateur de la Pensée Moderne”, told them: “I think, therefore, I am” (“Je pense, donc je suis”). This happened when he doubted everything, absolutely everything; he doubted his own existence. At the end, he realized that he MAY NOT doubt the fact that he is doubting. Hence: “I think, therefore, I am”.

    He concluded that he made his own existence (the motivation to drink water, or the motivation to run away from a lion), he made his existence a PROPERTY of his doubt!

    While doubting should have been a PROPERTY of his own (denied) Existence.

    I leave it for you to extrapolate this about the existence of numbers vs. the existence of Truth and Falsity.

    Below are some thoughts that were sent to your earlier thread.

    Best,

    Rafee Kamouna.

    ================================================
    SAT is a logical problem, while factoring is a mathematical problem. Mixing both types MUST lead to a contradiction. This was trivially observed by the ancients even before Aristotle. The ancients had the following Philosophy Levels:

    1. Divinities, or the First Philosophy.
    2. Logic, the Upper Philosophy.
    3. Mathematics, the Middle Philosophy.
    4. Physics, the Lower Philosophy.

    Integer Factoring never involves a notion of truth.

    I sent to you both Bill and Lance copies of My P vs. NP Claim which was sent to the CMI President as well as all Scientific Advisory Board members.

    Just input any paradox in any language (formal or natural), and the Cook-Levin theorem collapses. I re-state it:

    \forall w\in L\in NP \exists M(w)=SAT iff M accepts w

    Take the meaning of w to be any paradox; you will never be able to reduce it to SAT. Please try it with pencil and paper.

    It’s impossible to tell the sun to leave the sky.
    It’s impossible to ask a baby not to cry.

    Can the ocean,
    Keep from rushing to the shore,
    It’s impossible. Courtesy of Perry Como

    I’m sure that Lance remembers that he refereed my 2008 JACM paper and my 2009 ACM ToCT paper while Bill was the Complexity Theory area editor of JCSS, when its Managing Editor refused to revise the 2 papers. He then declined the decision and asked me to select reviewers. I selected (Great Cook and Great Karp); then the Managing Editor declined again.

    Enter the following as an input, then tell me what happens to the Cook-Levin:
    1. Thanks God, I’m an atheist.
    2. No for insulting the Idiot president.
    3. This sentence contains letters more than itself.
    4. The penalty that scored the goal before reaching the goal.

    Richard Lipton calls these sentences: “Self-Defeating Sentences”. When it comes to any of the paradoxes in my papers, he must avoid them.

    Deolalikar’s proof (11 years old) was an opportunity for him while mine is a threat.

    Best,
    Rafee Kamouna

    • Bhupinder Singh Anand
      January 2, 2022 2:47 pm

      Dear Rafee,

      I shall not claim to follow your reasoning sufficiently to comment upon it.

      However, you make a pertinent point regarding the need to question the assumed categorisation of SAT—as either in P or in NP—in the absence of a constructive, evidence-based, definition of arithmetical truth (which is clearly not possible if the two classes are defined set-theoretically).

      Kind regards,

      Bhup

  3. Bhupinder Singh Anand
    December 31, 2021 9:19 pm

    The paper `The truth assignments that differentiate human reasoning from mechanistic reasoning’, which appeared in the December 2016 issue of Cognitive Systems Research, considers the philosophical challenge that arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for evidencing such acceptance.

    It then defines:

    Definition 1 (Algorithmic verifiability): A number-theoretical relation F(x) is algorithmically verifiable if, and only if, for any given natural number n, there is an algorithm AL(F,n) which can provide objective evidence for deciding the truth/falsity of each proposition in the finite sequence {F(1), F(2), …, F(n)}.

    Definition 2 (Algorithmic computability): A number-theoretical relation F(x) is algorithmically computable if, and only if, there is an algorithm AL(F) that can provide objective evidence for deciding the truth/falsity of each proposition in the denumerable sequence {F(1), F(2), …}.

    Note that algorithmic computability implies the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions, whereas algorithmic verifiability does not imply the existence of an algorithm that can finitarily decide the truth/falsity of each proposition in a well-defined denumerable sequence of propositions.

    The paper then proves that:

    Corollary 8.3: In any model of PA, Goedel’s arithmetical formula [R(x)] always interprets as an algorithmically verifiable, but not algorithmically computable, tautology over N.

    So, if the arithmetical relation R(n) is algorithmically verifiable as a tautology, but not recognisable as a tautology by any Turing-machine, then it is trivially true logically that P=/=NP, since that immediately entails SAT is neither in P nor in NP.

    The reason this argument is not evidenced set-theoretically is that standard, set-theoretical interpretations of the formal definitions of the classes P and NP do not admit the relative strengths and limitations of first order set theories and first order arithmetics (properties which are finitarily entailed by the Provability Theorem for PA of the cited CSR paper).

    Such interpretations are thus liable to implicitly assume—falsely, as evidenced by Goedel’s reasoning—that every propositional formula which is algorithmically verifiable is necessarily algorithmically computable.

    It would then follow that the set-theoretical separation between the classes P and NP in terms of computational complexity is only expressed non-constructively by assuming that:

    (i) the existence of the class P can be well-defined as consisting of all, and only, those problems that can be solved in polynomial time by a deterministic Turing machine; and

    (ii) the existence of the class NP can be well-defined as consisting of all, and only, those problems that can be solved in polynomial time by a non-deterministic Turing machine.

    However, such a perspective cannot admit that the differentiation between the classes P and NP:

    (a) when defined non-constructively in a set theory such as ZF (Cook’s definitions), and

    (b) when defined constructively in an arithmetic such as PA (in terms of algorithmic computability/verifiability);

    is also qualitative; and that such distinction cannot be well-defined set-theoretically in terms of only computational complexity.

  4. Maksim
    January 1, 2022 11:53 pm

    Just curious: has anyone made a 1-April-style proof of P=NP using division by zero? Is there a rating of most curious math results that were obtained by a creatively hidden trap like that?

  5. Shuxue Jiaoshou
    January 2, 2022 8:30 pm

    What’s a million US dollars these days? N-O-T-H-I-N-G. Impossible to buy even a one-bedroom apartment. They must raise the prize money; otherwise you cannot motivate young researchers these days.

  6. Bhupinder Singh Anand
    January 3, 2022 3:05 am

    Dear Professor Lipton,

    Apropos your remark that:

    “There is a fundamental issue with the P=NP problem. Yes, it is hard. Yes, we seem to have no clear path to resolving it. We do not even have a clear path to a partial result. But that does not really explain why the problem is so hard. Let’s try and look at why the problem is so impossible.”

    Hasn’t Goedel already addressed—and seemingly explained—this in his 1951 Gibbs lecture, where he sought to distinguish between universal propositions that are algorithmically verifiable as always true, but not algorithmically computable as always true?

    “I wish to point out that one may conjecture the truth of a universal proposition (for example, that I shall be able to verify a certain property for any integer given to me) and at the same time conjecture that no general proof for this fact exists. It is easy to imagine situations in which both these conjectures would be very well founded. For the first half of it, this would, for example, be the case if the proposition in question were some equation F(n) = G(n) of two number-theoretical functions which could be verified up to very great numbers N.”

    Isn’t Goedel implicitly suggesting that his formally undecidable arithmetical proposition [R(x)]—which is unprovable in PA, even though [R(n)] is numeral-wise provable in PA—may be one such?

    If the distinction sought to be made by Goedel—between algorithmic verifiability and algorithmic computability—entails that provability in PA is equivalent to Turing-computability (see the Provability Theorem 7.1 for PA in [An16]), wouldn’t it then follow that SAT is neither in P nor in NP?

    If so, the reason for the PvNP problem being so hard may be that the set-theoretically well-defined classes P and NP may not be well-defined arithmetically (as would be necessary to assign well-defined (constructive) truth values when addressing SAT).

    Kind regards,

    Bhup

    [An16] The truth assignments that differentiate human reasoning from mechanistic reasoning: The evidence-based argument for Lucas’ Goedelian thesis. Cognitive Systems Research, Volume 40, December 2016, pp. 35-45.

    doi:10.1016/j.cogsys.2016.02.004.

  7. Bhupinder Singh Anand
    January 3, 2022 6:12 am

    Dear Professor Lipton,

    In continuation of my previous post, Goedel’s conjecture is also implicit in Turing’s remarks in his seminal 1936 paper on computable numbers:

    “The computable numbers do not include all (in the ordinary sense) definable numbers. Let P be a sequence whose n-th figure is 1 or 0 according as n is or is not satisfactory. It is an immediate consequence of the theorem of §8 that P is not computable. It is (so far as we know at present) possible that any assigned number of figures of P can be calculated, but not by a uniform process. When sufficiently many figures of P have been calculated, an essentially new method is necessary in order to obtain more figures.”

    … Turing: ([Tu36], §9(II), p.139).

    The need for placing such a distinction on a formal basis has also been expressed explicitly on occasion. Thus, Boolos, Burgess and Jeffrey define a diagonal function, d, any value of which can be decided effectively, although there is no single algorithm that can effectively compute d.

    Now, the straightforward way of expressing this phenomenon informally would be to say that there are constructively well-defined number-theoretic functions that are effectively computable ‘instantiationally’, but not ‘algorithmically’. However, as the authors quizzically observe, such functions are labeled as uncomputable!

    “According to Turing’s Thesis, since d is not Turing-computable, d cannot be effectively computable. Why not? After all, although no Turing machine computes the function d, we were able to compute at least its first few values. For since, as we have noted, f1 = f2 = f3 = the empty function we have d(1) = d(2) = d(3) = 1. And it may seem that we can actually compute d(n) for any positive integer n—if we don’t run out of time.”

    … Boolos/Burgess/Jeffrey: Computability and Logic, p.37.

    The reluctance to treat a function such as d(n)—or the function Omega(n) that computes the n’th digit in the decimal expression of a Chaitin constant Omega—as ‘computable/verifiable’ in some sense, on the grounds that the ‘time’ needed to ‘compute/verify’ it increases monotonically with n, is curious; the same applies to any total Turing-computable function f(n).

    The only difference is that, in the latter case, we ‘know’, or are willing to accept as reasonable, that there exists a common ‘program’ of constant length that will compute f(n) for any given natural number n; in the former, we know we may need distinctly different programs for computing f(n) for different values of n, where the length of the program may sometimes reference n.

    Kind regards,

    Bhup

  8. Paolo
    January 3, 2022 2:28 pm

    Maybe one should focus on P vs NP for N < 10^100 instead of the general case; it’s more useful in practice too.

  9. Ivan Gusev
    January 10, 2022 12:49 pm

    In my Logic of mutual determinations every seemingly discontinuous transformation is actually continuous, and I call this “structural inertia”. Every structure has its own topology, and all possible transformations have inertia relative to a certain abstract state that has a different topology. So, your guess is actually not random even if it is. It allows me to find super sparse subsets of actuations of inner states of one function by another. I will explain it soon. I will show you that Jensen polynomials have “polynomial inertia” and that we can reduce this to a statement that RH is true. I will publish the algorithm for the supercomputer to prove RH in a way inconceivable for humans but one that is mathematically correct. It will be mindblowing. And I will be a villain, eventually.

  10. January 19, 2022 9:56 pm

    Prof Lance Fortnow’s P≠NP proof try, as well as Prof Michael Sipser’s try and others’ tries, relies on DM (= diagonal-methods), which I have refuted with my DMd (DM disproofs). And I have other “active/offense( = as opposed to passive/defense [= DMd-line] )” P=NP proofs including the one that also proved NP = NP complete = NP hard = P.
    See @koitiluv on Twitter.com including my Profile-link.

  11. February 6, 2022 7:10 am

    These are just two examples of my fool-proof diagonal-methods-disproofs:
    1. https://twitter.com/koitiluv1842/status/1484007515692023809
    2. https://twitter.com/koitiluv1842/status/1445570173109293070
