
Predictions We Didn’t Make

January 2, 2018


With wishes for a memorable New Year 2018

Muhammad Afzal Upal is Chair of the Computing and Information Science Department at Mercyhurst University. He works in machine learning and cognitive science, more specifically on making inferences from textual data. In joint papers he has refined a quantitative approach to the idea of postdiction originally suggested by Walter Kintsch in the 1980s.

Today we review some postdictions from 2017 and wish everyone a Happy New Year 2018.

In a 2007 paper, the postdictability {\text{post}(C)} of a concept {C} is defined as “the ease with which a concept’s inclusion in the text can be justified after the textual unit containing that concept has been read.” This contrasts with {\text{pre}(C) =} “the ease with which the occurrence of the concept can be predicted prior to the concept having been read.” The main equation defines the extent {M(C)} to which the concept—or event, I may add—is memorable by

\displaystyle  M(C) = K\cdot L(C)\cdot(\text{post}(C) - \text{pre}(C)),

where {L(C)} is the prior likelihood of {C} emerging and {K} is a constant. It says that the concept {C} is most memorable if you couldn’t have predicted it but after you see it you slap your forehead and say, “Ah, of course!” It relates to what makes ideas “stick.”
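To make the formula concrete, here is a minimal sketch in Python. The function and all the numbers are ours for illustration only—Upal’s papers fit {K} and the component scores empirically:

    # Illustrative sketch of the memorability formula
    # M(C) = K * L(C) * (post(C) - pre(C)); all values are made up.

    def memorability(prior, post, pre, K=1.0):
        """Return M(C) from prior likelihood L(C), postdictability
        post(C), and predictability pre(C), each scaled to [0, 1]."""
        return K * prior * (post - pre)

    # Hard to predict but obvious in hindsight: highly memorable.
    print(memorability(prior=0.5, post=0.9, pre=0.1))  # 0.4
    # Fully predictable: no forehead slap, no memorability.
    print(memorability(prior=0.5, post=0.9, pre=0.9))  # 0.0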

Mercyhurst is in Erie, Pennsylvania. Erie had lots of snow this past week. Record-breaking snow. More than Buffalo usually gets. We had several relatives and friends who had to drive through it on their way to Michigan and Pittsburgh and points further south. Was that karma? A coincidence with this post? Memorable in a way that fits the framework?

And how about the Buffalo Bills making the playoffs after a miracle touchdown by the Cincinnati Bengals on fourth-and-12 from midfield in the final minute knocked out the Baltimore Ravens? In designing a two-week unit on data science for my department’s new team-taught Freshman Seminar on “The Internet,” I had the foresight to use years-since-a-team’s-last-playoff-win (not last playoff appearance) as the definition of “playoff drought” in activity examples. Hence—unless the Bills upset the Jacksonville Jaguars next Sunday—the local “nudge” of my materials will work equally well for next fall’s offering. Can one quantify my prescience as prediction? postdiction? Let’s consider some more-germane examples.
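Incidentally, the two candidate definitions differ by a single field in the data. A minimal sketch of the activity’s computation, with placeholder years rather than verified records:

    # Placeholder numbers for illustration; not verified team records.
    bills = {"last_playoff_win": 1995, "last_playoff_appearance": 1999}
    season = 2017

    drought_by_win = season - bills["last_playoff_win"]                 # 22
    drought_by_appearance = season - bills["last_playoff_appearance"]   # 18
    print(drought_by_win, drought_by_appearance)

Using wins rather than appearances is what keeps the drought alive unless the team actually wins a playoff game—hence the reusability of the materials.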

Some Standing Predictions

Last January we did not do a predictions or year-in-review post as we had done in all seven previous years. We were caught up in questions over László Babai’s graph isomorphism algorithm and other matters. Several predictions were recurring, so let’s suppose we made them also for 2017:

  1. No circuit lower bound of 1000n or better will be proved for SAT: Well, that’s a freebie, but note the point that P=NP would imply strong circuit lower bounds elsewhere.

  2. A computer scientist will win a Nobel Prize. No—indeed, less close than in other years.

  3. At least five claims that {\mathsf{P}=\mathsf{NP}} and five that {\mathsf{P} \neq \mathsf{NP}} will be made. We know of one of the latter and will say more about it below. Alas, Gerhard Woeginger’s page tracking these claims was not updated in 2017. We wish him the best.

  4. A “provably” secure crypto-system will be broken. For this one we don’t have to check any claims. We just pocket the ‘yes’ answer. Really, could you ever prove the opposite?

  5. An Earth-sized planet will be detected orbiting within the habitable zone of its single star. In 2017 they came in bunches.

  6. A Clay problem will be solved, or at least notable progress made. We are saddened to note that the institute’s founder Landon Clay passed away in August. For 2016 we might have noted a correspondence shown between part of the Yang-Mills problem and the ABC Conjecture. Burt Totaro wrote a nice survey last June about progress on the Tate Conjecture with implications for the Hodge Conjecture and the Birch and Swinnerton-Dyer problem. There was an interesting new approach to the Riemann Hypothesis via quantum Hamiltonians published last spring. Some very recent news on the Navier-Stokes equations might instead be a kind of “regress.” Maybe we should look for progress on our own problem, P=NP? You can tell us how notable it has been.

Postdictions From 2017

Since some of our perennial questions have entered a steady state, it is time to find new categories. A week-plus ago we noticed that Altmetric publishes a top-100 list based on their “Altmetric Attention Score” every November 15. So it is natural to suppose we postulated:

  1. A paper in computational complexity or algorithms will make the Altmetric Top 100 list.

The answer with regard to the 2017 list is “yes” but the reason is unfortunate—it is Norbert Blum’s incorrect {\mathsf{P \neq NP}} paper coming in at #38. Blum gave a formal retraction and subsequent explanation, which we added as updates to our own item on the claim. The only (other) paper labeled “Information and Computer Sciences” is the AlphaGo Zero paper at #74. Actually, Blum’s paper was tagged “Research and Reproducibility.”


  2. An algorithm with zero supervision will unexpectedly demonstrate superiority in a human-centric capability.

AlphaGo Zero and most recently AlphaZero spring to mind. With much swallowing of pride from my having started out as a chess-player in the early 1970s when computers were minimal, I’m not sure that games of perfect information should ultimately be regarded as “human-centric.” Based on my current understanding of the AlphaZero paper and comments by Gwern Branwen in our previous post, what strikes me as the most stunning fact is the following:

Chess can be encoded into a ‘kernel’ {K} of well under 1GB such that {K} + small search comprehensively outperforms an almost 1,000x larger search.
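To unpack “almost 1,000x”: the AlphaZero preprint reports Stockfish searching on the order of 70 million positions per second to AlphaZero’s 80 thousand. Taking those figures at face value,

\displaystyle  \frac{7\times 10^{7}}{8\times 10^{4}} \approx 875,

so the neural-network “kernel” substitutes for nearly three orders of magnitude of raw search.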

More on the human-centric side, however—and allowing supervision—the greatest surprise and attention seem to have gone to the 2017 Stanford study adapting off-the-shelf facial-analysis software to distinguish sexual orientation from photographs with accuracy upwards of 90%, compared to human subjects at 52% from a balanced sample, which is barely better than random. For utility we would nominate LipNet, which achieves over 95% accuracy in lip-reading from video data, but the paper dates to December 2016.

The lip-reading success may be the more predictable. The extent to which it and the “gaydar” application are postdictable appears to be the same as our reaching a community understanding of what deep neural nets are capable of—which does not require being able to explain how they work. Setting up grounds beforehand for the justification by which Upal and his co-authors define postdiction might be a fair way of “giving credit for a postdiction.”


  3. Advances on the P=NP question will be channeled through the concept of dichotomy.

Per Lance Fortnow in his own 2017 review, the complexity result of the year is split between two papers claiming to prove full dichotomy for nonuniform CSPs—where dichotomy means that each such problem is either in P or NP-complete. Meanwhile we have devoted numerous posts to Jin-Yi Cai’s work on dichotomy between P and #P-completeness, including recently. So can we get some credit for prediction? or for postdiction? Anyway, we make it a prime prediction for 2018 that there will be notable further progress in this line.
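For a ground-level picture of what a dichotomy looks like, recall Schaefer’s classic Boolean case: 2-SAT is in P while 3-SAT is NP-complete. Here is a minimal, self-contained Python sketch of the “in P” side—2-SAT decided in linear time via strongly connected components of the implication graph. It is illustrative only, not drawn from the dichotomy papers above:

    def two_sat_satisfiable(clauses, num_vars):
        """clauses: pairs of nonzero ints; (1, -2) means (x1 OR NOT x2).
        Decides satisfiability in linear time via Kosaraju's SCC."""
        def idx(lit):  # literal -> node: x -> 2(x-1), NOT x -> 2(x-1)+1
            return 2 * (abs(lit) - 1) + (1 if lit < 0 else 0)

        N = 2 * num_vars
        adj = [[] for _ in range(N)]
        radj = [[] for _ in range(N)]
        for a, b in clauses:  # (a OR b): NOT a -> b and NOT b -> a
            for u, v in ((idx(a) ^ 1, idx(b)), (idx(b) ^ 1, idx(a))):
                adj[u].append(v)
                radj[v].append(u)

        # First pass: record DFS finish order, iteratively.
        visited, order = [False] * N, []
        for s in range(N):
            if visited[s]:
                continue
            visited[s] = True
            stack = [(s, 0)]
            while stack:
                node, i = stack.pop()
                if i < len(adj[node]):
                    stack.append((node, i + 1))
                    w = adj[node][i]
                    if not visited[w]:
                        visited[w] = True
                        stack.append((w, 0))
                else:
                    order.append(node)

        # Second pass: label SCCs on the reversed graph.
        comp, c = [-1] * N, 0
        for s in reversed(order):
            if comp[s] != -1:
                continue
            comp[s] = c
            stack = [s]
            while stack:
                for w in radj[stack.pop()]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1

        # Satisfiable iff no variable shares an SCC with its negation.
        return all(comp[2 * v] != comp[2 * v + 1] for v in range(num_vars))

    # (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2): satisfiable
    print(two_sat_satisfiable([(1, 2), (-1, 2), (1, -2)], 2))  # True
    # (x1) AND (NOT x1), written as repeated-literal clauses
    print(two_sat_satisfiable([(1, 1), (-1, -1)], 1))          # False

A clause {(a \vee b)} contributes the implications {\neg a \Rightarrow b} and {\neg b \Rightarrow a}; the formula is unsatisfiable exactly when some variable shares a strongly connected component with its own negation.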


  4. ‘Quantum Supremacy’ will be claimed and rebutted in the same year.

We specify quantum supremacy to mean building a physical device that achieves a useful algorithmic task that cannot be performed in equal time by classical devices using commensurate resources. The words “useful” and “commensurate” are subjective but the former rules out stating the task as “simulating natural quantum systems” and furthers John Preskill’s emphasis in his original 2012 supremacy paper that the quantum device must be controllable. The latter rules out using whole server farms to match what a refrigerator-sized quantum device can do. The notion involves concrete rather than asymptotic complexity, so we are not positing anything about the hardness of factoring, and intensional tasks like Simon’s Problem don’t count—not to mention our doubts on the fairness of its classical-quantum comparison. Aram Harrow and Ashley Montanaro said more about supremacy requirements in this paper.

Our “postdiction” gets a yes for 2017 from the claims in this Google-led paper that 50-qubit devices would suffice to achieve supremacy and are nearly at hand, versus this IBM-led rebuttal showing that classical computers can emulate a representative set of 50-qubit computations. The notion of emulation allows polling for state information of the quantum circuit computation being emulated, so this is not even confronting the question of solving the task by other means—or proving that classical resources of some concrete size {X} cannot solve all length-{n} cases of the task at all. Recent views on the controversy are expressed in this November 14 column in Nature, which links this October 22 post by Scott Aaronson (see also his paper with Lijie Chen, “Complexity-Theoretic Foundations of Quantum Supremacy Experiments”) and this December 4 paper by Cristian and Elena Calude which evokes the Google-IBM case.
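A back-of-envelope calculation—our arithmetic, not a quote from either paper—shows why 50 qubits became the focal number: brute-force state-vector simulation of {n} qubits must store {2^n} complex amplitudes. A tiny sketch, assuming 16 bytes per amplitude (two 64-bit floats):

    # Memory for a full state vector of n qubits at 16 bytes/amplitude.
    for n in (30, 40, 50):
        gib = (2 ** n) * 16 / 2 ** 30
        print(f"{n} qubits: {gib:,.0f} GiB")
    # 30 qubits:         16 GiB  -- a laptop
    # 40 qubits:     16,384 GiB  -- a large cluster
    # 50 qubits: 16,777,216 GiB  -- 16 PiB, beyond any single machine

This is why the rebuttal’s emulation strategy matters: the point is not storing the full vector but producing the needed information more cleverly.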


  5. Algorithms will increasingly be seen as interactive processes with multiple parties and goals.

That is, the notions of algorithm and protocol will fuse into greater structures with multiple objectives besides solving a task. In 2016 we noted Noam Nisan’s 2012 Gödel Prize-winning paper with Amir Ronen titled “Algorithmic Mechanism Design.” Noam’s 2016 Knuth Prize citation stated, “A mechanism is an algorithm or protocol that is explicitly designed so that rational participants, motivated purely by their self-interest, will achieve the designer’s goals.” In November we covered mechanisms for algorithmic fairness. There is a nicely accessible survey titled “Algorithms versus mechanisms: how to cope with strategic input?” by Rad Niazadeh who works in this area. It alloys techniques from many veins of theory and has a practical gold mine of applications. What we are watching for is the emergence of single powerful new ideas from this pursuit.
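To fix ideas with the smallest textbook instance—our illustration, not one from Niazadeh’s survey—here is a second-price (Vickrey) auction, the classic mechanism in which bidding one’s true value is a dominant strategy:

    # Second-price (Vickrey) auction: the winner pays the runner-up's
    # bid, so no bidder can profit by misreporting their true value.
    def vickrey_auction(bids):
        """bids: dict of bidder -> bid (needs at least two bidders)."""
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]  # second-highest bid sets the price
        return winner, price

    print(vickrey_auction({"alice": 10, "bob": 7, "carol": 4}))
    # ('alice', 7): shading a bid can only lose the item, never
    # lower the price paid -- hence truthfulness.

The algorithm is trivial; the point is that its payment rule aligns self-interest with the designer’s goal, which is exactly the fusion of algorithm and incentive the item above is about.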


  6. Mathematical logic and descriptive complexity will play a larger role in advances.

We see some sign of this in the dichotomy-for-CSPs result, but we have thoughts that we will talk more about in a later post.

Open Problems

What concepts {C} do you think will have the highest {M(C)} in 2018?



13 Comments
  1. Craig permalink
    January 3, 2018 9:07 pm

    Prediction: 2018 will be the year in which mainstream scientists will start to seriously question whether large-scale quantum computing is even possible, given that Google and IBM will fail to produce the working 50 qubit universal quantum computer that they have tried to create.

  2. January 4, 2018 12:51 am

    thx Clay for your scientific philanthropy and semi-advertent “funding” of one of modern science’s greatest problems, namely P vs NP, RIP. who would have guessed it’d be unclaimed after nearly 2 decades? coming up “shortly” on its ½-century mark. 😮 o_O 😐

  3. Javaid Aslam permalink
    January 6, 2018 7:03 pm

    Any guesses as to why NP != P proof papers get 2 orders of magnitude more attention than those proving NP==P?
    After Deolalikar’s, this is the second paper which has gotten so much attention.

    • Javaid Aslam permalink
      January 6, 2018 7:25 pm

      I meant Norbert Blum’s paper.

    • o0o permalink
      January 9, 2018 9:21 am

      Because if the contention is that P=NP the author can reasonably be asked to provide an implementation of their algorithm. Nothing shows up flaws in an algorithmic idea like trying to implement it!

      • javaid aslam permalink
        January 9, 2018 3:25 pm

        So the assumption is that every polynomial-time algorithm will have a feasible implementation, even if the exponent is 100?
        And would an implementation without a demonstration at a meaningful size have any meaning?
        I wonder if anyone has tried to estimate the run-time even for O(n^20)-type time complexity with n as small as 100, on the fastest available CPU.

      • Mykl Clason permalink
        January 18, 2018 5:51 pm

        @javaid aslam

        Keep in mind that for n = 100, n^20 > 2^n.

        I believe the hope is to get it down to less than n^10. Much more than that, and there’s not much practical difference between polynomial and exponential.

  4. January 21, 2018 7:54 am

    Finally, I found an elegant solution of the P vs NP problem
    (based on Cook-Hartmann theory):

    http://vixra.org/abs/1801.0100

