
The End Of Human Science?

April 1, 2017


Could we go the way of telegraph operators? (Pixabay source)

Lofa Polir has sent us some new information that will have widespread ramifications for math and theory and science in general.

Today Ken and I wish to comment on this information.

Polir is sure that this information is correct. If he is right, the consequences for all of us will be immense.

The Cover Story

His information is based on recent work of Sebastian Thrun, one of the world's experts in machine learning. This week's New Yorker features an article presenting Thrun's work toward replacing the doctors who diagnose skin diseases. The article describes him thus:

Thrun, who grew up in Germany, is lean, with a shaved head and an air of comic exuberance; he looks like some fantastical fusion of Michel Foucault and Mr. Bean. Formerly a professor at Stanford, where he directed the Artificial Intelligence Lab, Thrun had gone off to start Google X, directing work on self-learning robots and driverless cars.

Thrun’s work is really interesting, and he has stated that medical schools should stop teaching doctors to read X-rays and other images, since robotic systems will soon be better at this. His system for skin images already beats expert doctors at detecting abnormal growths.

But this project along with his others is a smokescreen for his most important project, claims Polir. Thrun has put together a double-secret project that has been running for over five years. The project’s goal is: the automation of math and other sciences. Thrun predicts—well, let’s take a look at what he is doing first.

The Real Story

Thrun's project is to use machine learning methods to build a system that can outperform us in doing science of all kinds. It requires huge amounts of data, and he has access to that via the web. The strategy is an exact parallel of how Google DeepMind's AlphaGo program was trained. Quoting our friends on Wikipedia regarding the latter:

The system’s neural networks were initially bootstrapped from human gameplay expertise. AlphaGo was initially trained to mimic human play by attempting to match the moves of expert players from recorded historical games, using a database of around 30 million moves. Once it had reached a certain degree of proficiency, it was trained further by being set to play large numbers of games against other instances of itself, using reinforcement learning to improve its play.
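
To fix ideas, here is a toy sketch of that two-phase recipe, supervised bootstrapping followed by self-play reinforcement, written by us in Python. Everything in it (the tabular policy, the reward scheme, the `play_game` hook) is our illustration, not DeepMind's code:

```python
import random
from collections import defaultdict

class TabularPolicy:
    """A toy move-preference table standing in for AlphaGo's neural networks."""
    def __init__(self):
        self.scores = defaultdict(float)   # (state, move) -> learned preference

    def choose(self, state, legal_moves):
        # Pick the highest-scoring move, breaking ties at random.
        return max(legal_moves,
                   key=lambda m: (self.scores[(state, m)], random.random()))

    def reinforce(self, state, move, reward, lr=0.1):
        self.scores[(state, move)] += lr * reward

def train(policy, expert_pairs, play_game, num_selfplay):
    # Phase 1: supervised bootstrapping -- mimic recorded expert moves.
    for state, move in expert_pairs:
        policy.reinforce(state, move, reward=1.0)
    # Phase 2: self-play -- reinforce the winner's moves, penalize the loser's.
    # `play_game` must return ([(player, state, move), ...], winner).
    for _ in range(num_selfplay):
        trajectory, winner = play_game(policy, policy)
        for player, state, move in trajectory:
            policy.reinforce(state, move, 1.0 if player == winner else -1.0)
    return policy
```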

In place of reading and digesting master games of Go, Thrun’s system reads and digests scientific papers. The ability to have his algorithm “read” all papers in science is the secret:

Thrun points out that mathematicians in their lifetime may read and understand thousands of papers, but his system is capable of understanding millions of papers.

This ability is one of the reasons his algorithm will outperform us. Another is that it can use immense computational power 24/7; it never needs to sleep or rest. Polir claims that Google has made an entire secret data center of over a billion CPU cores available to this project. Under a closed agreement with the University of Wisconsin, the center is housed in the new IceCube neutrino observatory. Polir justifies revealing this on the grounds that it should be obvious: would they really hew out a cubic kilometer of ice in Antarctica just to observe neutrinos, and ignore the cooling-cost benefits of placing a huge processing center in the cavity?

How it Works

Old-time theorem provers used lots of axioms and proof rules. That kind of approach can only go yea-far. Homotopy type theory, which tries out a more topological approach, gave Thrun part of the inspiration that he could find a better way. Another part was Roger Penrose's argument that humans are less blocked by Kurt Gödel's Incompleteness Theorems than logic-based systems are. So Thrun was spurred to start by making his machine learn from humans, much like AlphaGo.
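
To see concretely what "axioms and proof rules" means, here is a minimal sketch of our own (nothing to do with Thrun's system) of the old style: forward chaining with modus ponens over propositional Horn clauses.

```python
# A minimal "old-time" prover: forward chaining with modus ponens.
# Rules are (premises, conclusion) pairs over propositional atoms.

def forward_chain(axioms, rules, goal):
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)   # one application of modus ponens
                changed = True
    return goal in known

# Example: from A, A -> B, and B -> C, derive C.
print(forward_chain({"A"}, [({"A"}, "B"), ({"B"}, "C")], "C"))  # True
```

Every derived fact must be ground out step by step and nothing is learned, which is exactly the limitation described next.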

In the New Yorker article—with extra information gleaned by Polir—Thrun describes the situation this way:

“Imagine an old-fashioned program to identify a dog,” he said. “A software engineer would write a thousand if-then-else statements: if it has ears, and a snout, and has hair, and is not a rat . . . and so forth, ad infinitum. But that’s not how a child learns to identify a dog, of course.” Logic-based proof systems work the same way, but that’s not really how we go about identifying a proof. Who checks modus ponens on every line? “The machine-learning algorithm, like the child, pulls information from a training set that has been classified. Here’s a dog, and here’s not a dog. It then extracts features from one set versus another.” Or like a grad student it learns: here’s a proof, and here’s not a proof. And, by testing itself against hundreds and thousands of theorems and proofs, it begins to create its own way to recognize a proof—again, the way a grad student does. It just knows how to do it.
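
Here is a toy version of that "here's a proof, and here's not a proof" training loop, using scikit-learn as a stand-in. The snippets, labels, and bag-of-words features are our invention; a real system would need far richer representations:

```python
# Toy "proof vs. not-proof" classifier in the spirit of Thrun's dog example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    "assume n is even, so n = 2k, hence n^2 = 4k^2 is divisible by 4, qed",
    "by induction on n: the base case holds, and the inductive step follows",
    "buy now, limited offer, click here for free prizes",
    "the weather today is sunny with light winds in the afternoon",
]
labels = [1, 1, 0, 0]   # 1 = reads like a proof, 0 = does not

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(snippets, labels)
print(clf.predict(["therefore by induction the claim holds for all n, qed"]))
```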

Polir confirmed that Thrun's machine first runs the papers through the kind of "Lint"-like module we posted about. This is not only a data-cleaning step but also primes the reinforcement-learning module on the mathematical and scientific content.

Then comes a Monte Carlo phase in which the system randomly generates alternative proofs of lemmas in the papers and scores the proofs for economy and clarity. This completes the automated paper-rewriting level of their service, which is under negotiation with Springer-Verlag, Elsevier, and other academic publishers for deals that may assure steady funding of the larger project. Finally, the results of these runs are fed into the deep-learning stack, which infers the kinds of moves that are most likely to lead to correct proofs and profitable discoveries.
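
On our reading of Polir's description, the Monte Carlo phase could look roughly like the sketch below; the scoring weights and the `checks`/`perturb` hooks are invented for illustration:

```python
def score(proof_steps):
    # Economy: fewer steps is better. Clarity: shorter steps are better.
    economy = -len(proof_steps)
    clarity = -sum(len(s) for s in proof_steps) / max(1, len(proof_steps))
    return economy + 0.01 * clarity

def monte_carlo_rewrite(proof_steps, checks, perturb, tries=1000):
    # `checks(candidate)` verifies a candidate proof; `perturb(proof)`
    # proposes a random local rewrite of one.
    best = list(proof_steps)
    for _ in range(tries):
        candidate = perturb(best)
        if checks(candidate) and score(candidate) > score(best):
            best = candidate
    return best
```

The surviving (proof, score) pairs would then be the training data for the deep-learning stack.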

One of the predictions Thrun makes is that, as with doctors, we may soon need to stop training students to get PhDs in math. He goes on to raise the idea that the machine will make such basic discoveries that it will win Nobel Prizes in the future.

Some Results

The results of Thrun’s project are so far secret, and it is likely that he will deny that it is happening right now. But Polir found out one example of what has been accomplished already.

The Standard Model of particle physics uses quite a few elementary particles. See this for a discussion.

These 31 elementary particles are the most fundamental constituents of the universe. They are not, as far as we know, made up of other particles. The proton, for example, is not an elementary particle, because it is made up of three quarks, whereas the electron is an elementary particle, because it seems to have no internal structure.

Although the Standard Model has worked impeccably in practice, it has higher complexity than physicists would expect from a bedrock theory of nature. The complexity comes from the large number of particles and the large number of constants that the model cannot predict.

A cluster of Thrun’s dedicated machines has already found a new model that reduces the number of elementary particles from 31 to 7. The code name for the cluster and its model, in homage to AlphaGo, is AlphaO. The AlphaO model is claimed to still make all the same predictions as the standard one, but the reduction in undetermined constants could be immensely important.

Open Problems

Is Polir fooling? He may be and not be at the same time. If you had told us a year-plus ago that AlphaGo would wipe out the world's best Go players 60-0 in online semi-rapid games, we would have cried fool. The AlphaGo project is an example of a machine coming from nowhere to become the best at a game that people thought was beyond the ability of machines. Could the same soon happen with AlphaO? We will see.

13 Comments
  1. April 1, 2017 2:08 am

    Here is a relatively trivial challenge: play perfect nim. Humans have known how to do this for quite some time, and if a computer system is going to have any chance of competing with humans at nim, it will have to play perfectly too.
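
    For reference, perfect play in nim is classical (Bouton's theorem): a position is lost exactly when the XOR of the pile sizes, the nim-sum, is zero. A minimal sketch:

    ```python
    from functools import reduce
    from operator import xor

    def perfect_nim_move(piles):
        """Return (pile_index, new_size) for a winning move, or None if lost."""
        s = reduce(xor, piles, 0)          # the nim-sum
        if s == 0:
            return None                    # every move loses against perfect play
        for i, p in enumerate(piles):
            if (p ^ s) < p:                # shrinking pile i to p ^ s zeroes the nim-sum
                return i, p ^ s

    print(perfect_nim_move([3, 4, 5]))     # -> (0, 1): shrink the pile of 3 to 1
    ```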

  2. April 1, 2017 3:17 am

    Isn’t one of the tags too revealing?

  3. Guille permalink
    April 1, 2017 3:18 am

    You cruel, cruel people… Half asleep and going to work, with my guard down, you totally got me until the last paragraph… Anyway, it doesn’t seem so farfetched, my good friend Pilar Lofo told me similar things and she’s always been right!

  4. Abdullah Naeem Malik permalink
    April 1, 2017 3:27 am

    Why should this be a secret? It isn’t entirely impossible.

  5. Geoffrey Irving permalink
    April 1, 2017 10:07 am

    https://arxiv.org/abs/1701.06972

  6. April 1, 2017 1:00 pm

    wow, missed the joke until saw a comment about “cruel, cruel people”. was about to write a very supportive comment but am very happy averted any egg on my face. but cant resist, will dive in anyway. this is a case of a lot of truth in fiction. it does look like computers will be able to make scientific advances esp wrt AI/ machine learning approaches, but these are early days in this research. think it will blossom spectacularly in the decades ahead. lots more analysis in my blog esp wrt auto thm proving, have been pursuing this theme for years and it does feel lonely at times, but far less so these days. one can laugh today but think tomorrow one may be slackjawed at where this field goes & what it accomplishes. you mentioned funding by journals in the article. amusing, but funding is indeed one of the key current aspects/ criteria for advancement. have collected a bunch of recent articles/ papers/ success stories of use of machine learning in physics alone… as usual you seize on the zeitgeist in an uncanny way but you dont realize how close to truth/ reality this will be someday…

  7. April 2, 2017 9:48 am

    double-secret projects are typically much more interesting than regular projects and secret projects.

  8. mamamia permalink
    April 3, 2017 2:28 pm

    If you had an integer factoring algorithm that runs in NC^1 for a $\frac{1}{1+\frac{1}{\sqrt{N}}}$ fraction of all inputs of $\log N$ bits, but in subexponential complexity for all inputs, where does that put integer factorization?

  9. April 5, 2017 4:43 pm

    Doron Zeilberger’s latest essay speaks to these same issues.

    Opinion 158: The American Mathematical Society Should not Force People to Stop using The Historical Names of Mathematical Theorems …

    The American Mathematical Society has proposed … effective Jan. 2019, to prohibit the use of the traditional names of the numerous programs (e.g. Erlangen, Langlands, …), theories (e.g. Galois, Iwasawa, and dozens of others), theorems (e.g. Pythagoras, Lagrange, and thousands of others), polynomials (e.g. Legendre, Jacobi, and hundreds of other), conjectures (e.g. Riemann and many others), inequalities (e.g. Cauchy-Schwartz, Holder, and dozens of others), groups (e.g. Conway, O’ Nan, Lyons, Sims, Thompson, …), algorithms (e.g. Dijkstra, Buchberger, Robinson-Schenstead-Knuth, and thousands of others) etc.

    Few people read papers any more, and eventually, only computers will write and read papers, and for them using numbered-theorems would be preferred, so let’s postpone this reform [of number-naming all mathematical objects] for another fifty years, when humans will no longer care how unreadable a mathematics paper is, since no human will be stupid enough to read them.

    Ted Chiang’s Nature article “Catching crumbs from the table” (2000) makes the same point.

    It is reassuring that the consensus of the 2015 Breakthrough Institute Math Panel — comprising Sir Simon Donaldson, Maxim Kontsevich, Jacob Lurie, Terence Tao, and Richard Taylor — is that the day that Zeilberger foresees, namely the day when “no human will be stupid enough to read the mathematical literature” (because no human is smart enough), is at least twenty-five years in the future. That’s a relief!

    • April 6, 2017 5:04 am

      As a follow-up, the primary agents of benign machine fascism will be none other than Vladimir Voevodsky's seemingly innocuous "proof assistants".

      As evidence, see for example Voevodsky's interview "La bifurcation de Vladimir Voevodsky: De la théorie de l'homotopie à la théorie des types" ("Vladimir Voevodsky's bifurcation: from homotopy theory to type theory," 2014) and his Heidelberg Laureate Forum lecture "Unimath" (2016).

      Children are not spared; indeed they are a primary machine-takeover target. See (e.g.) Voevodsky's "How I became interested in foundations of mathematics" (lecture at the 9th Asian Science Camp in Pathumthani, Thailand, 2015).

      It's incredibly obvious, isn't it? All too soon, ever-more-capable proof assistants will irresistibly and irretrievably ratchet down humanity's ability to appreciate high-level proof methods. Inevitably the day will arrive when we humans no longer understand (in any human way) further advances in high-level mathematical theorems either.

      That the machines will masquerade, first as our assistants, then as our teachers — and in the end, as our therapists? — only increases the danger.

      One and only one solution can be envisioned:

      Resolved: The International Mathematical Union must immediately and permanently forbid the use of computers for any professional mathematical purpose.

      Uhhh … except mathematical typography (obviously); a return to IBM Selectric “type balls” being entirely unacceptable.

      Concomitantly, Ken Regan must extend his chess-cheat detection algorithms to detect and flag arXiv preprints that show indications of surreptitious computer use.

      Ken, the human future of mathematics depends on your efforts!

  10. April 9, 2017 8:31 am

    I would argue that the human brain has a large non-deterministic component (probably not evolved to be perfect at solving large NP-complete problems in one head), especially in doing science. If this component is not supported in education, not only will human science end, but civilization will go with it... In logical constructions computers are superior, and either we will be expelled once computers can use actuators, or we will remain a useful counterpart of the logical system, provided we can improve the non-deterministic part of the brain, until we discover a polynomial algorithm for an NP-complete problem.

    To save human civilization, I will accept ten million dollars to stop working on how the human brain works; that will slow down the practical discovery of NP=P.

    BTW, in quantum computers everybody assumes that a superposition can exist without energy support. This is either a profanation or something fundamental about the world. In the first case, an exponential number of states (superpositions, represented as a wave with large uniform RMS) would require exponential energy, or exponential physical space to keep the energy density finite. It is worth developing a large-scale quantum computer to test the second case. This could be tested by performing a computation with a limited number of photons, where the number of photons is smaller than the number of states required for the computation.

  11. April 12, 2017 3:50 am

    Some of us are looking seriously into this problem 🙂
    You can read about my work at http://www.mathematicsunknown.com/

Trackbacks

  1. Google Deepmind strikes again: AlphaGo 3-0 over worlds best player Ke Jie | Turing Machine
