“Cantor” Video Quick Survey
A little help wanted
Julian Fellowes is the creator and chief writer of the hit British ITV series “Downton Abbey.” Actually, he is The Right Honourable Lord Fellowes of West Stafford, and also holds a military commission as a Deputy Lieutenant. Thus he certainly knows whereof he writes. His knowledge even extends to the likes of shocking events that happen in the series. Really shocking. Then again, the shows are not exactly intended as course videos in early-1900s British architecture.
Today we present a short survey on our ideas for course videos, well, not-course-videos, such as the one on Cantor’s Theorem which we released last week. Read more…
Making Primes More Random
Mathematical tricks
Ben Green grew up about 100 meters from the house in Bristol where Paul Dirac was born and lived until age 10. He has followed Dirac to Cambridge University as a Fellow of Trinity College, which adjoins St. John’s College where Dirac studied and taught. Both also held appointments at Princeton. Green is known for many things, including proving with Terence Tao that for every natural number $k$ there exist $k$-many primes in arithmetic progression. That is, there is a prime $p$ and an integer $d$ such that for all $0 \le i < k$, the number $p + id$ is also prime.
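The statement is easy to check by brute force for small $k$. Here is a quick sketch (the helper names are ours, not from any library) that finds the first such progression below a bound:

```python
def primes_up_to(n):
    """All primes <= n via the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return {i for i, flag in enumerate(sieve) if flag}

def prime_progression(k, limit=1000):
    """Smallest (p, d) with p, p+d, ..., p+(k-1)d all prime, or None."""
    primes = primes_up_to(limit)
    for p in sorted(primes):
        for d in range(1, (limit - p) // (k - 1) + 1):
            if all(p + i * d in primes for i in range(k)):
                return p, d
    return None

print(prime_progression(3))   # (3, 2): the progression 3, 5, 7
print(prime_progression(5))   # (5, 6): the progression 5, 11, 17, 23, 29
```

Of course the Green-Tao theorem guarantees such progressions for every $k$, which no amount of brute force can do.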
Today I want to talk about mathematical tricks—not magic, not games, but small insights that are very useful. Tricks.
There are two kinds of tricks. One is a device for understanding a theorem, the other is a shortcut to getting it in the first place. The Dirac belt trick is the former kind: it illustrates physical systems that are preserved under two turns of a circle—a 720-degree rotation—but not under one turn. The Dirac delta function is the latter kind: it allows you to postulate a function $\delta(x)$ that is zero except at one point, such that $\int_{-\infty}^{\infty} \delta(x)\,dx = 1$, and also whose Fourier transform is the constant-1 function. This can speed up calculations, and certain “meta-theorems” often remove the need to do anything more to make a proof involving such calculations rigorous.
Green’s co-author Tao maintains tricks as a category on his blog. Both of them assisted Tim Gowers in setting up the Tricki Wiki for collecting and discussing mathematical tricks, along with Alex Frolkin and Olof Sisask. In a followup, Gowers asked whether it needed more examples of research-level tricks compared to undergraduate-level tricks. Let’s talk a little more about what tricks are.
Tricks
I really like “tricks.” One recurrent theme in mathematics is that big results are easy to find, usually are well known—especially to experts—and beginners can find them in almost any textbook. This is true whether the area is graph theory, number theory, or complexity theory; real analysis, complex analysis, or functional analysis; algebraic geometry, differential geometry, or topology. How did the last manage to avoid being called a “theory” or an “analysis”?
A trick is different from an ordinary “tool” in that it has some element of surprise. It achieves something in a way that is at first counter-intuitive. That’s what makes it fun. Magic tricks without surprise, without the watcher’s expectation being deceived, are nothing.
The trouble with tricks is that they often have no name, or an un-descriptive name, as with the “W-trick” below, and consequently are hard to find. Tricks often are not written down, or if they are, they are hard to locate even if you have a general idea of what to search for. A trick may be buried deep in a long proof, causing a casual reader to miss it. Even if you do see one, you may not internalize it and add it to your own bag. Another way that tricks are learned is through oral conversations, lectures, and technical talks. When I teach a class I try to highlight the tricks—at least I try.
The Tricki Wiki was an attempt to make tricks, mathematical tricks, better documented. I thought it was, and still is, a brilliant idea; yet it seems to not have become as popular as it should have. Maybe in time it will.
For now let me explain one trick that is simple, useful, and powerful: the W-trick. Then Ken will ask whether research-level power can be obtained by extending an old and simple trick of arithmetic.
The W-Trick
The trouble with the prime numbers is that while they embody considerable randomness, they are biased in many obvious ways. For one they are all odd, except the poor number $2$. Even among the odd numbers the bias is progressive: only one prime is divisible by $3$, only one by $5$, and so on. This messes up lots of simple arguments, and makes some proofs about primes extra complicated. A second trouble is that their density is close-but-not-quite constant: the number of primes below $n$ goes as $n/\ln n$.
Enter the W-trick. Consider the mapping $n \mapsto Wn + b$, where $W$ is the product of all the primes less than some threshold $w$, and $b$ is relatively prime to $W$. Often $b = 1$ is just fine.
The numbers $n$ that are mapped to primes are called “W-tricked primes.” They behave more like “pseudorandom numbers” than the primes themselves do. W-tricked primes are well distributed modulo $m$ provided $w$ is greater than or equal to all the prime factors of $m$.
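Here is a small sketch of the effect, using our own toy choice of $W = 2\cdot3\cdot5\cdot7$ and $b = 1$. Ordinary primes above 5 are trapped in two residue classes mod 6, while the W-tricked primes occupy all six:

```python
def is_prime(n):
    """Trial division; fine for the small numbers used here."""
    if n < 2:
        return False
    q = 2
    while q * q <= n:
        if n % q == 0:
            return False
        q += 1
    return True

W = 2 * 3 * 5 * 7      # product of the primes below our threshold w = 10
tricked = [n for n in range(1, 5000) if is_prime(W * n + 1)]
ordinary = [p for p in range(7, 5000) if is_prime(p)]

print(sorted(set(p % 6 for p in ordinary)))   # [1, 5]: primes > 5 avoid four classes
print(sorted(set(n % 6 for n in tricked)))    # [0, 1, 2, 3, 4, 5]: no bias mod 6
```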
The key to using W-tricked primes in the Green-Tao theorem on arithmetic progressions is that such progressions are preserved by this linear map: if the W-tricked primes have a progression of length $k$, for example, then so do the real primes.
Recently I have used this same idea to show that W-tricked numbers are more likely to be relatively prime, that is, to have no common factor. If $x$ and $y$ are chosen randomly in a long interval, then it is well known that they will be relatively prime about $6/\pi^2$ of the time, about $61\%$. But W-tricked numbers will be relatively prime almost all the time: as $w$ goes to infinity the probability of a common factor goes to zero.
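A quick Monte Carlo experiment (our own sketch, with an arbitrary threshold) illustrates the gap between the two rates. Any common factor of $Wx+1$ and $Wy+1$ must be coprime to $W$, hence larger than every prime below the threshold, which is what drives the probability down:

```python
import random
from math import gcd, pi

random.seed(0)
W = 2 * 3 * 5 * 7 * 11 * 13   # product of the primes below w = 14 (our choice)
N = 100_000

def coprime_rate(pairs):
    """Fraction of the given pairs that are relatively prime."""
    return sum(gcd(x, y) == 1 for x, y in pairs) / len(pairs)

plain = coprime_rate([(random.randrange(1, 10**6), random.randrange(1, 10**6))
                      for _ in range(N)])
tricked = coprime_rate([(W * random.randrange(1, 10**6) + 1,
                         W * random.randrange(1, 10**6) + 1)
                        for _ in range(N)])

print(round(6 / pi**2, 3))   # 0.608, the classical rate
print(round(plain, 3))       # close to 0.608
print(round(tricked, 3))     # close to 1
```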
Duality
Duality is an umbrella term for a host of tricks. For example, the proof of the Four-Color Map Theorem begins by forming the dual graph in which every “country” becomes a vertex, and two vertices are connected if their countries share a boundary. Repeating the process would restore the original map, which makes the graph strictly a dual. In linear programming and other optimization techniques the presence of dual programs is often important. The Fourier transform gives a dual form to any function, and the Hadamard transform provides an important dual basis in quantum computation. Change-of-basis is another umbrella term for many tricks in linear algebra—or maybe many of them are just tools.
A kind of duality may operate with the Green-Tao theorem. Instead of the set of primes being “tricked up” to be denser, one can also “trick down” the integers into a suitably pseudorandom subset $S$. Whereas the primes have density approaching zero in $\mathbb{N}$, their density in $S$ may stay above some constant. The trick is set up so that one can carry over Szemerédi’s Theorem, that every subset of $\mathbb{N}$ of positive density has arbitrarily long arithmetic progressions, from $\mathbb{N}$ to $S$. This view was developed in a paper by Omer Reingold, Luca Trevisan, Madhur Tulsiani, and Salil Vadhan, people we know well in complexity theory. Gowers proved something similar independently.
Often the dual is “the same” as the original but just viewed in a mirror. This may be the case here, since the transference is actually proved by enlarging the set $P$ of primes to a set $S$ in which $P$ retains positive density. The W-trick is a concrete instance of the latter. Still, these papers show that the dual view of the Green-Tao “transference principle” has its own uses, all derived from tools of pseudorandomness developed in complexity theory. Cool and worth deeper study.
The Square Trick
We have previously discussed an old trick known to the Babylonians for reducing any integer multiplication to the lookup of three squares, or alternatively two squares:

$$xy = \frac{(x+y)^2 - x^2 - y^2}{2} = \frac{(x+y)^2 - (x-y)^2}{4}.$$
Ken uses this as a first example of a Turing reduction that needs more than one oracle call. In the post I stopped at consulting a table of large enough squares, but Ken wants to go further with simulating the squaring part. The hitch is in the denominators of the fractions—if we could pretend they don’t exist, we’d be home.
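In code the reduction looks like this, with a precomputed table playing the role of the squaring oracle (a sketch with our own names):

```python
SQUARES = {n: n * n for n in range(2001)}   # the table of squares (the "oracle")

def multiply2(x, y):
    """xy via two oracle lookups: xy = ((x+y)^2 - (x-y)^2) / 4."""
    return (SQUARES[x + y] - SQUARES[abs(x - y)]) // 4

def multiply3(x, y):
    """xy via three oracle lookups: xy = ((x+y)^2 - x^2 - y^2) / 2."""
    return (SQUARES[x + y] - SQUARES[x] - SQUARES[y]) // 2

print(multiply2(37, 43))   # 1591
print(multiply3(37, 43))   # 1591
```

Note that the two-lookup version needs subtraction and an absolute value as well as the division; the oracle calls cannot be collapsed into one, which is why it makes a good first example of a genuinely adaptive-looking Turing reduction.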
The reason is another trick that allows replacing squaring by shifting. Regard the finite field $\mathbb{F}_{2^n}$ as a vector space of dimension $n$ over $\mathbb{F}_2$. A vector $v$ in this space is normal if its iterated squares $v, v^2, v^4, \ldots, v^{2^{n-1}}$ are all linearly independent, and hence form a basis of the vector space. Then for any field element $u$ represented over this basis, the left-shift (wrapping the first bit around to be the last) represents $u^2$. This trick is exploited to implement codes that do frequent squaring.
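Here is a concrete check of the trick in the small field $\mathbb{F}_{2^3} = \mathbb{F}_2[a]/(a^3+a+1)$, where $b = a+1$ happens to be a normal element. The arithmetic is our own toy implementation, not a library API; the assertion verifies that squaring acts as a cyclic shift of the normal-basis coordinates:

```python
MOD = 0b1011   # the polynomial a^3 + a + 1 over GF(2)

def gf_mul(x, y):
    """Carry-less multiplication of 3-bit field elements modulo a^3 + a + 1."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x & 0b1000:
            x ^= MOD
    return r

b = 0b011                 # b = a + 1, a normal element of this field
b2 = gf_mul(b, b)         # b^2
b4 = gf_mul(b2, b2)       # b^4
basis = [b, b2, b4]       # the normal basis {b, b^2, b^4}

def coords(u):
    """Coordinates of u in the normal basis, by brute force over GF(2)^3."""
    for c in range(8):
        v = 0
        for i in range(3):
            if (c >> i) & 1:
                v ^= basis[i]
        if v == u:
            return [(c >> i) & 1 for i in range(3)]

for u in range(1, 8):
    c = coords(u)
    # squaring sends c0*b + c1*b^2 + c2*b^4 to c2*b + c0*b^2 + c1*b^4,
    # because b^8 = b; that is a cyclic shift of the coordinate vector
    assert coords(gf_mul(u, u)) == [c[2], c[0], c[1]]
print("squaring = cyclic shift of normal-basis coordinates")
```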
Alas this only works in characteristic 2, so that the final dividing by 2 or 4 in the formulas for $xy$ is like dividing by zero. Over $\mathbb{F}_{p^n}$ the corresponding concept involves $p$-th powers of $v$, not squares. The vanishing of multiples of the characteristic is needed anyway so that expressions of the form $(a+b)^p$ leave only the terms in $a^p + b^p$, which is what gives the shifting behavior for $u \mapsto u^p$ to begin with. Unfortunately, the formula above for $xy$ becomes undefined in characteristic 2. Ken still wonders, though, whether there is some consistent way to postulate it, so that—as with the Dirac delta function—it can be used in computations.
Open Problems
The general question is: how do we document and pass on tricks to others? This is an important issue, especially for researchers who are new to an area of mathematics.
A specific question that I think arises is: can we create a “non-linear” W-trick? The idea would be to build a nonlinear mapping that allows us to map primes to another set and use the existence of progressions there to conclude something useful about primes themselves. Is there any way to create a W-tricked set of primes that allows progress on open questions on the structure of primes? I am sure that Green and Tao, among many others, have thought about this. But perhaps you might have a new idea here?
Cantor’s Theorem: The Movie
A short film about the famous diagonal method of Cantor
Ed Talavera is the chair of the Department of Cinema and Interactive Communication in the University of Miami School of Communication. He is an award-winning cinematographer and director of narrative and documentary feature films. His work has aired on HBO, Showtime, Cinemax, and theaters worldwide. He was the cinematographer for a wonderful film about food that aired on DirecTV, “Mistura.” The film is about how a food festival unites the diverse nation of Peru, but what we notice in the trailer is the food. Gorgeous food.
Today Ken and I wish to announce a new film directed by Ed on Cantor’s Theorem. Read more…
The Crown Game Affair
What constitutes evidence of cheating?
Faye Dunaway is an Academy Award-winning actress who co-starred with the late Steve McQueen in the 1968 movie “The Thomas Crown Affair.” She plays a freelance insurance fraud investigator, Vicki Anderson, who believes that millionaire playboy Thomas Crown is guilty of instigating a $2.6 million bank heist, but falls in love with him anyway. The most famous scene in the movie shows her both defeating and seducing Crown in a game of chess.
Today I write about the difficulty of detecting fraud at chess, and the role of statistical evidence.
Back To The Past
The role of social interaction in mathematics
Charlotte Simmons started her career as a pure mathematician, later moving into the history of mathematics. She is currently an administrator at her institution, but still finds time to do research and write wonderful pieces. Perhaps a “pure” mathematician can do research and also write about the history of the subject. I only included the statement that she started as a “pure” mathematician because it was in her own bio.
Today I want to talk about an article that she wrote entitled “Augustus De Morgan behind the Scenes.”
Predictions and Principles
With a year end thank-you to Ken
Ken Regan is the co-editor of GLL and is one of the internationally-ranked chess players in the US. He has won the city championship of Buffalo every time he has played. He is also a great friend and great researcher. His specialty is of course computational complexity.
Today I want to thank Ken personally for joining the GLL team and making this year one of our best.
Read more…
The Year That Was
That was, that almost wasn’t?
Thomas Lehrer is a mathematician and co-author of the paper, “The Distribution of the Number of Locally Maximal Elements in a Random Sample.” It was published in 1957 in the journal Annals of Mathematical Statistics, and had the distinction of being reviewed in French. The paper after it in the journal was written by John Hammersley, Ken’s Doktorgroßvater. Another paper co-authored by Lehrer was reviewed by Richard Hamming, of Hamming distance fame.
Today we do a brief review of results and ideas from 2012. Read more…
Scientific Gifts
And a new place to look for them?
George Dyson is the author of Turing’s Cathedral. This book was newly released when we saw copies at Princeton’s celebration of the Alan Turing Centennial last May. It covers John von Neumann and his wife Klára von Neumann even more than Alan Turing in a multifaceted canvas of the emerging digital age. He also wrote Darwin Among the Machines: The Evolution of Global Intelligence. His works show both the lyricism and the futurism of his renowned physicist father, Freeman Dyson.
Today we discuss evolution and technology as we close the Turing centennial year, and consider how Nature provides scientific gifts.
What Would Be Left, If…?
Apocalypses in math and theory, plus a complexity question
Douglas Adams wrote the watchword for today:
Don’t Panic.
Still, his novel and series The Hitchhiker’s Guide to the Galaxy begins with the impending destruction of the Earth, which goes ahead 5 minutes too soon. The remainder is post-apocalyptic. It is also pre-apocalyptic, because there are multiple Earths that face destruction at different times. At least having multiple days of prophesied doom is something we’ve recently been dealing with.
Today—the last time we can use this word?—we wish to cover real apocalypses in mathematical and scientific theories.
We have already blogged about what would happen to complexity theory if $\mathsf{P} = \mathsf{NP}$ were true and proved. As we said, “Most of the 573 pages of [the] Arora-Barak [textbook] would be gone…” Well, this is still hypothetical. Now we will look at cases in the past where whole theories were blown up by a surprise result.
This is different from theories going out of fashion and dying out, even if it was from internal causes. Likewise we don’t consider the fadeouts of past civilizations to be catastrophes, only ones destroyed by things like volcanoes. Ironically, the branch of mathematics called catastrophe theory itself is said to be one of the fadeouts. As mathematical historian David Aubin wrote in “Chapter III: Catastrophes” of his 1998 Princeton PhD thesis:
Catastrophe Theory is dead. Today very, very few scientists identify themselves as ‘catastrophists’; the theory has no institutional basis, department, institute, or journal totally or even partly devoted to it. But do mathematics die?
He goes on to cite an article by Charles Fisher that proclaimed the death of Invariant Theory. To be sure, theories like that sometimes get revived. But first a word about the Mayans and ultimate catastrophe.
Baktun The Future
All the fuss is about today’s ticking over of a Mayan unit of time called a baktun, or more properly b’ak’tun. It’s not even a once-in-5,000-years event like everyone says, but rather once-in-144,000 days, making just over 394 years. The point is there have been 13 of them since the inception of the Mayan creation date according to their “Long Count” calendar, making about 5,125 years in all. So the 14th b’ak’tun starts today—big whoop. The buzz comes from many Mayan inscriptions seeming to max out at 13, but others go as far as 19 and it is known that they counted by 20. Hence the real epoch will be when the 20th and final baktun ticks over to initiate the next piktun. That will be on October 13, 4772. If human civilization lasts that long, that is.
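The calendar arithmetic is easy to verify (plain arithmetic, no Mayan expertise assumed):

```python
BAKTUN_DAYS = 20 * 20 * 360                 # 20 k'atuns of 20 tuns of 360 days
years_per_baktun = BAKTUN_DAYS / 365.2425   # mean Gregorian year length

print(round(years_per_baktun, 1))           # 394.3, "just over 394 years"
print(round(13 * years_per_baktun))         # 5125 years since the creation date
```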
This still has us thinking, what if Earth really were suddenly blown up by Vogons or by Vegans or by a space rock a little bigger than last week’s? What would be left? Anything? According to a recently-agreed principle in fundamental physical theory, the answer should be everything.
The principle, as enunciated in small capitals by popular science author Charles Seife in his 2007 book Decoding the Universe, states:
Information can be neither created nor destroyed.
As we mentioned last March, the agreement was symbolized by Stephen Hawking conceding a bet to John Preskill, who has graced these pages. Hawking underscored the point by making it a main part of the plot of a children’s novel written with his daughter Lucy. The father falls into a black hole, but is resurrected by a computer able to piece back all the information because it was all recoverable. At least in theory.
Hence even if Earth really is swallowed up later today, or if we disappear—leaving all our stored information and literary artefacts to decay within a 50,000-year time-span estimated for The History Channel’s series Life After People—all the information would in principle still exist. Is this comforting?
Perhaps not. It could be that while all natural processes conserve information, the more violent ones might embody the computation of a one-way function. It then becomes an issue of complexity theory whether the output of that function could be reverted to its pre-apocalyptic state.
Apocalypses In Math
Here are a few examples from mathematics of “extinction” events: usually the extinction was of a theory or whole approach to math.
Bertrand Russell and Gottlob Frege:
Frege was just finishing his tome on logic when the letter from Russell arrived showing that Frege’s system was inconsistent. The letter basically noticed that the set $\{x \mid x \notin x\}$ was not well-defined. This destroyed the whole book that Frege had worked so hard on for years. Frege’s reaction was recorded in his revised preface:
A scientist can hardly meet with anything more undesirable than to have the foundations give way just as the work is finished. I was put in this position by a letter from Mr. Bertrand Russell when the work was nearly through the press.
A study in being low-key as we might say today.
David Hilbert and Paul Gordan:
Gordan was known as “the king of invariant theory.” His most famous result is that the ring of invariants of binary forms of fixed degree is finitely generated. Hilbert proved his famous theorem that replaced “binary” by forms in any number of variables, replacing horribly complex arguments with a beautiful existence proof. To quote Wikipedia:
[This] almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later.
Gordan was less low-key than Frege, since his comment on Hilbert’s brilliant work was:
“This is not mathematics; this is theology.”
Oh well.
Kurt Gödel and David Hilbert:
Hilbert, again, wanted to create a formal foundation of all mathematics based on an axiomatic approach. He had already done this for geometry in his famous work of 1899. Now, Euclid did have an axiomatic system thousands of years earlier, but it was not really formal. Some proofs relied on looking at diagrams and other “obvious” facts, so Hilbert added extra notions that put geometry on a complete formal footing. For example, Hilbert added the notion of betweenness of three points: point $B$ is between points $A$ and $C$.
Of course Gödel proved via his famous Incompleteness Theorem that what Hilbert could do for geometry was impossible to do for number theory.
Stephen Kleene and Barkley Rosser:
Once Alonzo Church’s lambda calculus and Haskell Curry’s combinators were discovered in the 1930’s, it seemed natural to build systems of logic around them. That was the original intent of both Curry and Church. It was therefore a shock when Kleene and Rosser, as students of Church, showed at a stroke that they were inconsistent. The reason is that the theories’ standard of “well-defined” claimed too extensive a reach, as with Frege’s formalization of the notion of “set.” It essentially allowed defining an exhaustive countable list of well-defined real numbers, for which the Cantor diagonal number was well-defined within the system, a contradiction. Ken likens this paradox phenomenon to the collapse of the Tower of Babel.
Riemann’s Non-Conjecture Refuted:
At the very end of his famous 1859 paper which included the Riemann Hypothesis, Bernhard Riemann made a carefully-worded statement about the relationship between the prime-counting function $\pi(x)$ and the logarithmic integrals $\mathrm{Li}(x)$ and $\mathrm{li}(x)$:

Indeed, in the comparison of $\mathrm{Li}(x)$ with the number of prime numbers less than $x$, undertaken by Gauss and Goldschmidt and carried through up to $x =$ three million, this number has shown itself out to be, in the first hundred thousand, always less than $\mathrm{Li}(x)$; in fact the difference grows, with many fluctuations, gradually with $x$.
Further calculations were consistent with the inequality $\pi(x) < \mathrm{Li}(x)$ holding in general, until in 1914 John Littlewood refuted this not just once, but infinitely often. That is, he did not find a counterexample by computation, but rather proved that $\pi(x) - \mathrm{Li}(x)$ must change sign infinitely often. In fact, the first number giving a sign flip is still unknown, though it must be below about $1.4 \times 10^{316}$.
Although this is included on Wikipedia’s short list of disproved mathematical ideas, its significance is not the inequality hypothesis itself, but the fallible nature of numerical evidence. Michael Rubinstein and Peter Sarnak showed an opposite surprise: the set of integers giving a negative sign has non-vanishing density, in fact about $0.00000026$, so it is disturbing that no such $x$ is within the current range of calculation.
Mertens Conjecture Refuted:
The conjecture, which was first made in 1885 by Thomas Stieltjes, not Franz Mertens, states that the sum of the first $n$ values of the Möbius function has absolute value at most $\sqrt{n}$. That is, writing $M(n) = \sum_{k=1}^{n} \mu(k)$, it asserts

$$|M(n)| \le \sqrt{n} \quad \text{for all } n \ge 1.$$

Despite the fact that all computer calculations still support this, Andrew Odlyzko and Herman te Riele disproved it theoretically in 1985. At least it has an exponentially bigger leeway than the previous one: the best known upper bound on a bad $n$ is currently $\exp(1.59 \times 10^{40})$. Moreover the following weaker statement, which Stieltjes thought he had proved, is still open:

$$M(n) = O(\sqrt{n}).$$

The reason this is portentous is that the following slight further weakening,

$$M(n) = O(n^{1/2+\epsilon}) \quad \text{for every } \epsilon > 0,$$

is actually equivalent to the Riemann Hypothesis.
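A sketch of the numerical side (our own sieve-based code): the ratio $|M(n)|/\sqrt{n}$ stays comfortably below 1 in any range we can feasibly check, which is exactly what makes the Odlyzko-te Riele disproof striking:

```python
def mobius_up_to(n):
    """mu(k) for 0 <= k <= n via a modified sieve of Eratosthenes."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] *= -1          # flip sign once per prime factor
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0            # mu vanishes on non-squarefree numbers
    return mu

N = 10**5
mu = mobius_up_to(N)

M = mu[1]        # running Mertens sum M(n)
worst = 0.0      # max of |M(n)|/sqrt(n); n = 1 trivially gives exactly 1
for n in range(2, N + 1):
    M += mu[n]
    worst = max(worst, abs(M) / n ** 0.5)
print(round(worst, 3))   # well below the conjectured bound of 1
```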
The failure of Riemann would have a definite apocalyptic effect: it would wipe away all the many papers that assume it. It is not clear whether those papers could even be saved by the kind of “relativization” we have in complexity theory, whereby results obtained assuming $\mathsf{P} = \mathsf{NP}$ and so on may still be valid relative to oracle languages $A$ such that $\mathsf{P}^A = \mathsf{NP}^A$.
Our Scientific Neighbors’ Houses
Still, the loss of papers assuming Riemann would be nothing compared to what would happen in physics if supersymmetry were disproved, as its failure could take all of string theory down with it. The Standard Model of particle physics seems also to have survived problems the absence of the Higgs Boson would have caused, although issues with the Higgs are still causing apocalyptic reactions from some physicists. At least news today is that other bosons are behaving well according to Scott Aaronson and Alexander Arkhipov’s protocol, which is related to our kind of hierarchy collapse.
Perhaps we in computer science theory and mathematics are fortunate to experience less peril. Even so, we are left with this quotation attributed to Hilbert by Howard Eves:
One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it.
Open Problems
Do you have other favorite examples of results in the mathematical and general sciences that caused the collapse of theories?
Does Nature compute complexity-theoretic one-way functions?
Branch And Bound—Why Does It Work?
One of the mysteries of computational theory
George Nemhauser is one of the world experts on all things having to do with large-scale optimization problems. He has received countless honors for his brilliant work, including membership in the National Academy of Engineering, the John von Neumann Theory Prize, the Khachiyan Prize, and the Lanchester Prize. He has been a faculty member at Georgia Tech for many years in the ISyE school, which has been one of the top industrial engineering schools in the world for years—these events are certainly correlated.
Today I wish to talk about a class of algorithms that are used all the time in practice, that George has used in his research for decades, but that have eluded our making any definite statements about their theoretical performance.