
Descending Proofs Into Algorithms

August 28, 2016


A way to make indirect reasoning more palpable

[Image: Nicholas Saunderson, from “The Worthies of Britain” by John Bowles; Wikimedia Commons source]

Nicholas Saunderson was the fourth Lucasian Professor at Cambridge, two after Isaac Newton. He promoted Newton’s Principia Mathematica in the Cambridge curriculum but channeled his original work into lecture notes and treatises rather than published papers. After his death, most of his work was collected into one book, The Elements of Algebra in Ten Books, whose title recalls Euclid’s Elements. It includes what is often credited as the first “extended” version of Euclid’s algorithm.

Today we raise the idea of using algorithms such as this as the basis for proofs.

Saunderson was blind from age one. He built a machine for doing what he called “Palpable Arithmetic” by touch. As described in the same book, it was an enhanced abacus—not a machine for automated calculation of the kind a later Lucasian professor, Charles Babbage, attempted to build.

We take the “palpable” idea metaphorically. Not only beginning students but we ourselves still find proofs by contradiction or “infinite descent” hard to pick up at first reading. We wonder how far mathematics can be developed so that the hard nubs of proofs are sheathed in assertions about the availability and correctness of algorithms. The algorithm’s proof may still involve contradiction, but there’s a difference: You can interact with an algorithm. It is hard to interact with a contradiction.

An Example

It was known long before Euclid that the square root of 2 is irrational. In the terms Saunderson used, the diagonal of a square is “incommensurable” with its side.

Alexander Bogomolny’s great educational website Cut the Knot has an entire section on proofs. Its coverage of the irrationality of {\sqrt{2}} itemizes twenty-eight proofs. All seem to rely on some type of infinite descent: if there is a rational solution then there is a smaller one and so on. Or they involve a contradiction of a supposition whose introduction seems perfunctory rather than concrete. We gave a proof by induction in a post some years ago, where we also noted a MathOverflow thread and a discussion by Tim Gowers about this example.

We suspect that one reason the proof of this simple fact is considered hard for a newcomer is just that it uses these kinds of descent and suppositions. Certainly the fact itself was considered veiled in antiquity. According to legend the followers of Pythagoras treated it as an official secret and murdered Hippasus of Metapontum for the crime of divulging it. To state it truly without fear today, we still want a clear view of why the square root of 2 is irrational.

Our suggestion below is to avoid the descent completely. Of course it is used somewhere, but it is encapsulated in another result. The result is that for any co-prime integers {r} and {s} there are integers {x} and {y} such that

\displaystyle  rx + sy = 1.

The {x} and {y} are given by the extended Euclidean algorithm. Incidentally, this identity was noted earlier by the French mathematician Claude Bachet de Méziriac (see this review). Saunderson ascribed the general method to his late colleague Roger Cotes two pages before his chapter “Of Incommensurables” (in Book V), where he laid out full details.
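
To make the algorithm palpable in code, here is a minimal sketch in Python (our own code and naming, not Saunderson’s): given whole numbers {r} and {s}, it returns {d}, {x}, {y} with {d = rx + sy}, and {d = 1} exactly when {r} and {s} are co-prime.

    def extended_gcd(r, s):
        """Return (d, x, y) with d = r*x + s*y, where d divides both r and s."""
        if s == 0:
            return r, 1, 0
        d, x, y = extended_gcd(s, r % s)
        # d = s*x + (r % s)*y and r % s = r - (r // s)*s, hence d = r*y + s*(x - (r // s)*y)
        return d, y, x - (r // s) * y

    d, x, y = extended_gcd(9, 7)              # a co-prime pair chosen for illustration
    assert d == 1 and 9 * x + 7 * y == 1      # here x = -3, y = 4

The same call on arbitrary whole numbers returns their greatest common divisor in place of {1}, which is the form the proof below uses.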

An Old Attempt

Here is the closest classical proof we could find to our aims. We quote the source verbatim (including “it’s” not “its”) and will reveal it at the end.



Proposition 15. If there be any whole number, as {n}, whose square root cannot be expressed by any other whole number; I say then that neither can it be expressed by any fraction whatever.

For if possible, let the square root of {n} be expressed by a fraction which when reduced to it’s least integral terms is {\frac{a}{b}}, that is, let {\frac{a}{b} = \sqrt{n}}, then we shall have {\frac{a}{b}\frac{a}{b} = \frac{n}{1}\;}; but the fraction {\frac{a}{b}\frac{a}{b}} is in it’s least terms, by the third corollary to the twelfth proposition, because the fraction {\frac{a}{b}} was so; and the fraction {\frac{n}{1}} is in it’s least terms, because 1 cannot be further reduced; therefore we have two equal fractions {\frac{a}{b}\frac{a}{b}} and {\frac{n}{1},} both in their least terms; therefore by the tenth proposition, these two fractions must not only be equal in their values, but in their terms also, that is, {aa} must be equal to {n}, and {bb} to 1: but {aa} cannot be equal to {n}, because {a} is a whole number by the supposition, and {n} is supposed to admit of no whole number for its root; therefore the square root of {n} cannot possibly be expressed by any fraction whatever. Q.E.D.



The cited propositions are that two equal fractions in lowest whole-number terms must be identical and that if {a} and {b} are co-prime to {c} then so is {ab}. The proof of the latter starts with the for-contradiction words “if this be denied,” so the absence of such language above gets only part credit. This all does not come trippingly off the tongue; rather it sticks trippingly in the throat. Let’s try again.

A Proof

In fact, we don’t need the concepts of “lowest terms” or co-primality or the full statement of the identity named for Étienne Bézout. It suffices to assert that for any whole numbers {r} and {s}, there are integers {x} and {y} such that the number

\displaystyle  d = rx + sy

divides both {r} and {s}. This is what the extended Euclidean algorithm gives you.
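
For a concrete instance (numbers of our own choosing): with {r = 6} and {s = 4} one can take {x = 1} and {y = -1}, so that {d = 6\cdot 1 + 4\cdot(-1) = 2}, and {2} indeed divides both {6} and {4}.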

Then for the proof, suppose that {\sqrt{n} = \frac{r}{s}} for integers {r} and {s}. We take the corresponding {x}, {y}, and {d}, and let {r_0 = r/d}, {s_0 = s/d} be the resulting integers. Now let’s do some simple algebra:

\displaystyle  r_0 x + s_0 y = 1,\quad\text{so}\quad r_0 x = 1 - s_0 y,\quad\text{so}\quad x^2r_0^2 = (1 - s_0 y)^2;

\displaystyle  n = \frac{r^2}{s^2} = \frac{r_0^2}{s_0^2},\quad\text{so}\quad r_0^2 = s_0^2 n.

It follows that

\displaystyle  x^2 s_0^2 n = (1 - s_0 y)^2 = 1 - 2s_0 y + s_0^2 y^2.

Now divide both sides of this by {s_0} and rearrange. We get

\displaystyle  \frac{1}{s_0} = 2y + s_0 x^2 n - s_0 y^2 = \text{some integer}.

The conclusion is that {s_0 = 1}, hence {s = d}. Thus {\sqrt{n} = \frac{r}{d}}. But {d} also divides {r}. So {\sqrt{n}} is an integer—the same end as the classical proof.

This is a contradiction. But it is a palpable contradiction. For instance, of course we can see that {\sqrt{2}} isn’t an integer. Thus we claim that the effect of this proof is more concrete.
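
Speaking of palpable, here is a small self-check in Python (our own script; it repeats the extended_gcd sketch from above and uses the perfect square {n = 9} with the deliberately unreduced representation {\sqrt{9} = 15/5}, since a non-square {n} admits no rational {r/s} to feed in at all):

    def extended_gcd(r, s):
        # the same extended Euclidean sketch as above: d = r*x + s*y divides r and s
        if s == 0:
            return r, 1, 0
        d, x, y = extended_gcd(s, r % s)
        return d, y, x - (r // s) * y

    n, r, s = 9, 15, 5                        # r/s = sqrt(n), not in lowest terms
    d, x, y = extended_gcd(r, s)
    r0, s0 = r // d, s // d
    assert r0 * x + s0 * y == 1               # r*x + s*y = d, divided through by d
    assert r0 * r0 == s0 * s0 * n             # from n = r^2 / s^2
    k = 2 * y + s0 * x * x * n - s0 * y * y   # the integer the proof equates with 1/s0
    assert s0 * k == 1                        # forces s0 = 1
    assert (r // d) ** 2 == n                 # so sqrt(n) = r/d is a whole number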

Open Problems

Is this a new proof? We doubt it. But the proof is nice in that it avoids any recursion or induction. The essential point—the divisibility of {d} into {r} and {s}—is coded into the Euclidean algorithm.

Is ours at least smoother than the classical proof we quoted? The latter is from Saunderson’s book, on pages 304–305, which come soon after his presentation of the algorithm on pages 295–298.

What other proofs can benefit from similar treatment by “reduction to algorithms”?

[fixed missing n in last line of proof, some word tweaks]

11 Comments
  1. John N
    August 28, 2016 7:24 pm

    Do all of the square root proofs require recursion? I don’t see such a requirement in proof #14 (to me, perhaps the clearest one), or proof #20 (more sophisticated, but really the same idea).

    Determine the residue (mod 3) of both sides of a^2 = 2*b^2. Assuming lowest terms, they aren’t both 0 (mod 3). Then they cannot be the same.

    • August 28, 2016 8:26 pm

      That’s similar to the proof we gave back in 2010, but we made more of a meal of it by induction. Also, variation #14′′ of that proof boldfaces the use of “infinite descent.” Our new(?) proof needs no “modulo” notation or number base. The issue is what one takes as basic—for instance, whether it matters how we finessed the point of r/s not having to be in lowest terms in our proof.

  2. Peter Gerdes
    August 28, 2016 9:53 pm

    The argument that for any r, s with gcd d the Euclidean algorithm gives you numbers x, y such that d = rx+sy proceeds by descent on the remainders calculated in the algorithm. In particular, one establishes that the algorithm terminates by arguing that there can’t be any infinite descending sequence of natural numbers.

    You acknowledge that there may be some use of contradiction in the algorithm but I really don’t see how this is any better.

    Indeed, if you are happy with this kind of reasoning why not skip all the algebra and just use the following algorithm:

    Given \sqrt(n) = x_i/y_i. If y_i is 1 terminate. If not we compute x_{i+1} and y_{i+1} as follows. Search for the least d > 1 that divides both x_i^2 and y_i^2. Let x_{i+1} = x_i/d and y_{i+1}=y_i/d. One can prove by contradiction that such a d must divide both x_i and y_i. Furthermore as x_{i+1} < x_i the algorithm must terminate thus \sqrt(n) = x_i/1.

    I don't really see the advantage here. The fact that the part that proceeds by contradiction is in an algorithm doesn't really help.

  3. Peter Gerdes
    August 28, 2016 10:15 pm

    Actually I want to say that the situation is even worse than I suggested above. You are actually using a particular characterization of the results of the Euclidean algorithm in the proof. You don’t just need a list of the steps of the Euclidean algorithm and the fact that it terminates but a description of the property the result has, namely that the final remainder is a linear combination of the inputs and divides both of them.

    Think about how one would actually prove this. You would say let r_n be the result of the n-th step of the algorithm. One can establish the linear combination by simple induction. However, the most natural way to prove the latter claim is to suppose that i is the greatest index such that r_i is not divisible by r_k (where r_k is the result of the algorithm).

    But if I’m allowed to use contradiction to get a *characterization* of the output of the algorithm and then use that in my proof I can use the following trivial proof:

    Consider the following algorithm.
    Given n search for integer m < n such that m^2 = n. If such an integer is found output m/1. If not output 0. I characterize this algorithm as giving the rational sqrt of n if it exists and 0 otherwise. I can use whatever means I like to prove this description is true and then trivially use this fact to get the result.

    • rjlipton
      August 29, 2016 10:45 am

      Peter

      I think the point is this: we use descent, yes, but it is proved in a general manner that has nothing to do with square roots. It seems that this might make things easier to follow.

      But that is our opinion.

  4. kamouna
    August 29, 2016 2:46 pm

    Prof R.J.Lipton:

    What other proofs can benefit from similar treatment by “reduction to algorithms”?

    Kamouna:

    The answer to this question:”Is paradox recognition paradoxical or not paradoxical?”

    Recursion is paradoxical by its nature, or by its definition if you prefer.

    To get a job, you need experience; and to get experience you need a job.

    That’s recursion, it must be paradoxical. If you see recursion such that it is not paradoxical, then you have never understood what recursion is about.

    Ask yourself:”The lambda-calculus recursion operator “Y”; why is it called the Paradoxical recursion operator, for more than 8 decades?

    If a gang occupies your house, you go to the police station to report; the police tells you go to court. Then, you go to court, they tell you:”Go to the police”.

    The very concept of an algorithm which is necessarily built around recursion harbours a paradox. How? There exists an algorithm D such that:

    1. Alice can invent an algorithm D to solve a problem X.
    2. Bob (independently or not) invented the same D to solve the same problem X.
    ==========================================================
    The result: “Alice’s algorithm works if and only if Bob’s algorithm does not work”.

    Where is the catch:

    Answer the question:”Is paradox recognition paradoxical or not paradoxical?”

    Best,

    Rafee Kamouna.

  5. September 1, 2016 6:54 am

    If one is looking for a transparent approach that avoids explicit recursion by putting it inside a black box, then I think the standard proof that uses the fundamental theorem of arithmetic is about as clear as one could possibly hope for. I’m referring to the argument that the equation p^2=2q^2 cannot be solved in integers, because 2 occurs an even number of times in p^2 and an odd number of times in 2q^2.

    This seems to me to be much more conceptual than a proof that says, out of the blue, “Now let’s do some simple algebra.”

    Of course, in a sense it’s more complicated than the usual proof, because we don’t need anything like the full strength of FTA. Another approach would be to prove as a lemma that every number can be written uniquely in the form 2^k(2m+1). The existence is easy, and for uniqueness, if 2^k(2m+1)=2^r(2s+1), then k=r or we could divide through by the smaller of the two powers and get an even number equal to an odd number. And that implies that m=s. Armed with that lemma, we would have that the k for p^2 was even and the k for 2q^2 was odd.

  6. kamouna
    September 1, 2016 7:39 am

    Dear Gowers the Great,

    It suffices to consider the problem: "Is paradox recognition paradoxical or not paradoxical?". Posing this question will make you the most hated person in history. Considering it, you will get straight threatening emails, as you may read at my blog ("we’ll come to you to beat you," etc.).

    I recently posted this question to cstheory.stackexchange, they immediately put it on hold. I posted it in the under-graduate cs.theory.stackexchange, they did no better.

    Clearly, everybody knows that this is the last mathematical question to be asked. Thereafter, all answers are “true” if and only if “false”.

    I must greet Prof Lipton for his (many) posts about paradoxes, let alone self-defeating sentences like that of the public in the 30th of June, 2013 second revolution:

    “No for insulting the idiot president”

    such sentences sound like “music” to Prof Lipton as well as many paradoxes of mathematics. However, when it comes to the Kleene-Rosser paradox:

    It’s impossible,
    tell the sun to leave the sky,

    It’s impossible,
    ask a baby not to cry,

    Can the ocean,
    Keep from rushing to the shore,

    It’s impossible, it’s impossible.

    (Courtesy of Perry Como)

    ===========================

    Rafee Kamouna.

  7. September 1, 2016 6:06 pm

    As someone who doesn’t regularly play with algebra, I found Saunderson’s proof by contradiction easier to follow. And, of course, Gowers’s fundamental theorem of arithmetic argument is something I can remember well enough to repeat at a cocktail party (if anyone in an inebriated audience wants to go further than that then they can probably do a better job than me in any case).

    I’m posting because kamouna reminded me of a joke I found in an old manuscript by von Neumann’s brother that Sidles loaned me. In it, the German is standing on a corner in Berlin shouting, “the Kaiser is an idiot! the Kaiser is an idiot!” The police come along and arrest him for treason, and he says, “Wait! I didn’t mean our Kaiser. I, I was talking about the Austrian Kaiser.”

    They respond, “You can’t fool us, we know who the idiot is.” (ha ha)

  8. kamouna
    September 1, 2016 10:55 pm

    No for insulting the idiot Kaiser. That was the paradox. It’s possible in software, impossible in hardware. That’s why computability (software+hardware) is inconsistent.

    https://www.academia.edu/28025536/Computability_and_Complexity_Theories_are_Inconsistent

    Rafee Kamouna.

Trackbacks

  1. A Proof Of The Halting Theorem | Gödel's Lost Letter and P=NP
