
An Ozone Hole in Logic

April 1, 2022


Set Theory proved to be perpetually paradoxigenic…

“Literature and the Real Person” source

Mr. Apollinax might have made a good escort for Lofa Polir, had she lived a hundred years ago. The titular character of a poem written circa 1915 by Thomas Eliot was based on Bertrand Russell, whose 1901 paradox revealed a gaping inconsistency in set theory as it was then known. Russell spent much of the intervening years trying to repair the gap via type theory.

Today we present a recently-discovered inconsistency in the type theory of the Scala programming language, which forced a feature-chopping revision in its new version 3.0. Then, in a worldwide exclusive, we reveal a new theorem by Polir that explains why such inconsistencies are inevitable.

Ernst Zermelo had already noticed the paradox in 1899 but brushed it off because his axiom system already evaded it. Georg Cantor may have known it too; hints appear in his 1897 letters to Richard Dedekind and David Hilbert. I could call this a continuation of a theme we’ve often addressed about discoveries not being named for their first discoverers, but I will not, for this reason: The discoveree was Gottlob Frege, who immediately recognized that the treatise on set theory he was about to publish had been devastated by the paradox. And the person he heard it from, in June 1902, was Russell—not any of his own compatriots.

The poem comes with an epigram from a Greek text by Lucian satirically praising the painter Zeuxis of Herakleia: “Ω τῆς καινότητος. Ἡράκλεις, τῆς παραδοξολογίας. εὐμήχανος ἄνθρωπος.” The reference translation Eliot had to hand goes: “Such novelty! Heavens, what paradoxes! How inventive he is!” I believe, however, that Eliot intended a more-particulate reading: “Oh for his innovation! Hercules, his paradoxology! Sweet machine man!” That goes better with the poem text and with aspects of Russell that were visible to Eliot at the time. We’ll only talk about the paradoxology. Russell was well aware that other monsters besides his own were lurking, like ones that emerged in the US. What if patching paradoxes is just the way of progress? Let’s see this play out in the Scala programming language.

The Inconsistency

The root of the inconsistency is allowing projection of an abstract type. Let us first explain projection using concrete types. A standard way to represent trees and other graphs is with a nested node class, for instance:


class BinaryTree[A <: Comparable[A]] {

   class Node(var left: Node, var item: A, var right: Node)

   private var root: Node = null
   //etc.
}

A key wrinkle of Scala is that if you have two different trees T1 and T2, then T1.Node and T2.Node are separate types. This notion of path-dependent types enables the compiler to catch bugs where you might try to connect a node on one tree as a child of a node in the other tree. If there were just one BinaryTree.Node type, as is standard in Java for instance, then you could easily make this kind of bug and the compiler would not bat an eye.
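Here is a minimal sketch of this distinctness, using a stripped-down tree (no type parameter, and illustrative names not from the post) so the point stands alone:

```scala
// Each Tree instance gets its own Node type: t1.Node and t2.Node differ.
class Tree {
  class Node(val label: String)
  def mkNode(label: String): Node = new Node(label)
  def adopt(n: Node): Node = n // accepts only *this* tree's nodes
}

object PathDemo {
  def demo(): String = {
    val t1 = new Tree
    val t2 = new Tree
    val n1 = t1.mkNode("a")
    t1.adopt(n1)    // fine: n1 has type t1.Node
    // t2.adopt(n1) // rejected by the compiler: t1.Node is not t2.Node
    n1.label
  }
  def main(args: Array[String]): Unit = println(demo())
}
```

Uncommenting the rejected line yields a type-mismatch error, which is exactly the cross-tree bug being caught at compile time.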

If you really do want to combine nodes from two different trees, however, you can do so using the projection type BinaryTree#Node. The hash sign # defines this as the supertype of the Node types of all possible binary trees.

The [A <: Comparable[A]] part means that the tree can hold any type A of data that extends a trait called Comparable, whose function is to mandate a comparison function for A so that the tree can be kept sorted. The symbol >:, with the angle pointing the other way, means supertype rather than subtype.
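Both features can be sketched together, again with illustrative names: labelOf uses a projection to accept nodes from any tree, and max shows a <: bound in action (String implements Comparable[String] in the Java library):

```scala
class Forest {
  class Node(val label: String)
  def mkNode(l: String): Node = new Node(l)
}

object ProjDemo {
  // Projection: Forest#Node is a supertype of f.Node for every Forest f.
  def labelOf(n: Forest#Node): String = n.label

  // Upper bound: any A that extends Comparable[A] can be compared.
  def max[A <: Comparable[A]](x: A, y: A): A =
    if (x.compareTo(y) >= 0) x else y

  def main(args: Array[String]): Unit = {
    val f1 = new Forest
    val f2 = new Forest
    println(labelOf(f1.mkNode("x")) + labelOf(f2.mkNode("y")))
    println(max("apple", "pear"))
  }
}
```

Note that projecting on a concrete class member like this remains legal even in Scala 3; it is projection on abstract type members that was chopped.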

We could try to make our binary tree class more versatile by making it more abstract. By itself it could be a trait so as to allow deferred implementations. It could try to define a more general notion of Node that might relate more directly to stored data, for instance strings or numbers. This is not to say such abstractions are necessarily well-motivated, but the point is this:

  • The Scala compiler defines what is workable.

  • Its notion of workability is based on mathematical type theory.

  • Lots of abstract constructions are made workable to encourage creativity.

Workability is not supposed to lead to type errors, however. That is the unsoundness allowed in versions 2.x of Scala.

The Code

If you have a lot of time—time to read the longest post ever on this blog—then you can draw some motivation for the unsound code. That post shows how to represent the NBG version of set theory, due to John von Neumann, Paul Bernays, and Kurt Gödel, as an infinite graph. At the top of the graph are nodes representing the class of all sets and various other expansive types, which are symbolized by V in the post. At the bottom is a node for the null set identified with O. We can think of them as representing “zones” for possible supertypes and subtypes of some types of interest, such as string and number.

Aside from some abstract manipulation of types, the following code does nothing except compose two versions of the identity function, applied to a String. As you are welcome to try yourself, it compiles in Scala 2.x. But before you try to run it, you can ask yourself: exactly what number would the string “Hello World!” become?


/** File "Unsound.scala", mod by KWR from the last example in
https://lptk.github.io/programming/2019/09/13/type-projection.html
*/
object Unsound extends App {

   trait VZone { type Node >: String }
   trait OZone { type Node <: Double }

   def liftup[A <: VZone]: String => A#Node
      = (x => x)
   def godown[A <: OZone]: A#Node => Double
      = (x => x)

   def hole[A <: VZone with OZone]: String => Double
      = (x => godown[A](liftup[A](x)))

   println(hole("Hello World!"))
}

The point remains, however, that this program is accepted as a “theorem” in the logical theory underlying the Scala 2.x type system. That theory is thereby unsound.
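For intuition about what goes wrong at run time: after type erasure, hole amounts to an unchecked cast of a String to a Double. A sketch reproducing just that failure (an illustration of the effect, not the compiler's actual elaboration):

```scala
object CastDemo {
  // The unsound hole boils down to something like this cast.
  def attempt(): Boolean = {
    val x: Any = "Hello World!"
    try {
      x.asInstanceOf[Double] // compiles, but no Double is actually here
      false
    } catch {
      case _: ClassCastException => true // the failure Scala 2.x hits at run time
    }
  }
  def main(args: Array[String]): Unit =
    println(s"ClassCastException thrown: ${attempt()}")
}
```

So "Hello World!" never becomes any number at all: the program type-checks and then dies with a ClassCastException.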

Lofa Polir’s Theorem

A point we have been at pains to make is that this kind of happening is far from rare. Systems are proven secure and then break all the time. Creating more ways to write correct programs has created new ways to have bugs, even as the frequency of old kinds of bugs is reduced. As with the Earth’s ozone hole, the situation is not worsening but the problem persists.

Dr. Polir’s new theorem not only explains why but also argues that we should be neither surprised nor dismayed by this state of affairs. It can be given the following intuitive statement:

Theorem 1 There is a sound theory A.F. that can be represented as a complexity-bounded limit of inconsistent theories, but not as an effective limit of consistent theories.

The soundness is with respect to the standard model of arithmetic and includes every statement of arithmetic proved in set theory. The clever new idea in her proof is the manner of the proof itself: it formalizes and self-applies a criterion of feasible correctness in the limit for a sequence of incorrect proofs.

The upshot is the following:

The most productive and feasible way to do mathematics is to design inconsistent theories and fix them incrementally, rather than to demand global correctness from the get-go.

Before Dr. Polir types up her paper, she has a request for readers of this blog: What should the initials A.F. of her theory stand for? The name “Arithmetical Foundations” is both hackneyed and taken.

Open Problems

What do you think Lofa’s A.F. stands for?


[hand-fixing of content after posting while still April 1, added paragraph to intro, changed image at top]

The 2021 Turing Award

March 31, 2022

If a machine is expected to be infallible, it cannot also be intelligent.—Alan Turing

Jack Dongarra has just won the 2021 Turing Award. Congrats to him.


Dongarra is a Distinguished Professor of Computer Science in the Electrical Engineering and Computer Science Department at the University of Tennessee. He has connections to the nearby Oak Ridge National Laboratory, to Rice University, and in England to the University of Manchester—Turing’s academic home. The nub of ACM’s article says the following, with linear algebra emphasized:

Dongarra’s major contribution was in creating open-source software libraries and standards which employ linear algebra as an intermediate language that can be used by a wide variety of applications. These libraries have been written for single processors, parallel computers, multicore nodes, and multiple GPUs per node. Dongarra’s libraries also introduced many important innovations including autotuning, mixed precision arithmetic, and batch computations.

Dongarra completed his doctorate under Cleve Moler in 1980. I always think we should thank our advisors—having been one myself. They continued on parallel tracks in the 1980s. Moler had invented MATLAB in the 1970s, and with the advent of IBM PCs in the 1980s he realized their promise for bringing advanced numerical computations into wide use. Moler, John Little, and Steve Bangert co-founded MathWorks in 1984 to market their rewrite of MATLAB in C. Dongarra went on to co-write a host of mathematical tools: BLAS, LAPACK (which succeeded his work with Moler and others on EISPACK and LINPACK), ATLAS, HPCG, and much more.

Why The Turing Award?

Dongarra received the Turing for his software—code that has had significant impact in many areas of computational science, including data analytics, healthcare, renewable energy, weather prediction, genomics, and economics, to name a few. His work spans over four decades of writing code that can solve linear systems. Such systems occur in just about all of engineering and science. They challenge us because we constantly wish to solve bigger and more difficult systems. And we are able to solve bigger systems because computers continue to get faster—able to do more operations per second.

However, the challenge was and is that while computers get faster every year, how they get faster keeps changing. If they simply executed the same operations more quickly each year, exploiting them would be routine, and Dongarra would not have been positioned to win a Turing Award. It is precisely because the sources of speed have grown more complex and more involved that his work earned it.
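The core task itself is classical: solving a dense system Ax = b, which is the kernel that LINPACK and LAPACK optimize. A bare-bones sketch of Gaussian elimination with partial pivoting, with none of the blocking or tuned BLAS calls that the real libraries add:

```scala
object Solve {
  // Solve A x = b by Gaussian elimination with partial pivoting.
  // A is n x n and b has length n; both are destructively modified.
  def solve(a: Array[Array[Double]], b: Array[Double]): Array[Double] = {
    val n = b.length
    for (k <- 0 until n) {
      // Pivot: bring the row with the largest entry in column k to row k.
      val p = (k until n).maxBy(i => math.abs(a(i)(k)))
      val tr = a(k); a(k) = a(p); a(p) = tr
      val tb = b(k); b(k) = b(p); b(p) = tb
      // Eliminate column k below the pivot.
      for (i <- k + 1 until n) {
        val f = a(i)(k) / a(k)(k)
        for (j <- k until n) a(i)(j) -= f * a(k)(j)
        b(i) -= f * b(k)
      }
    }
    // Back substitution on the resulting upper-triangular system.
    val x = new Array[Double](n)
    for (i <- n - 1 to 0 by -1) {
      var s = b(i)
      for (j <- i + 1 until n) s -= a(i)(j) * x(j)
      x(i) = s / a(i)(i)
    }
    x
  }
  def main(args: Array[String]): Unit = {
    val a = Array(Array(2.0, 1.0), Array(1.0, 3.0))
    val b = Array(3.0, 5.0)
    println(solve(a, b).mkString(","))
  }
}
```

This runs in O(n^3) arithmetic; the decades of library work are about making those same operations saturate whatever hardware is at hand.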

Why So Hard?

What are some of the challenges that Dongarra had to fight, and continues to fight? We can build off the following quip by the Canadian software developer Craig Bruce:

“It’s hardware that makes a machine fast. It’s software that makes a fast machine slow.”

And in 1980 we could add: what could make an IBM PC fast?

The goal of Dongarra’s software was to run fast on the current hardware systems. The challenge over the decades is simple: as hardware systems improved in performance, they did not simply get faster. They changed how they worked in order to get faster. The systems evolved from single processors to parallel computers, multicore nodes, and multiple GPUs per node. Each change made it harder to exploit the potential performance.

Let’s assume, for example, that the hardware could not go 100X faster but could execute 100 operations at the same time. Then provided we can exploit this parallelism we can make the hardware seem to be 100 times faster. But if we cannot exploit this, we are in trouble. This is the key issue that Dongarra faced. This challenge is exactly what made his work so difficult and important.
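The 100-way idea can be sketched with a dot product split across concurrent workers, with scala.concurrent Futures standing in for real multicore or GPU kernels:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object ParDot {
  // Dot product split into `chunks` independent pieces computed concurrently.
  def dot(x: Array[Double], y: Array[Double], chunks: Int): Double = {
    val n = x.length
    val step = (n + chunks - 1) / chunks // ceiling division
    val parts = (0 until n by step).map { lo =>
      Future { // each chunk is an independent unit of parallel work
        var s = 0.0
        var i = lo
        val hi = math.min(lo + step, n)
        while (i < hi) { s += x(i) * y(i); i += 1 }
        s
      }
    }
    Await.result(Future.sequence(parts), Duration.Inf).sum
  }
  def main(args: Array[String]): Unit = {
    val ones = Array.fill(1000)(1.0)
    println(dot(ones, ones, 8))
  }
}
```

The hard part in practice is everything this sketch ignores: chunk sizes that fit caches, memory bandwidth, and summation order, which is why the libraries must be retuned for each hardware generation.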

The Test of Time

The solutions found by Dongarra and his co-workers were definitive enough, and the core of linear algebra in LINPACK pure enough, to yield a salient benchmark of hardware power. The LINPACK benchmark suite was accepted by the early 1990s as the standard measure of hardware power. A decade ago, they began updating it to involve HPCG.

Thus it is not enough to say that this work has stood the test of time: it is the test of time. And this is carrying into the future. A high LINPACK score in 2013 was an early indicator of the power achievable by D-Wave’s quantum adiabatic hardware. An article last month titled “The Race to Quantum Advantage Depends on Benchmarking” includes LINPACK as a contender. There is recent work toward a more-tailored quantum LINPACK benchmark.

Open Problems

Some interesting milestones about the Turing Awards:

  • The first recipient, in 1966, was Alan Perlis, an American computer scientist who wrote the compiler for the ALGOL programming language.

  • The first woman to win was Frances Allen, in 2006, for her work in compiler optimization, which contributed to the development of parallel execution in multiprocessing.

  • The youngest recipient was Donald Knuth, who was 36 when he received the award in 1974 for his work on algorithms and computer programming.

  • The oldest recipient was Alfred Aho, who was 79 when he received the award in 2020 for his work on algorithms and the theory of programming language implementation.

I knew all the above winners pretty well. Rich DeMillo and I wrote a paper with Perlis in 1979.

Turing himself was also a pioneer in AI, of course. He thought about how computers could reason, how they could think. What directions will the Turing Awards take in the future—and will they reflect this side of Turing’s vision more?

Waiting For Self-Deriving Cars

March 26, 2022


Once you trust a self-driving car with your life, you pretty much will trust Artificial Intelligence with anything—Dave Waters.

ITProToday src

Keith Kirkpatrick is the author of an interesting CACM article on self-driving cars. It is titled “Still Waiting For Self-Driving Cars” and appears in the news section of this month’s issue.

Today we discuss why it has been so difficult to get self-driving cars started.

Kirkpatrick’s article starts:

Over the past decade, technology and automotive pundits have predicted the “imminent” arrival of fully autonomous vehicles that can drive on public roads without any active monitoring or input from a human driver. Elon Musk has predicted his company Tesla would deliver fully autonomous vehicles by the end of 2021, but he made similar predictions in 2020, 2019, and 2017. Each prediction has fallen flat, largely due to real-world safety concerns, particularly related to how self-driving cars perform in adverse conditions or situations.

An Issue

As printed in the current CACM issue, his article says the following on page 13:

A potential intermediate solution currently being tested in Germany is to utilize remote drivers to control vehicles. Vay, a Berlin-based startup, has been testing a fleet of remote-controlled electric vehicles in that city and plans to roll out a mobility service in Europe and potentially the U.S. this year. The service will allow customers to order a remote-controlled car and have it drive them to their desired destination; on arrival, they get out if the vehicle and leave it to a human teledriver miles away to either park the vehicle or steer it to the next client.

I claim this shows the issue we have with automatic driving systems. There is a typo in the above paragraph, one that shows what we are up against in the attempt to make such systems.

DO YOU SEE IT?



We claim that a typo like this in a CACM article is part of the reason it is so hard to make self-driving cars. Do you agree? Or is the typo not critical? While we’re at it, does the title of this post have a typo? Read on…

Ken contributes a more difficult example of this kind, one he used while addressing the same issue of AI advances in his department’s Freshman Seminar. It relates to a later section in Kirkpatrick’s article, where he discusses the issue of

…testing to ensure vehicle navigation systems understand the complex social interactions that often occur between oncoming and adjacent drivers, or drivers and pedestrians. Generally, if a pedestrian is about to cross or is crossing a street, the driver and pedestrian will make eye contact, and will use nonverbal cues to indicate the direction and speed of their movement.

Over to Ken:

Ken’s Example

Here is the relevant portion of the White House transcript of Barack Obama’s July 2012 campaign speech in Roanoke, Virginia—the one with the notorious phrase “you didn’t build that”:

The question we could submit to Google—whose own automated-driving efforts have had difficulties with interpreting rules—is:

What does the word “that” in “you didn’t build that” refer to?

By rules of linguistic derivation, the answer seems clear:

  • The word “that” refers to the most recent noun, which is “business.”

  • The dashes ‘- -‘ connect two parts that refer to each other.

  • The word cannot anyway refer to “roads and bridges” because that is plural whereas “that” is singular, like “business.”

  • (The preceding point is not a self-contradiction because in quotes, “roads and bridges” is a phrase—singular.)

Now please take a minute-plus to listen to the actual delivery of this part of the speech (you may need to click back from 0:00:08 to 0:00:00 to get the start):


First, the transcript is missing a word: Obama actually said “that—you didn’t build that.” Perhaps having a rhetorically emphasized that right after “business” makes the semantic designation even clearer? But then notice that the part “if you’ve got a business” was delivered in a quick and parenthetical manner inside the full phrase “Somebody invested in roads and bridges … you didn’t build that.”

The interpretation that the “that” refers to infrastructure is supported by the speech’s immediate segue to the Internet, sandwiched around “somebody else made that happen.” Going out to sources, Obama was channeling a point already notoriously made by Elizabeth Warren in a 2011 speech. But without going out to sources—something we cannot expect an AI to do in the moment—there is Obama’s next sentence (also included in the speech clip):

“The point is, is that when we succeed, we succeed because of our individual initiative, but also because we do things together.”

Whether Obama intended to say that people did not build their businesses can still be argued, but that’s not the real point. My first point is that AIs based on current art for ascribing meanings not only would make that ascription, but should do so for overall consistency. And the second, larger, point at the end of my slides, where I get to hard AI problems, is this:

We base our confidence in developing AI systems on Alan Turing’s principle that whatever the human mind can resolve, a computer can be programmed to apprehend and execute. But what if important data, on fine scales, concern matters that our human minds cannot resolve?

CACM puts in a bubble Kirkpatrick’s quoting another expert that “it will take years and massive compute power to train self-driving systems to understand the nonverbal cues that pass between drivers and pedestrians.” We believe the task and need of getting cars to self-derive meanings is greater still.

Open Problems

Do these issues say anything about the difficulty of getting code right? We think so, but what do you think?


[some word changes; I seem unable to fix the endpoints of my clip on the C-SPAN website.]

Meta Wishes

March 17, 2022


When faced with two choices, simply toss a coin. It works because in that brief moment when the coin is in the air, you suddenly know what you are hoping for.

Neil L. is a leprechaun. He has been visiting me or Ken once every year since we started GLL. We had never seen a leprechaun before we began the blog—there must be some connection.

Today we want to share the experience we each had with him this morning of St. Patrick’s Day.
Read more…

Are These the Last Digits of Pi?

March 14, 2022


Ghoulish reflections on whether mathematics is emergent

Composite of src1, src2

Thomas Keller and Heiko Rölke led a team at the University of Applied Sciences in Graubünden, Switzerland, that set a new record for the computation of π last August. They computed 62.8 trillion digits of π. The last ten digits they obtained are 7817924264.
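Record computations use far more powerful series (Chudnovsky's, with independent checks), but the flavor of computing digits can be sketched with Machin's formula π = 16 arctan(1/5) − 4 arctan(1/239) in arbitrary-precision arithmetic:

```scala
import java.math.MathContext

object PiSketch {
  val mc = new MathContext(50) // work with 50 significant digits

  // arctan(1/x) by its Taylor series; 40 terms is ample at this precision
  def arctanInv(x: Int, terms: Int): BigDecimal = {
    val inv = BigDecimal(1, mc) / BigDecimal(x)
    val x2 = inv * inv
    var term = inv                 // term holds inv^(2k+1)
    var sum = BigDecimal(0, mc)
    for (k <- 0 until terms) {
      val signed = if (k % 2 == 0) term else -term
      sum += signed / (2 * k + 1)
      term *= x2
    }
    sum
  }

  def digits: String = {
    val pi = arctanInv(5, 40) * 16 - arctanInv(239, 40) * 4
    pi.toString.take(16)           // "3." plus fourteen digits
  }

  def main(args: Array[String]): Unit = println(digits)
}
```

Scaling this kind of thing from fourteen digits to 62.8 trillion is, of course, where the record-setters' real work lies.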

Today, we wish people Happy Pi Day amid wishes for happier days overall.
Read more…

I Am Now A Cyberman

March 5, 2022


Cybermen: You will be upgraded.
Cyberman: Upgrading is compulsory.

I am now carrying a full-power CPU embedded in my chest. If I could use it directly I would have new powers, but it is dedicated to monitoring my heart function. Alas.

The Procedure

My heart has not been beating well. So they had to implant a device that monitors the heart and can also restart the beating if needed. It was not a long operation—about one hour—but it took two days of recovery in the hospital.

The worst part of the recovery is that it is not restful. And it is boring. I am now home and resting much better.

Am I Turing Universal?

Well, according to Alan Turing, I was universal as soon as I began to use my brain. But can I leverage my new processing power?

The doctors told Kathryn that some ideas from DNA computing went into my ICD device. Of course, I cannot implement any DNA computing algorithms on my device.

I wonder, however, whether I can get useful outputs by varying the input—such as my pulse. Maybe I do not want to experiment on myself there. But it is not too far-fetched to imagine that in the not too distant future many of us could be controlling computations with our heartbeats. Or at least powering them while controlling with brainwaves. Call it “heartbeat computing”—?

Ken has a more immediate concern: could the computing power in the device be used to cheat at chess? Already many chess tournaments prohibit electronic physio devices, even watches, because of this potential. A cognitive implant, however, could bend the rules on what it means to be a human player.

Open Problems

I want to thank all who sent some best wishes. I really thank you all for your thoughts.

I must also especially thank my dear wife, Dr. Kathryn Farley, for her tremendous support. And also thank Ken for helping her make some calls on our behalf. Thank you both.

ACM: Celebrating 75 Years

February 26, 2022

Age is strictly a case of mind over matter. If you don’t mind, it doesn’t matter—Jack Benny

Jennifer Chayes, Francine Berman, Susan Dumais, Deborah Estrin, James Hendler, Michael Jordan, Butler Lampson, and Valerie Taylor are the members of a special committee charged with helping organize the 75th anniversary of the ACM. The ACM was founded on September 15, 1947; hence the 75-year celebration. Special events are planned during 2022, leading up to a memorable day of panels in June on topics central to the future of computing. The committee has members from both universities and companies:

  • Jennifer Chayes (Chair), University of California, Berkeley
  • Francine Berman, University of Massachusetts Amherst
  • Susan Dumais, Microsoft Research
  • Deborah Estrin, Cornell University
  • James Hendler, Rensselaer Polytechnic Institute
  • Michael Jordan, University of California, Berkeley
  • Butler Lampson, Microsoft Corporation and MIT
  • Valerie Taylor, Argonne National Laboratory and CMD-IT

In random order:

Read more…

Simons Awards for 2022

February 23, 2022

To receive this award from an organization I admire so much makes me totally happy and grateful—James Welch

Jin-Yi Cai is one of the 2022 winners of the Simons awards in mathematics. Congrats to him.

Jin-Yi is a dear and long-time friend of ours. We are thrilled to see that he has been selected for this high honor. In the foundation’s words: “The Simons Fellows program extends academic leaves from one term to a full year, enabling recipients to focus solely on research for the long periods often necessary for significant advances. The foundation is proud to support the work of these distinguished scientists.”

Read more…

Inequalities on the Gridiron

February 13, 2022


Q: Why are Buffalo Bills unlike Dollar Bills? A: Dollar Bills are good for 4 quarters

Reddit “outsmarting math” source

Josh Allen is not appearing in today’s Super Bowl. He led the Buffalo Bills to not just one but two go-ahead touchdowns in the final 2:00 of the game at Kansas City three weeks ago, but the Bills lost both leads and KC won in overtime.

Today we note an inequality that treats the real numbers like a football gridiron.
Read more…

National Academy of Engineering Elects

February 10, 2022


The problem in this business isn’t to keep people from stealing your ideas; it’s making them steal your ideas!—Howard Aiken

Composite crop of homepage photos

Taher Elgamal and Anna Karlin are among 111 new US members of the National Academy of Engineering (NAE). That’s one-hundred-and-eleven, not seven in binary.

Today we congratulate them and all the new members.
Read more…