
Susan Horwitz 1955–2014

December 2, 2014

Susan and a paradigm shift in software engineering

[Photo of Susan Horwitz]

Susan Horwitz was—it is hard to type “was”—a computer scientist who did important work in the area of software engineering. She passed away this summer on June 11th.

Today Ken and I wish to talk about Susan’s work.

Ken and I send belated, but heart-felt, thoughts to her family including her husband Tom Reps. I, Dick, knew her for quite some time and found her always a joy to be around. She was too young to have died. Too young.

One thing I mean is that she started out as an undergraduate music major—not just performing music, but studying worldwide cultural approaches to music. She had visiting appointments and sabbaticals all over Europe. Thomas Ball of Microsoft Research wrote a tribute in August with more on this side of her as well as her professional work. Rather than duplicate what he and others have said of her accomplishments, we will try to catch the larger spirit of the problems that engaged her.

Software

One of the great paradoxes of computer theory, in my opinion, is that we are always looking left and right for new applications, while a very important one stands just in front of us. The “new applications” are many and include biology, economics, computer architecture, and materials design to name a few. But the one that is right there, and has been there from the dawn of computing, is software. All digital systems from cloud computing to personal computing really are immense amounts of software. Without software there would be no smartphones, no laptops, no modern airplanes, cars, and soon not even toasters.

Yet the paradox is that theorists tend to find the new areas much more exciting than software. Take a look at the titles from any recent theory conference: there are few, if any, papers on software. I never see “Quantum approaches to software design,” or “Approximation methods for rapid code optimization via Semi-Definite Programming.” Have you?

I think the reason is simple: software engineering is just plain hard. We have Alan Turing to thank, since everything that one might want to do with software hits the Turing Tarpit, as Alan Perlis called it. That is, when you consider fundamental questions, the features of programming languages that give them human appeal take a back seat to information properties that might as well be analyzed in lambda calculus or some graph-based computational network model. Questions about software, from the original “does it halt?” to “is it correct?”, all run up against fundamental computational limits.

A Paradigm Shift?

The situation with software is even more difficult because it also runs up against limits of humans. The problem with software is that it is written by humans to perform some task, often in a complex environment, and the lack of a clear understanding of what it should do in all situations leads to errors—“bugs.” Even if one attempts to prove that a program is correct, one of the difficult tasks is to understand what correct means. Is correct completely defined? Does it cover the complex environment that the software will operate in? All of this is why software engineering is so hard.

A shift that has happened over the years is the key to the success that Susan had in her work, and more generally it accounts for why the field has made more progress in the last few years than over previous decades. The shift is that one really—all claims aside—does not look for full correctness of a program. Rather, one checks that the program satisfies certain important properties.

For example, a device driver may have a complex job to do in talking to real-time hardware and also to the rest of the software system. Defining its full correctness criterion is probably very difficult indeed. But we all agree that the driver should have a simple property like termination. When the driver is called it should return some value. The correct one would be nice, but any value is better than having the driver “hang” and never terminate. This is an example of the shift:

Replace complex correctness properties with simpler desirable properties.
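
To make the shift concrete in miniature, here is a toy sketch of our own, not drawn from any of Susan’s papers: a driver-style routine whose full correctness would need a specification of the device it talks to, but whose termination is immediate because its only loop is bounded.

    import java.util.function.IntSupplier;

    // A toy sketch (our illustration): the value returned may or may not be the
    // "correct" one, but the call provably terminates because the loop is bounded.
    public class DriverSketch {
        static final int MAX_RETRIES = 8;   // an assumed bound, purely for illustration

        // Poll a (hypothetical) device until it reports a non-negative status.
        static int readStatus(IntSupplier device) {
            for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
                int status = device.getAsInt();
                if (status >= 0) {
                    return status;          // possibly not the value we wanted...
                }
            }
            return -1;                      // ...but the caller always gets an answer
        }

        public static void main(String[] args) {
            int[] calls = {0};
            // Simulated flaky device: fails twice, then answers 42.
            System.out.println(readStatus(() -> calls[0]++ < 2 ? -1 : 42));
        }
    }

Termination here is exactly a “simpler desirable property”: we can certify it without ever saying what the right status value is.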

I am not really sure who should get the main credit for this shift, but Susan and her colleagues certainly were part of this brilliant insight.

Here is a mathematical example of this type of shift. Suppose we have some function {f} defined on the unit interval {[0,1]}. We may wish to be able to prove that the function is exactly the one that we want, but that may be too hard to do. However, we may easily be able to show that {f} is continuous or even smooth. These properties are much weaker than what we might desire, yet they are important properties. They may be sufficient for the result that we are trying to prove. This type of paradigm shift is what caused software correctness to be more attackable than ever before. It curiously coincided with the advances made by Susan and her colleagues in their work on program analysis.
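
For instance, here is one concrete way the analogy could play out; the setup is ours and chosen only for illustration. Suppose all we know about {f} is that

    {f(x) = \int_0^x g(t)\,dt} for some unknown {g} with {|g(t)| \le M} on {[0,1]}.

Then for any {x, y \in [0,1]},

    {|f(x) - f(y)| = \left|\int_y^x g(t)\,dt\right| \le M\,|x - y|},

so {f} is Lipschitz, hence continuous, even though we have not pinned down {f} itself.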

Susan’s Work

Susan was an expert on data flow and pointer analysis, and more generally on program analysis. For example, with Reps and Mooly Sagiv she created the concept of grammar-restricted reachability—now called context-free language reachability. Their POPL 1995 paper “Precise Interprocedural Dataflow Analysis via Graph Reachability” gave what is now known as the RHS algorithm, and showed how many hard program-analysis problems can be solved efficiently by transforming them into graph reachability. Later Thomas Ball, whom we mentioned above, used the RHS algorithm as part of Microsoft’s SLAM checking project. This work had a real impact on the world: over 85 percent of the crashes in versions of Windows at the time were found and fixed by this project.
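
The RHS paper gives a precise “tabulation” algorithm that restricts attention to paths whose call and return edges match like balanced parentheses. To convey just the flavor of the reduction, here is a plain worklist computation of ordinary graph reachability in Java; this is our simplified sketch, not their algorithm, and it drops the context-free restriction entirely.

    import java.util.*;

    // Simplified sketch: once dataflow facts are encoded as graph nodes, asking
    // "which facts can hold?" becomes asking "which nodes are reachable?", which
    // a worklist answers in time roughly linear in the number of edges.
    public class Reachability {
        static Set<String> reachable(Map<String, List<String>> edges, String start) {
            Set<String> seen = new HashSet<>();
            Deque<String> worklist = new ArrayDeque<>();
            seen.add(start);
            worklist.add(start);
            while (!worklist.isEmpty()) {
                String node = worklist.remove();
                for (String next : edges.getOrDefault(node, List.of())) {
                    if (seen.add(next)) {       // true only the first time we meet 'next'
                        worklist.add(next);
                    }
                }
            }
            return seen;
        }

        public static void main(String[] args) {
            // A made-up graph: "entry" reaches fact1 and fact2, but not fact3.
            Map<String, List<String>> g = Map.of(
                "entry", List.of("fact1"),
                "fact1", List.of("fact2"),
                "fact3", List.of("fact2"));
            System.out.println(reachable(g, "entry"));
        }
    }

As noted above, Ball and his colleagues ran this kind of reduction at industrial scale inside SLAM.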

By the way, Ball wrote:

“SLAM originally was an acronym, but we found it too cumbersome to explain. We now prefer to think of ‘slamming’ the bugs in a program.”

There is archaeological evidence that it stood for “Software, Languages, Analysis, and Model-Checking.” This work was summarized in 2002 by Bill Gates as:

[T]hings like even software verification, this has been the Holy Grail of computer science for many decades but now in some very key areas, for example, driver verification we’re building tools that can do actual proof about the software and how it works in order to guarantee the reliability.

Susan’s Theory Work

Susan did some hardcore theory work too. She proved that certain natural program-analysis problems, of the kind we discussed already, cannot even be solved approximately in polynomial time unless P=NP. Here is a quote from the introduction of her paper, “On the Non-Approximability of ‘Points-to’ Analysis”:

Unfortunately, in this paper, we show that no such approximation algorithms exist, either for flow-sensitive or for flow-insensitive pointer analysis, unless P=NP. In the flow-insensitive case, we show that if there were such an algorithm we could solve the NP-complete Hamiltonian Path problem in polynomial time. In the flow-sensitive case, we prove a stronger result. We relax our notion of accuracy and require that the approximate solution be bounded by a polynomial in the size of the precise solution (instead of by a constant) and show that even such weaker approximate algorithms are infeasible unless P=NP. We prove this by showing that if there were such an algorithm we could solve the NP-complete 3-SAT problem in polynomial time.
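
To see what a “points-to” analysis is after, here is a toy Java fragment of our own devising (the paper itself works with C pointers); the comments record the kind of facts a flow-insensitive versus a flow-sensitive analysis would report.

    // Toy illustration (ours, not from the paper) of points-to facts in Java terms.
    class PointsToExample {
        static void example(boolean flag) {
            Object a = new Object();   // allocation site A:   a -> {A}
            Object p = a;              //                      p -> {A}
            Object b = new Object();   // allocation site B:   b -> {B}
            p = b;                     // flow-sensitive:   p -> {B} at this point
                                       // flow-insensitive: p -> {A, B} everywhere
            if (flag) {
                p = a;                 // after the branch, both styles give p -> {A, B}
            }
            System.out.println(p);
        }
    }

The surprise in her theorem is that even loose over-approximations of such sets are intractable to compute in general, unless P=NP.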

Ken has a tiny example that conveys some of the flow-analysis flavor without getting onto the scale of the papers. Consider this code for swapping the two nodes after those referenced by pointers p and q in a circularly linked list—the point was to write it without any if-then tests for small cases or nodes p and q being adjacent or identical:

    Node temp = p.next;            // remember p's successor
    p.next = q.next;               // p now points to q's old successor
    q.next = temp;                 // q now points to p's old successor
    temp = temp.next;              // follow one more link from p's old successor
    q.next.next = p.next.next;     // the node now after q takes the next-link of the node now after p
    p.next.next = temp;            // the node now after p takes the saved link

For instance, if p points to node A in ABCDEF and q points to C, then B and D are swapped to give ADCBEF. The code works correctly in all cases, and can be understood as swapping the links into and out of B and D. But can you prove it? Ken conceived it while he was at Cornell as a challenge for the pre/post-condition proof methods championed by David Gries and others. The methods seemed unavoidably to incur a bushy case analysis, and Ken doesn’t recall whether he or anyone else completed it by hand.
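
Short of a by-hand proof, one can at least let a machine grind through the cases. Here is a small brute-force harness of our own, a test rather than a proof, which runs the six statements for every pair of positions p and q in circular lists of sizes three through eight and compares the result against the expected swap:

    import java.util.*;

    // Brute-force check (a test, not a proof) of the six-statement swap above.
    public class SwapCheck {
        static class Node { char label; Node next; Node(char c) { label = c; } }

        // Build a circular list labeled 'A', 'B', ... and return its nodes in order.
        static Node[] build(int n) {
            Node[] nodes = new Node[n];
            for (int i = 0; i < n; i++) nodes[i] = new Node((char) ('A' + i));
            for (int i = 0; i < n; i++) nodes[i].next = nodes[(i + 1) % n];
            return nodes;
        }

        // The six statements from the post, verbatim.
        static void swapAfter(Node p, Node q) {
            Node temp = p.next;
            p.next = q.next;
            q.next = temp;
            temp = temp.next;
            q.next.next = p.next.next;
            p.next.next = temp;
        }

        // Walk n links from 'start'; return the labels seen, or null if the walk
        // does not come back to 'start' (i.e., the cycle was broken).
        static String walk(Node start, int n) {
            StringBuilder sb = new StringBuilder();
            Node cur = start;
            for (int i = 0; i < n; i++) { sb.append(cur.label); cur = cur.next; }
            return cur == start ? sb.toString() : null;
        }

        // Swap the labels after positions i and j in the original order A, B, C, ...
        static String expected(int n, int i, int j) {
            char[] labels = new char[n];
            for (int k = 0; k < n; k++) labels[k] = (char) ('A' + k);
            char t = labels[(i + 1) % n];
            labels[(i + 1) % n] = labels[(j + 1) % n];
            labels[(j + 1) % n] = t;
            return new String(labels);
        }

        // Two label strings describe the same circular order iff one is a rotation
        // of the other.
        static boolean sameCycle(String got, String want) {
            return got != null && got.length() == want.length() && (want + want).contains(got);
        }

        public static void main(String[] args) {
            for (int n = 3; n <= 8; n++)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++) {
                        Node[] nodes = build(n);
                        swapAfter(nodes[i], nodes[j]);
                        if (!sameCycle(walk(nodes[0], n), expected(n, i, j))) {
                            System.out.println("mismatch at n=" + n + ", i=" + i + ", j=" + j);
                            return;
                        }
                    }
            System.out.println("All cases match the expected swap.");
        }
    }

A run that prints the final message confirms the claim only for the sizes tried; the proof Ken had in mind would have to cover all sizes, including the adjacent and identical cases, uniformly.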

Nevertheless, Susan and her co-workers programmed real tools for cases of far higher complexity. One of my favorites is presented in her 2003 paper with her student, Suan Yong, titled “Protecting C programs from attacks via invalid pointer dereferences.” Again, the paradigm shift is that you need not try to make the programs perfectly secure, but can demonstrate the ability to stop many kinds of attacks, and maybe not telegraph the weaknesses that inevitably remain.

Open Problems

The computer science community is poorer without Susan. We will miss her smile, her activities to promote women in computing such as WISE, and her research. Again our thoughts to Susan’s family, to Tom, and to all whose lives were touched by her.

[some word and link fixes]

Comments
  1. December 2, 2014 3:24 pm

    Fun With Pointers ☞ Theme One

  2. December 3, 2014 12:04 pm

    > the field has been able to make more progress in the last few years than over previous decades. The shift is that one really—all claims aside—does not look for correctness in any program. Rather the shift is to look to see that the program satisfies certain important properties.

    This is less than convincing given that type systems have been understood as guaranteeing the absence of a small class of dynamic errors for a long time now (the “well-typed program cannot go wrong” slogan at least goes back to Milner in 1978), and that the field has also seen strong progress in the area of proving detailed correctness properties of large programs (a C compiler, an operating system micro-kernel, a webserver…). The results of program analyses at large (not just type-systems) are indeed encouraging.

    It may be time to review the predictions made in “Social Processes and Proofs of Theorems and Programs”; it made many valid points in 1979, but the field has also changed a lot, in interesting ways, since then.

  3. December 6, 2014 10:58 am

    Susan was a wonderful colleague at UW Madison for many years. In addition to being an accomplished researcher in PL, she contributed to the Dept and the University in multiple ways. It is a huge loss and we will miss her greatly.

