
Breaking All the Security Rules

May 10, 2010


A radical approach to computer security

Andrew Odlyzko is a number theorist, a complexity theorist, a cryptographer, and a deep thinker. He has proved some very beautiful theorems, written non-trivial software, and managed large projects. One of his coolest projects is the search for a “bad” zero of the Riemann Zeta function. For example, he points out:

Thus zero number $10^{21} + 1$ is actually

$$\tfrac{1}{2} + i \cdot 144{,}176{,}897{,}509{,}546{,}973{,}538.49806962\dots$$

As expected, its real part is exactly $\tfrac{1}{2}$.

Today I will talk about a presentation he gave the other day called: Providing security with an insecure cyber-infrastructure. This talk was presented at a general conference on CyberInSecurity—note the “In.”

Also at the conference was another neat talk by Craig Gentry on How the cloud can process data without seeing it. I will talk about his beautiful work in the near future.

Doug Solomon, the CTO of IDEO, gave a very cool talk on user interfaces. His work is not related to Andy’s, but I cannot resist giving a pointer to a video parody of the Facebook interface: look up “Facebook in Reality.” It is well worth two minutes if you need a chuckle, but the humor is a bit out there; it is rated R, so I will leave out a direct link.

The Fundamental Tenets of Security

Modern computer security is based on several basic tenets:

  1. People want secure systems.
  2. People can implement systems correctly.
  3. People should assume the system is completely known to the adversary.

The first is clear: who would argue that we want insecure systems? Of course we want systems that are secure and protect our data, for example. The second is also clear: we must be able to implement the security protocols correctly; if we cannot implement them properly, then what good are they? Finally, as Claude Shannon put it bluntly, “the enemy knows the system.” Modern security always assumes the system is known; only the “keys” are secret.

Against the Tenets of Security

Andrew’s talk was surprising: it went against the basic tenets of security, and against ideas he said he had believed for decades. Basically, he argued against each of the tenets I just gave. Let’s look at his arguments for each, and then turn to his solution for security in the future.

• People want secure systems. He pointed out a really secure system would be useless. Even if we could build a secure system, it would be difficult, if not impossible, to use. For example, a system where secretaries cannot commit “forgery”, sign for their bosses, will not work in the real world.

Another way Andy attacked this point was to discuss the importance of digital signatures, one of the great results of modern cryptography. He worked on such systems in the 1980s, and made important contributions. For example, Andy did seminal work on breaking a number of signature schemes.

But, he added, we have long used faxed signatures to buy and sell many things, including large purchases like houses. Faxes are not just lacking a security proof; they are not secure, period. Yet commerce relies on them. Today small businesses deposit checks by first scanning them and then sending them to their bank via email, which is also completely insecure. So where are the digital signature applications?
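
As an aside, for readers who have not seen one: here is a minimal sketch of what signing and verifying look like, using “textbook” RSA on toy numbers. This is only an illustration, not anything from Andy’s talk, and it is not secure; real schemes use large keys, padding, and a cryptographic hash of the message rather than the raw digest below.

/* Minimal sketch of a digital signature: "textbook" RSA on toy numbers.
 * Purely illustrative and NOT secure: real schemes use large keys,
 * padding, and a cryptographic hash of the message. */
#include <stdio.h>
#include <stdint.h>

/* modular exponentiation: (base^exp) mod m */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    /* toy key: n = 61 * 53 = 3233, public exponent e = 17, private d = 2753 */
    const uint64_t n = 3233, e = 17, d = 2753;

    uint64_t digest = 1234;                  /* stand-in for a hash of the document */
    uint64_t sig    = modpow(digest, d, n);  /* signer: only the private key can do this */
    uint64_t check  = modpow(sig, e, n);     /* verifier: anyone with the public key */

    printf("signature = %llu, verification %s\n",
           (unsigned long long) sig,
           check == digest ? "succeeds" : "fails");
    return 0;
}

The asymmetry is exactly what a fax or a scanned check lacks: anyone with the public exponent can check the signature, but only the holder of the private exponent can produce it.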

• People can implement systems correctly. His point is one many may disagree with, but Andy was very convincing. He claimed that, toy systems aside, a system of any complexity is not secure: people just cannot build complex systems that are correct. If they are not correct, they cannot be secure; there will be “holes” that allow cyber-attacks. I think he is right: whether you believe in formal methods or informal methods, there is no way to make really secure systems.

To amplify this point I would add: modern software systems are huge, often distributed, and consequently will always have bugs. Humans seem unable to avoid this; Andy’s point is valid, in my opinion. Correct real systems cannot be constructed.

• People should assume the system is completely known to the adversary. Andy disagreed with this point too, and it leads to his “solution” for building secure systems. I will discuss it in the next section.

Then Andy made a blanket statement, perhaps his most controversial point, which he called: The dog that did not bark. This is, of course, a variant of the famous exchange in the Sherlock Holmes story Silver Blaze by Sir Arthur Conan Doyle. There Holmes answers

“the dog did nothing in the night-time;”

with the famous, “that was the curious incident.”

Andy argued cyberspace is horribly insecure, but where are the big disasters? Apparently the cost of all cyber-crime to banks is just about equal to that of actual bank robberies. His point is even more interesting since just last Thursday (May 6, 2010) some error, human or computer, caused the US stock market to lose about one trillion dollars in minutes. Now I call this a real “barking dog.”

In summary, his points are: cyberspace is insecure, it will always be insecure, and it does not matter. I am not sure I agree completely with his points, but they form quite an interesting position.

Andrew’s Solution

Andy said there is no solution, no easy way to secure cyberspace, no way to be perfectly safe. But he had one idea that runs counter to the principles of modern cryptography, principles he helped create when he worked in the field. He directly attacked the tenet: People should assume the system is completely known to the adversary. This assumption plays the same role that our beloved worst-case model plays in complexity theory. If we make the worst-case assumption that the adversaries know the system and we can still prove it is secure, that is wonderful. Unfortunately, Andy’s other points claim the premise is false: no complex system can be implemented correctly, so a guarantee proved about the idealized system tells us little, just as “if pigs can fly” lets us conclude anything. Since no system is secure, we must move away from the worst-case model in security.

He does have an approach, and it is security through obscurity. He said let’s build messy systems, not clean ones. He really means this, and he had several concrete suggestions. He said code obfuscation, or “spaghetti code,” would make the adversary’s job all the more difficult. Andy notes the black hats use code obfuscation, so why should we not?

A good example is code that has been obfuscated as part of the yearly contest.
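
The yearly contest is, I believe, the International Obfuscated C Code Contest; its entries are far too intricate to reproduce here, but the following tiny hypothetical sketch gives the flavor. Both statements in main print the same greeting, yet the second hides the string, and the intent, behind character arithmetic.

/* Hypothetical illustration of source-level obfuscation (not a contest entry).
 * Both halves of main print "hello, world"; the second stores every byte of
 * the string shifted up by one and undoes the shift at run time. */
#include <stdio.h>

int main(void) {
    /* plain version: the intent is obvious */
    puts("hello, world");

    /* obfuscated version: same output, intent hidden behind arithmetic */
    const char *t = "ifmmp-!xpsme";   /* "hello, world" with each byte + 1 */
    while (*t) putchar(*t++ - 1);
    putchar('\n');
    return 0;
}

Real obfuscators, and the contest entries, go much further, mangling control flow and data layout, often by machine; the point is only that identical behavior can be made arbitrarily painful to read.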

For further data, discussions, and speculations see his papers and presentation at this site.

Open Problems

I liked his talk for raising some controversial points. I think he raises several real open problems. I know there has been theory work on code obfuscation, but I think he is interested in more practical methods—see this.

Another view of what he is saying is this: can we make systems that are so complex that they are very difficult to break, even with the source?

[Chris->Craig Gentry]

15 Comments
  1. May 10, 2010 11:00 am

    Let me disagree with the obfuscation part.

    I’ll just use his argument:

    “He pointed out a really secure system would be useless. Even if we could build a secure system, it would be difficult, if not impossible, to use. For example, a system where secretaries cannot commit “forgery”, sign for their bosses, will not work in the real world.”

    The same happens with obfuscation. Most of the exploits are actually functional ones. If we want to obfuscate the system, we would have to obfuscate the functionality itself, rendering the system useless.

    Also, by considering the source public and understandable, we can assume that we have more eyes going through it and that faults will be discovered and fixed faster, leading to a more correct and thus more secure system; plug in here all those arguments open-source zealots use constantly.

    The only argument that holds for obfuscation and hidden sources/protocols is speed. Skype is an example.

    • rjlipton permalink*
      May 10, 2010 1:05 pm

      I think this is a quite good argument. I find it very strong. Thanks.

  2. Jeremy H permalink
    May 10, 2010 12:37 pm

    This reminds me of a post by Arvind Narayanan (http://33bits.org/2010/02/25/data-privacy-the-story-of-a-paradigm-shift/)

    In describing approaches to the privacy/utility tradeoff, he writes

    With utility-first, you have strong, well-understood guarantees on the usefulness of the data, but typically only a heuristic analysis of privacy. What this translates to is an upper bound on privacy. With privacy-first, you have strong, well-understood privacy guarantees, but you only know how to perform certain types of computations on the data. So you have a lower bound on utility.

    Similarly, worst-case crypto provides an upper bound on vulnerability and a lower bound on utility. As research progresses, the situation can only improve. Andrew’s approach, on the other hand, provides no such long-term guarantee. So while it may be a very useful stop-gap, in the long term I expect worst-case crypto to dominate for the same reasons that differential privacy has become so important.

    • Jeremy H permalink
      May 10, 2010 12:38 pm

      Sorry. The second paragraph was supposed to be in quotes. My HTML apparently needs work.

  3. adam o permalink
    May 10, 2010 4:21 pm

    I think you mean Craig Gentry. Anyway, in my opinion using obfuscation doesn’t violate the enemy knowing the system. The latter would say the enemy knows the obfuscated code and the obfuscating algorithm. If she can’t figure out how the code works then so much the better. But there are a number of interesting works in the theory community showing that getting a provable guarantee of this form is hard. Maybe it would be interesting to post more about this line of work.

  4. Joseph Iacobucci permalink
    May 11, 2010 8:03 am

    One quick observation: it’s fairly arrogant to assume that you can write obfuscated code that someone else cannot read.

    If you can read it to verify that it works the way that you want it to, then someone else can also.

    I might be saying the same thing as the first poster.

    • rjlipton permalink*
      May 11, 2010 1:10 pm

      The idea might be that the transformations are done by machine. Even reading the machine code produced by a state-of-the-art compiler is non-trivial, since compilers can do some pretty strange things to re-order code.

      • Joseph Iacobucci permalink
        May 12, 2010 8:27 am

        We *already* compile our programs, which obviously is not working.

        I was under the impression that the top virus/malware writers do look directly at the machine code. I highly doubt that all vendors are posting their source to virus writer forums.

        In fact, just search on Google for numerous blog posts about how to start reverse engineering someone’s code from machine code. There are even tools that attempt to create stub code in the language of your choice from the machine code.

        I am not saying that it is easy, but fundamentally we have to tell the computer what to do. And the computer is deterministic, so someone else can figure out what the program does. If they do, they might be able to figure out a flaw. Automated programs already exist for finding areas in your code that can host exploits.

        You can raise the cost of understanding the program, but in the end someone can figure it out.

        Question: how do you verify the resulting code that is obfuscated and complex? We still haven’t created a perfect compiler that is bug-free, so how can we expect an obfuscator/complexifier (to make up some words) not to introduce more flaws than it prevents?

        Having learned a bit about my own position by writing these comments, we seem to agree that this is an interesting topic to explore. Restating your summary: “Can we raise the cost of finding and exploiting security flaws to a high enough level as to discourage most exploits?” We might be able to. Similarly to how we handle the security of physical possessions (cars, homes, etc.), the main goal is to make exploits hard/costly rather than prevent them entirely.

  5. May 11, 2010 6:23 pm

    What does that obfuscated C code do?

