
Is Computer Security Possible?

March 8, 2017


The breaks keep on coming…

Holly Dragoo, Yacin Nadji, Joel Odom, Chris Roberts, and Stone Tillotson are experts in computer security. They were recently featured in the GIT newsletter Cybersecurity Commentary.

Today, Ken and I consider how their comments raise a basic issue about cybersecurity. Simply put:

Is it possible?

In the column, they discuss various security breaks that recently happened to real systems. Here are some abstracts from their reports:

  • Odom and Dragoo discuss the Cloudflare data leak, now dubbed “Cloudbleed,” and the finer points of software code gone wrong.

  • Nadji reviews a recent “state of malware” report—what’s new and what’s not. He also explains the problem of “domain residual trust” and how it facilitates the hijacking of Twitter accounts or once-legitimate news links.

  • Stone Tillotson … [calls] for careful consideration of backdoors into mobile devices following the hack of a company believed to have assisted the FBI’s San Bernardino case.

  • Roberts and Odom examine an attack by researchers in the Netherlands that defeated what has long been considered a reliable safeguard in modern microcomputer architectures: address space layout randomization (ASLR).

The last is an attempt to make attacks harder by using randomization to move around key pieces of system data. It seems like a good idea, but Dan Boneh and his co-authors have shown that it can be broken. The group is Hovav Shacham, Eu-Jin Goh, Matthew Page, Nagendra Modadugu, and Ben Pfaff.
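To see what is being randomized, here is a tiny sketch of our own, emphatically not the researchers’ attack: a program that merely prints where its code, data, stack, and heap landed. On a system with ASLR enabled the addresses shift from run to run, and recovering them is the attacker’s first job before a memory-corruption exploit can aim at anything useful.

 // aslr_demo.cpp -- a tiny illustration (not the attack) of what ASLR randomizes.
 // Build as a position-independent executable, e.g.
 //   g++ -fPIE -pie aslr_demo.cpp -o aslr_demo
 // and run it several times: with ASLR on, the printed addresses move around.
 #include <cstdio>
 #include <cstdlib>

 static int global_datum = 42;                    // lives in the data segment
 static void marker() {}                          // any function stands in for "code"

 int main() {
     int on_the_stack = 7;                        // lives on the stack
     void *on_the_heap = std::malloc(16);         // lives on the heap

     std::printf("code  : %p\n", (void*)&marker);
     std::printf("data  : %p\n", (void*)&global_datum);
     std::printf("stack : %p\n", (void*)&on_the_stack);
     std::printf("heap  : %p\n", on_the_heap);

     std::free(on_the_heap);
     return 0;
 }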

Here we talk about the first item at length, plus another item by Odom on the breaking of a famous hash function.

Security Break-Ins

With all due respect to a famous song by Sonny Bono and Cherilyn Sarkisian, “The Beat Goes On”: I have changed it some, but I think it captures the current situation in cybersecurity.

The breaks go on, the breaks go on
Drums keep pounding
A rhythm to the brain
La de da de de, la de da de da

Laptops was once the rage, uh huh
History has turned the page, uh huh
The iPhone’s the current thing, uh huh
Android is our newborn king, uh huh

[Chorus]

Insanity?

A definition of insanity ascribed to Albert Einstein goes:

Insanity is doing the same thing over and over again and expecting different results.

I wonder lately whether we are all insane when it comes to security. Break-ins to systems continue; if anything, they are increasing in frequency. Some of the attacks are so basic that it is hard to believe they still succeed. One example is an attack on a company that is in the business of supplying security to its customers. Some of the attacks use methods that have been known for decades.

Ken especially joined me in being shocked about one low-level detail in the recent “Cloudbleed” bug. The company affected, Cloudflare, posted an article tracing the breach ultimately to these two lines of code that were auto-generated using a well-known parser-generator called Ragel:

 if ( ++p == pe ) 
   goto _test_eof; 

The pointer p is in client hands, while pe is a system pointer marking the end of a buffer. It looks as though p can only be incremented one memory unit at a time, so that it will eventually compare equal to pe and cause control to jump out of the region where the client can govern the HTML being processed. Wrong. Other parts of the code make it possible to enter this test with p > pe, which allows undetected access to unprotected blocks of memory. Not only did memory contents leak out, but private information could be exposed.
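To see the failure pattern in isolation, here is a toy model of our own devising, not Cloudflare’s code; the buffer, its size, and the planted “secret” are all made up. The point is only that an equality test against the end marker is skipped forever once the position has already jumped past it.

 // end_check_sketch.cpp -- a toy model of the failure pattern (ours, not
 // Cloudflare's parser).  Only the first BUF bytes of memory[] belong to the
 // request being parsed; the rest stands in for other tenants' data nearby.
 #include <cstddef>
 #include <cstdio>
 #include <cstring>

 int main() {
     char memory[64];
     std::memset(memory, '.', sizeof(memory));
     std::memcpy(memory + 16, "SECRET-OF-ANOTHER-CUSTOMER", 26);

     const std::size_t BUF = 8;   // the parser's buffer is memory[0..BUF-1]
     std::size_t p = BUF - 1;     // the parser stopped on the last byte ...
     p += 2;                      // ... then a state consumed two bytes: p = 9

     // The generated check: leave only when p is *exactly* at the end.
     while (p + 1 < sizeof(memory)) {
         if (++p == BUF)          // never fires -- p has already passed BUF
             break;
         std::printf("leaked byte %zu: %c\n", p, memory[p]);
     }
     return 0;
 }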

The bug was avoidable by having the code generator emit instead:

 if ( ++p >= pe ) 
   goto _test_eof; 

But we have a more basic question:

Why are such low-level bits of 1960s-vintage code carrying such high-level responsibility for security?

There are oodles of such lines in deployed applications. They are not even up to the level of the standard C++ library, which gives only == and != tests for basic iterators but at least stipulates that an iterator must stay within the bounds of the data structure or sit on its end. Sophisticated analyzers help to find many bugs, but can they keep pace with the sheer volume of code?
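For contrast, here is the same bad jump written against defenses that do hold, in the same toy setting as above (again our own sketch): the >= form of the test, and a bounds-checked accessor that throws rather than reads.

 // checked_sketch.cpp -- the same overshoot, caught two ways: the ">=" test,
 // and std::vector::at(), which throws std::out_of_range instead of quietly
 // reading whatever lies past the end.
 #include <cstddef>
 #include <cstdio>
 #include <stdexcept>
 #include <vector>

 int main() {
     std::vector<char> buf = {'<', 'a', ' ', 'h', 'r', 'e', 'f', '='};
     std::size_t p = buf.size() - 1;
     p += 2;                          // overshoot, as before

     if (++p >= buf.size())           // the inequality catches it
         std::puts("end of buffer reached (>= check)");

     try {
         char c = buf.at(p);          // a checked accessor refuses outright
         std::printf("read %c\n", c);
     } catch (const std::out_of_range &) {
         std::puts("at() threw: out-of-bounds read refused");
     }
     return 0;
 }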

Note: this code was auto-generated, so we not only have to debug actual code but potential code as well. The Cloudflare article makes clear that the bug turned from latent to actual only after a combination of other changes in the surrounding code and how it was used. It concludes with “Some Lessons”:

The engineers working on the new HTML parser had been so worried about bugs affecting our service that they had spent hours verifying that it did not contain security problems.

Unfortunately, it was the ancient piece of software that contained a latent security problem and that problem only showed up as we were in the process of migrating away from it. Our internal infosec team is now undertaking a project to fuzz older software looking for potential other security problems.
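For readers who have not watched one run, a fuzzer simply hammers a piece of code with millions of mutated inputs while sanitizers watch for illegal memory accesses. Here is a minimal libFuzzer-style harness as a sketch of what that looks like; the function parse_html is our own placeholder, not anything from Cloudflare’s codebase.

 // fuzz_parser.cpp -- a minimal libFuzzer-style harness, to give a concrete
 // sense of what "fuzzing older software" involves.  Build with clang roughly as:
 //   clang++ -g -fsanitize=fuzzer,address fuzz_parser.cpp -o fuzz_parser
 #include <cstddef>
 #include <cstdint>
 #include <string>

 // Stand-in for the legacy entry point under test (our placeholder).
 static void parse_html(const std::string &input) {
     for (char c : input) { (void)c; /* real parsing work would go here */ }
 }

 // libFuzzer calls this millions of times with mutated inputs; the address
 // sanitizer flags any out-of-bounds access the parser commits along the way.
 extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
     parse_html(std::string(reinterpret_cast<const char *>(data), size));
     return 0;
 }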

While admitting our lack of expertise in this area, we feel bound to query:

How do we know that today’s software won’t be tomorrow’s “older software” that will need to be “fuzzed” to look for potential security problems?

We are still writing in low-level code. That’s the “insanity” part.

SHA Na Na

My GIT colleagues also comment on Google’s announcement two weeks ago of the feasible production of collisions in the SHA-1 hash function. Google fashioned two PDF files with identical hashes, meaning that once a system has accepted one, the other can be maliciously substituted. They say:

It is now practically possible to craft two colliding PDF files and obtain a SHA-1 digital signature on the first PDF file which can also be abused as a valid signature on the second PDF file… [so that e.g.] it is possible to trick someone to create a valid signature for a high-rent contract by having him or her sign a low-rent contract.
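One can check the collision oneself. Here is a small sketch, assuming OpenSSL is installed (compile with -lcrypto); the helper name is ours. Fed the two PDF files published at shattered.io, it reports one and the same 160-bit digest for two different files.

 // sha1_compare.cpp -- computes the SHA-1 digest of two files and reports
 // whether they match, using OpenSSL's one-shot SHA1().
 #include <cstdio>
 #include <fstream>
 #include <iterator>
 #include <string>
 #include <vector>
 #include <openssl/sha.h>

 static std::string sha1_of_file(const char *path) {
     std::ifstream in(path, std::ios::binary);
     std::vector<unsigned char> bytes((std::istreambuf_iterator<char>(in)),
                                      std::istreambuf_iterator<char>());
     unsigned char md[SHA_DIGEST_LENGTH];
     SHA1(bytes.data(), bytes.size(), md);             // one-shot SHA-1
     char hex[2 * SHA_DIGEST_LENGTH + 1];
     for (int i = 0; i < SHA_DIGEST_LENGTH; ++i)
         std::snprintf(hex + 2 * i, 3, "%02x", md[i]);
     return std::string(hex);
 }

 int main(int argc, char **argv) {
     if (argc != 3) {
         std::fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
         return 1;
     }
     std::string a = sha1_of_file(argv[1]), b = sha1_of_file(argv[2]);
     std::printf("%s  %s\n%s  %s\n", a.c_str(), argv[1], b.c_str(), argv[2]);
     std::puts(a == b ? "the SHA-1 digests COLLIDE" : "the digests differ");
     return 0;
 }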

Now SHA-1 had been under clouds for a dozen years already, since the first demonstration that collisions can be found in expectation faster than by brute force. It is, however, still being used. For instance, Microsoft’s sunset plan called for its phase 2-of-3 to be enacted in mid-2017. Google, Mozilla, and Apple have been doing similarly with their browser certificates. Perhaps the new exploit will force the sunsets into a total eclipse.

Besides SHA-2 there is SHA-3, which is the current gold standard. As with SHA-2 it comes in different digest sizes: 224, 256, 384, or 512 bits, whereas SHA-1 gives only 160 bits. Doubling the digest size does ramp up exponentially the time needed by the attacks that have been conceived so far. Still, the exploit shows what theoretical advances plus unprecedented computational power can do. Odom shows the big picture in a foreboding chart.
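For a rough sense of the scale here (our own back-of-the-envelope, not a figure from the column): a generic birthday attack on an n-bit digest needs about 2^{n/2} hash evaluations, and the announced collision is reported to have cost on the order of 2^{63} SHA-1 computations. So:

 SHA-1  (160 bits): generic bound 2^{80};  the new exploit needed roughly 2^{63}
 SHA-256 (256 bits): generic bound 2^{128}, far beyond any foreseeable computation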

Open Problems

Is security really possible? Or are we all insane?

Ken thinks there are two classes of parallel universes. In one class, the sentient beings originally developed programming languages in which variables were mutable by default and one needed an extra fussy and forgettable keyword like const to make them constant. In the other class, they first thought of languages in which identifiers denoted ideal Platonic objects and the keyword mutable had to be added to make them changeable.

The latter enjoyed the advantage that safer and sleeker code became the lazy coder’s default. The mutable strain was treated as a logical subclass in accord with the substitution principle. Logical relations like Square “Is-A” Rectangle held without entailing that Square.Mutable be a subclass of Rectangle.Mutable, and this further permitted retroactive abstraction via “superclassing.” They developed safe structures for security and dominated their light cones. The first class was doomed.
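Here is a rough C++ rendering of our own of what the second universe enjoys; the class names are made up and nothing here is Ken’s actual proposal. The read-only classes substitute freely, while the setter-equipped variant stays off to the side and never pretends to be part of the Is-A relation.

 // platonic_sketch.cpp -- immutable-by-default in miniature (our own sketch).
 struct Rectangle {                       // the ideal, unchangeable rectangle
     Rectangle(double w, double h) : w_(w), h_(h) {}
     double width()  const { return w_; }
     double height() const { return h_; }
     double area()   const { return w_ * h_; }
 private:
     double w_, h_;
 };

 struct Square : Rectangle {              // safe: nothing here can break w == h
     explicit Square(double s) : Rectangle(s, s) {}
 };

 struct MutableRectangle {                // the mutable strain, kept separate
     double w, h;
     void setWidth(double x)  { w = x; }  // inheriting just this in a
     void setHeight(double x) { h = x; }  // "MutableSquare" would break w == h
     Rectangle frozen() const { return Rectangle(w, h); }  // back to the ideal
 };

 double total_area(const Rectangle &a, const Rectangle &b) {
     return a.area() + b.area();          // happily accepts Squares too
 }

 int main() {
     Square s(3.0);
     MutableRectangle m{2.0, 5.0};
     return total_area(s, m.frozen()) > 0.0 ? 0 : 1;
 }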

[Word changes in the paragraph after the pointer code: “end-user” → “client”, HTML being “coded” → “processed”.]

10 Comments
  1. alanone1
    March 8, 2017 10:58 am

    I love this one!

    Part of the deal is that “locks * money” determines the level of effort required to break a lock. But we also have the problem of people losing keys, having them stolen, or being bribed.

    However, I think it is worth taking another pass at the technical parts of the problems — from scratch, and mainly just going forward. This requires the first sentence above be heeded in both hardware and software (and some great designers are needed as well).

    The Internet doesn’t command (the SW inside a computer has to decide what the passive incoming bits are supposed to mean). If they are not interpreted internally as commands (and I don’t think they should be) then the modules still need to decide what is reasonable for them to try to do in the light of the new information that just arrived.

  2. March 8, 2017 10:32 pm

    like that quote also supposed by einstein but its probably misattributed. it appears to originate in narcotics anonymous literature c1980. https://en.wikiquote.org/wiki/Narcotics_Anonymous

    yes cybersec is really big in the news last few years. my personal theory is that maybe physical warfare is gradually shifting to “infowar”. you didnt even mention the huge cia hacking leaks, news splattering all over right now. other cybersec areas, fake news, and russian election hacking, trump admin surveillance, etc. more on latest cybersec developments

    https://vzn1.wordpress.com/category/cybersec/

  3. March 9, 2017 6:19 am

    I think we are all insane.
    This is an old post from my blog that supports the statement about our insanity.

    http://pantelisrodis.blogspot.gr/2015/06/system-security-as-computational-problem.html?m=1

  4. March 9, 2017 12:42 pm

    A completely secure internet? Perish the thought!

    It is the business of the future to be dangerous; and it is among the merits of science that it equips the future for its duties.
      — Alfred North Whitehead

    To adapt Whitehead to yesterday’s xkcd guidance (specifically the rollover to xkcd #1808 “Hacking” )

    It is the business of the internet to be dangerous; and it is among the merits of ‘bash’ and ‘gcc’ that they equip the future for its duties.
      — cf. Randall Munroe

    They can take away my ‘bash’ and ‘gcc’ only by prying my cold deceased fingers from my keyboard! 🙂

  5. Serge
    March 12, 2017 7:36 pm

    The outcome of a war between two processes, Enemy and Defender, cannot be predicted by a third process, Observer. The laws of mathematics are the same in every computing model, whether quantum or classic, and therefore are speechless as far as time is concerned. In other words, it is not possible to know whether computer security is possible, or whether there are one-way functions, or whether P!=NP…

  6. Call me Chuck the Monk if you want a signature
    January 30, 2018 3:53 pm

    Anyways, one of the worst security risks is communication between computers.

    If all computers stood all alone, unable to communicate with one another, then the security risks would be limited to stochastic variables.

    Of course the builders of the isolated computers always might still slip something nasty even into isolated computers.

    Still, if you are doing computing that you want to keep safe, then I believe that a totally isolated computer might be the thing. Nothing in and more or less what you want out.

Trackbacks

  1. Gender Bias: It Is Worse Than You Think | Gödel's Lost Letter and P=NP
  2. Timing Leaks Everything | Gödel's Lost Letter and P=NP
  3. More Spectre and Meltdown « Pink Iguana
  4. A Clever Way To Find Compiler Bugs | Gödel's Lost Letter and P=NP
