
Making Abstract Data Types Abstract

May 27, 2010


An approach to making systems safe from heap spray attacks

Doug Tygar is an expert on computer security in the broad sense. It started with his thesis work on building secure operating systems. This work was notable for its unique approach to this fundamental problem, partly driven by the views of his famous advisor Michael Rabin. Since then he has continued working on all aspects of security: including attacks against intrusion detectors themselves, covert channels, and spam and botnet protection.

Today I want to talk about some attacks on software systems and an attempt to model them formally, and perhaps prevent them. This is related to some of Doug's research, but the ideas here are still work in progress.

Doug started his career as junior faculty at CMU, where he taught a class on security. One thing about CMU students in general: they are good "hackers." Even theory students are known for being able to write code, and the systems students are seriously good.

At the start of the class, Doug announced to his students he had a great password and that no one would be able to break it. I am not sure why he challenged the class, but he did. It was not a good idea—do not wave red flags at a bull—or in this case at a class full of them. The game was on.

In a couple of days the class had figured out his password. The problem they solved is: given a "one-way" function {f} and a string {h}, find a string {p}, the password, so that

\displaystyle  f(p) = h.

They had used a standard dictionary-based attack: try all the words from some dictionary, and test each {w} to see if {f(w)=h}. Usually these programs are a bit more clever, and can get a password like "99hello" quite easily.
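In outline the attack is just a loop. Here is a minimal sketch in C, with a toy hash standing in for the one-way function {f}; a real attack would call the system's actual password hash instead:

```c
#include <stddef.h>

/* Toy stand-in for the one-way function f; a real attack would call the
   system's actual password hash (for example crypt()). */
static unsigned long f(const char *s) {
    unsigned long h = 5381;                  /* djb2 -- purely illustrative */
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

/* Dictionary attack: try each word w in the list and test whether f(w) = h. */
static const char *crack(const char *dict[], size_t n, unsigned long h) {
    for (size_t i = 0; i < n; i++)
        if (f(dict[i]) == h)
            return dict[i];                  /* password found */
    return NULL;                             /* not in the dictionary */
}
```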

They told Doug his password, and so he changed it to a better one. Game on. They cracked this one too; it took much more computer time, but they got it. Now Doug announced to his class that he had picked a password that would foil these simple dictionary attacks.

But he made a small error: he told them how he had selected the password. He wanted a password he could remember, so he used his knowledge of Japanese. He told them he had used a Japanese phrase as his password. He smiled at the class, and told them to forget about trying to crack this password.

They decided to attack his password, Japanese or not. Of course he had to write his phrase as a series of Roman letters, since the login routine did not accept Japanese characters. There are standard ways to encode Japanese into Roman letters, see this; thus, they built their own special dictionary of Japanese phrases. It was a lot of work, but in a week or so, and with some luck, they again broke his password.

Doug was getting tired of this game of password cracking: he wanted them to work on more important security projects. So he told his class he had changed his password to a random string of letters:

Let’s move on, I win.

Of course this was not the end; recall these were CMU hackers. They knew the attacks using dictionaries were out, but there were other ways to get passwords. In those days, many years ago, the CMU department was using workstations that allowed users to listen in on all the Ethernet traffic on the network. Essentially they could sniff the packets as they went by, whether the packets were meant for them or not.

So they wrote a program that examined each packet. The goal was to find packets that looked like they were part of a remote login. Each time they found one they got a {w} and checked to see if it was Doug’s password. After a few days, a packet went by with his password, and they had him. The key insight was that the operating system did not encrypt passwords when logging into remote machines.

After Doug heard they had his password again, he gave up. Game over; they won. He told them his new password—this stopped all attacks. And the class moved on to more interesting issues concerning the security of systems.

Let’s move on to a class of interesting attacks on systems.

Code Injection

One of the fundamental ways that malware takes over a system is this:

  • First, the malware runs in normal mode and does something legal, but something that will help it break your system. Often this involves writing some data somewhere in memory by legal means.
  • Then, the malware uses some bug, and jumps to a given location in memory.
  • Now, the malware is running its own code and does whatever it wants to do to you.

The second step is critical. Without the bug or hole in your system, the malware would likely be unable to run the code that it wants to run. However, modern software systems are large, complex, and distributed, all of which makes preventing the "jump" step very hard. Thus the need for methods that detect this type of attack, or even protect systems from it.
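As an illustration, and not any particular real exploit, here is the classic shape of such a hole in C: an unchecked write spills past a buffer and clobbers a nearby code pointer, so a later indirect call jumps wherever the attacker chose.

```c
#include <string.h>

struct session {
    char name[16];
    void (*on_close)(void);   /* a code pointer sitting after the buffer */
};

/* BUG: no bounds check. Input longer than 16 bytes spills past name[]
   and overwrites on_close -- the "legal-looking" write of step one. */
void set_name(struct session *s, const char *input) {
    strcpy(s->name, input);
}

/* Later, the indirect call jumps wherever the overwritten pointer points:
   this is the critical "jump" of step two. */
void close_session(struct session *s) {
    s->on_close();
}
```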

Heap Spraying

In early attacks the malware could control the jump step—it could decide exactly where to jump. The software bug being exploited allowed the malware to jump to a specific location. This made the malware's job pretty easy: place its code there and jump.

However, as software engineers have become more aware of attacks, the ways malware can take control have become less predictable. Today many bugs allow malware to jump into memory, but not to a specific location. The challenge is simple: given that malware can jump into memory, how does it make sure it jumps into its own code? It would be useless, from the malware's point of view, to jump into the middle of legal code or data. Such a jump would likely crash the system, but would not allow the malware to take over.

Hence the creation of heap spraying. Sounds like a garden product:

use “heap spray” and all your weeds will go away.

The idea of heap spraying is simple: since the jump is unpredictable, place many copies of the malware code all over memory. Then, when the jump occurs, there will be a good chance that the malware jumps into its own code.

There is one further simple idea to make this work. The malware wants to get to the start of its own code—jumping into the middle is not very useful. So nop sleds are used. Imagine memory is filled with pieces of code like this:

\displaystyle  \alpha, \alpha, \dots, \alpha, \mathsf{\ the\ malware\ code\ here}.

Here {\alpha} is typically a "nop." Note that as long as the jump hits the {\alpha} sequence, the malware's code will be executed properly. Here is an example of how this type of attack looks:
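A minimal sketch in C of the spraying step, with the one-byte x86 nop (0x90) playing the role of {\alpha} and a hypothetical payload buffer:

```c
#include <stdlib.h>
#include <string.h>

#define SLED_SIZE  (1 << 20)     /* one megabyte of nops per block */
#define NUM_BLOCKS 256           /* many copies, sprayed across the heap */

/* Fill the heap with blocks of the form: nop, nop, ..., nop, payload.
   A wild jump that lands anywhere in a sled "slides" into the payload. */
void spray(const unsigned char *payload, size_t len) {
    for (int i = 0; i < NUM_BLOCKS; i++) {
        unsigned char *block = malloc(SLED_SIZE + len);
        if (block == NULL)
            return;
        memset(block, 0x90, SLED_SIZE);          /* the nop sled */
        memcpy(block + SLED_SIZE, payload, len); /* the malware code */
        /* blocks are deliberately never freed: they must stay resident */
    }
}
```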

The Problem: Abstract Data Types

I see the problem as follows: the malware is given legal access to some abstract data type. During the first part of the attack the malware can perform any legal operations on the data type. In the above example, the malware just filled an array—legally—with its sled and code. However, later on the malware performs the jump step. This takes it into memory, and with luck it hits a nop sled and then executes its malware code.

The problem is: the abstract data type is not abstract. A white-hat programmer only uses the correct calls to the data type; the malware, on the other hand, jumps into the middle of the data type. This is an abuse of the data type, and is how the malware gets control of the machine.

Here is a "solution" to the heap spray problem based on the use of theory-type ideas. I say "solution" for the following reason: unless the exact attacks are clearly defined, the solution can still fail—more on this later on. I do think, however, that ideas from theory could easily play a bigger role in the security of modern systems.

I will explain the idea for the simple data type of a one-dimensional array of integers. Imagine some scripting language that allows programs to access arrays. They can execute {a[i] \leftarrow x} to write an array value, and execute {x \leftarrow a[i]} to read one.

The implementation of the array type is changed very slightly. I need to explain how it is created, read, and written.

{\bullet } Create Array: A random integer {R} is selected. The array is set up as usual: let {m[i]} denote the memory locations where the array is stored.

{\bullet } Write Array: Suppose {x} is to be written into {a[i]}. Then, execute

\displaystyle  m[i] \leftarrow x \oplus R.

{\bullet } Read Array: Suppose {x} is to be read from {a[i]}. Then, execute

\displaystyle  y \leftarrow m[i],

and return the value {y \oplus R}.
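Here is a minimal sketch of this data type in C; the use of rand() is a stand-in for a proper source of randomness, and bounds checks are omitted:

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint32_t *m;   /* backing store: holds x XOR R, never the raw x */
    size_t    n;
    uint32_t  R;   /* per-array secret mask, chosen at creation */
} masked_array;

/* Create Array: pick a random R and allocate the store. */
masked_array *ma_create(size_t n) {
    masked_array *a = malloc(sizeof *a);
    if (a == NULL) return NULL;
    a->m = calloc(n, sizeof *a->m);
    a->n = n;
    /* stand-in for a proper entropy source */
    a->R = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
    return a;
}

/* Write Array: m[i] <- x XOR R. */
void ma_write(masked_array *a, size_t i, uint32_t x) {
    a->m[i] = x ^ a->R;
}

/* Read Array: y <- m[i]; return y XOR R.
   Caution: a never-written slot holds 0, so reading it returns
   0 XOR R = R -- exactly the leak discussed below. */
uint32_t ma_read(const masked_array *a, size_t i) {
    return a->m[i] ^ a->R;
}
```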

The point of this is that the values stored in the actual memory are not the "raw" values sent to the data type. They are scrambled to make them appear random. Clearly, a jump now will hit

\displaystyle  \alpha \oplus R, \alpha \oplus R, \dots, \alpha \oplus R, \dots

Thus, a malware program that fills memory with nop sleds and then jumps is likely to hit random garbage. This may cause an exception or even a crash, but is unlikely to allow the malware to take over the system.

Does This Work?

I believe the above is a pretty good "solution." However, it could be defeated if the array allows access to undefined locations. This shows how careful one must be in designing such "solutions." I think whether this particular idea is useful is less important than the idea that we may be able to use crypto-like methods to protect software.

The malware could learn the value of {R} as follows. It would access some as-yet-undefined location and get back {x \oplus R}, where {x} was the value left in the location. For example, if {x=0}, then the malware would learn {R}, and this would defeat the method. Even if {x} were arbitrary, the malware could use this approach to try to learn the secret random value {R}.

There are at least two methods to avoid this attack. We could insist that the data type not allow access to undefined values: it can return "undefined value," but it must monitor whether a location is defined or not. The other possibility is to use a more complex crypto solution. The advantage of the latter is that the data type need not track which locations are defined. I will discuss this method and related issues in a later post.
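For concreteness, here is a hedged sketch of the first method, extending the masked array above with a bitmap that tracks which locations are defined; creation is analogous, with the bitmap allocated alongside the store:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* First method: a side bitmap records which slots have been written,
   so a read of an undefined slot reports "undefined value" instead of
   unmasking leftover memory -- and R never leaks. */
typedef struct {
    uint32_t *m;
    bool     *defined;   /* defined[i] is true once a[i] has been written */
    size_t    n;
    uint32_t  R;
} checked_array;

void ca_write(checked_array *a, size_t i, uint32_t x) {
    if (i >= a->n) return;        /* out of range: ignored in this sketch */
    a->m[i] = x ^ a->R;
    a->defined[i] = true;
}

/* Returns false ("undefined value") rather than exposing 0 XOR R. */
bool ca_read(const checked_array *a, size_t i, uint32_t *out) {
    if (i >= a->n || !a->defined[i])
        return false;
    *out = a->m[i] ^ a->R;
    return true;
}
```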

Stopping Heap Spray—Other Approaches

There are several other methods for protecting against heap spray attacks. One project is called Nozzle, and is the work of Paruj Ratanaworabhan, Benjamin Livshits, and Benjamin Zorn of Microsoft. This method is really an intrusion detection method: it watches the memory and reports if it detects the presence of objects that contain executable code. One problem with this method is the existence of false positives: they note that data stored by programs is often legal code.

Another project is called BuBBle, and is the work of Francesco Gadaleta, Yves Younan, and Wouter Joosen. They use a method very close to the one I am suggesting. They replace parts of memory with random pieces; these random pieces destroy the memory left by the attacker. This causes the malware to raise an exception when it attempts to execute that part of memory. One problem is that they modify only some of memory; another is that they need additional storage for the actual values of the modified memory.

Open Problems

Can theory help in other ways in building secure operating systems? Does the idea here have any use in fighting heap spray attacks? Can we use more crypto in making secure systems?

16 Comments
  1. May 27, 2010 10:51 am

    This is a great story.

    The password challenge is really funny. As you indicated, people should really think twice before waving the red flag. It can cause nothing but trouble for everyone.

    One thing to think about is why heap overflows and heap sprays are even possible. No one should intentionally be writing self-modifying code, so why isn’t code memory locked? Even if data does contain executable code, why is data in memory executable? Why do executable code and data exist on the same memory chip? Why is there no such thing as data-only memory chips?

    Who says you can’t re-engineer the CPU, the memory and the OS to make heap overflows and heap sprays impossible?

    Re-engineering is the key to solving this problem – and the many other problems we’re having with hackers. People are approaching the problem in the wrong way. It’s like trying to prevent people from hitting deer that have wandered onto the road. Prevent the deer from wandering onto the road and you’ve solved your problem.

    • Matt
      May 28, 2010 12:15 pm

      I believe the main reason is slow adoption of technology, and compatibility. There already exist mechanisms to declare swaths of memory as unexecutable (the NX bit in amd64 and PAE). I believe that most major operating systems already use this for the stack (and possibly the heap?).

      One issue is that JIT compiling interpreters need to generate executable machine code, and put it somewhere. This makes them somewhat incompatible with no-execute memory.

      • Spudd86
        July 11, 2011 10:49 am

        Hence we have madvise (and a rough equivalent on Windows, and others).

      • July 11, 2011 11:55 pm

        @Matt – You’ll be shocked to know that most of your mobile phones (including Android and iPhone) don’t turn on NX, although the processor does support it.
        Another risk is Return Oriented Programming, which defeats NX and has been in use for some time now.

  2. May 27, 2010 11:09 am

    Do formal methods count as theory here? I believe there have been a variety of suggestions on how they can help.

  3. meme
    May 28, 2010 7:53 pm

    Preventing the bug is very easy; in fact, most languages don’t allow you to overwrite code. It doesn’t matter how complex your system is: as long as it doesn’t use a memory-unsafe language, you don’t have to worry about heap and stack overflows.

    In the languages where it *is* a problem (mostly C and C++), introducing your fix would break many existing programs, because it is perfectly valid to generate code and execute it.

    • rjlipton
      May 29, 2010 9:10 pm

      The languages involved are not C or C++. They are languages that you allow anyone to run: languages like PDF and various scripting languages. The point is these languages are supposed to be more restrictive, but they are not. That is the issue. Anyone who allows an unknown person to run C on their machine is of course leaving themselves open to easy attacks.

  4. gwern
    May 29, 2010 8:16 pm

    > Here {\alpha} is typically a “nop.” Note that as long as the jump hits the {\alpha} sequence, the malware’s code will be executed properly. Here is an example of how this type of attack looks:

    This is kind of opaque to anyone not already familiar with nops and what a jmp into a long sequence of nops implies.

    May I suggest you use an analogy with _Snakes and Ladders_?

  5. Anonymous
    May 31, 2010 10:51 pm

    Very sad news: Avner Magen was killed.
    http://www.cbc.ca/canada/north/story/2010/05/31/denali-avalanche-deaths.html

  6. June 1, 2010 12:45 am

    My personal opinion, and I might be wrong – I don’t think we can make anything secure – we can only make things harder to crack.
    While at my previous company, I had a few ideas to make things tougher for folks to break in – I wanted to close in on a few vulnerabilities of debuggers and tracers, making them more difficult to crack. The evaluator kept saying that the idea is not good and it can be broken – I finally gave up on him and challenged him to show one idea on security that was not broken. I have yet to hear from him.

    Even if you make things cryptographically impossible to crack, people will come up with novel solutions:
    http://www.zdnet.com/blog/security/researchers-hack-wired-keyboards-hijack-keystrokes/2042
    http://www.engadget.com/2010/03/09/1024-bit-rsa-encryption-cracked-by-carefully-starving-cpu-of-ele/

  7. Ross Snider
    June 1, 2010 2:26 pm

    The simple solution presented here is not sufficient, since XOR on 8 bits gives the attacker a > 1/256 chance of success (there are many varieties of NOP – even the one presented as an example isn’t a formal NOP; it increments a register in x86). In certain cases, attackers can even try executing all possible shellcodes. Additionally, a NOP sled can be written by anything. It could even be a file name (with alphanumeric shellcode). Javascript read/writes aren’t the only way to create a sled. Due to the complex nature of browsers, a sled could (for example) be created by Flash or Java on the same machine and then exploited by a javascript stack vuln.

    • rjlipton
      June 1, 2010 5:33 pm

      First, I never said read/writes were the only way to corrupt memory.

      Second, even reducing the odds to 1/256 would help, since each attempt would then cause a crash with high likelihood. And if we mask 32 bits or more with R, the odds drop to one in a few billion: you need 4 masked nops in a row to come out right.

      Third, you make a good point. There are so many holes it is a bit funny trying to fill them. I have thought the ideas here might actually be better for making code safer against white-hat errors: honest programmers do not build nop sleds, but they may still misuse the memory.

      Thanks for the comment.

  8. June 2, 2010 9:37 am

    I am one of the authors of the second paper you cite above. Your idea is interesting, and was actually our first idea, but from our first tests it turned out to be too slow, which is why we decided to replace characters at random positions. Please note your approach will also need additional storage: if you have a key per string, you will need extra storage for that key.

    An approach that does something similar to your suggestion above is Data Space Randomization, by Bhatkar and Sekar: http://seclab.cs.sunysb.edu/seclab/pubs/dsr.pdf.

  9. June 30, 2010 5:57 pm

    I am the first author of the paper which presents BuBBle. We chose to interrupt Javascript strings at random locations also so that we could detect the attack.
    By encrypting/XORing the whole string, the application would just crash when the attack occurs.
    With our solution, when the CPU executes the 0xCC sequence at a random location (which means that an attack is in progress), an exception is raised and handled by the application, which alerts the user to close the browser.

    Regards.
    Francesco

    • rjlipton permalink*
      June 30, 2010 8:21 pm

      Thanks for the comment.

