This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] malloc/malloc.c: Mitigate null-byte overflow attacks




On 10/30/2017 07:28 AM, Florian Weimer wrote:
On 10/27/2017 05:17 AM, Moritz Eckert wrote:
diff --git a/malloc/malloc.c b/malloc/malloc.c
index d3fcadd20e..73fc98d2c3 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -1426,6 +1426,12 @@ typedef struct malloc_chunk *mbinptr;
        }                                          \
  }
+/* Check if prev_size is consistent with size.  */
+#define prev_size_is_sane(p, prevsize) {                   \
+    if (__builtin_expect (prevsize !=  chunksize (p), 0))              \
+        malloc_printerr ("corrupted size vs. prev_size");              \
+}
+
  /*
     Indexing
@@ -4227,6 +4233,7 @@ _int_free (mstate av, mchunkptr p, int have_lock)
        prevsize = prev_size (p);
        size += prevsize;
        p = chunk_at_offset(p, -((long) prevsize));
+      prev_size_is_sane(p, prevsize)
        unlink(av, p, bck, fwd);
      }

More context for the unpatched code:

     /* consolidate backward */
     if (!prev_inuse(p)) {
       prevsize = prev_size (p);
       size += prevsize;
       p = chunk_at_offset(p, -((long) prevsize));
       unlink(av, p, bck, fwd);
     }

I'm not really familiar with the malloc code, so I'll try to summarize my understanding.

p is the pointer being freed.  I assume the goal is to mitigate a single-byte overwrite of the lowest byte of the chunk header for p.  The overwrite clears the PREV_INUSE bit, and therefore triggers reinterpretation of the allocated object contents as heap metadata because free will now read the previous size (prev_size) at p.  The value of prev_size is very likely under the control of the exploit writer because it comes *before* the overwritten byte.  So I think it would still be feasible to install a fake chunk and conduct an unlink-based attack.
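
To make that scenario concrete, here is a minimal, self-contained sketch.  It assumes a little-endian host, and the struct below is a simplified stand-in for glibc's chunk header, not the real malloc_chunk:

#include <stdio.h>
#include <stddef.h>

/* Toy chunk header: prev_size sits immediately before size, exactly
   like in glibc, so an off-by-one NUL write past the end of the
   previous allocation lands on the low byte of size.  */
struct toy_header
{
  size_t prev_size;   /* only meaningful when PREV_INUSE is clear */
  size_t size;        /* low bits hold flags: PREV_INUSE is 0x1 */
};

#define PREV_INUSE 0x1

int
main (void)
{
  /* Chunk of size 0x100 whose predecessor is in use; in that state
     prev_size overlaps attacker-writable application data.  */
  struct toy_header hdr = { 0xdeadbeef, 0x100 | PREV_INUSE };

  /* The single-NUL overflow: one zero byte written past the end of
     the previous object clears the low byte of size.  On little
     endian that clears PREV_INUSE without changing the size itself.  */
  ((unsigned char *) &hdr.size)[0] = 0x00;

  /* free () would now believe the previous chunk is free and trust
     the attacker-controlled prev_size for backward consolidation.  */
  printf ("size %#zx, PREV_INUSE %s, trusted prev_size %#zx\n",
          hdr.size & ~(size_t) PREV_INUSE,
          (hdr.size & PREV_INUSE) ? "set" : "cleared",
          hdr.prev_size);
  return 0;
}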

Unfortunately, there is a misunderstanding here.
This is not a mitigation against unlink attacks in general, and the overflow is not into the lowest byte of p's size.  The overflow happens into the size field of the 'real' previous chunk of p, so that a prev_size update is lost and p carries a stale prev_size at the time of the unlink, but that value is not attacker-controlled.  That's important because, as you pointed out, an unlink against a fake chunk is not preventable by these checks.

As far as I can tell, there are three ways to mitigate single-NUL write attacks reliably:

(1) Store the chunk size (with the PREV_INUSE bit) in something other than little-endian format.  We could use big endian, or simply a rotated representation, whichever has lower run-time overhead.  (This is something which could be done as part of the heap protector, or by itself; the existing accessor macros should directly support this.  A rough sketch follows after this list.)

(2) Flip the meaning of the PREV_INUSE bit, so that the attacker would have to write a non-NUL byte.

(3) Always allocate at least one byte more than the application needs, so that single-byte overflows never touch metadata.  The heap arena layout does not need a full word of metadata for allocated chunks:

   * 32-bit: 1 MiB per heap, minimum allocation 8, 17 bit counter
   * 64-bit: 64 MiB per heap, minimum allocation 16, 22 bit counter

So we could compress the header to 3 bytes: We can encode the IS_MMAPPED bit with an otherwise impossible size counter and store the actual chunk size in a separate field.  The PREV_INUSE and NON_MAIN_ARENA bits would still be needed.  And we would have to use mmap for large allocations instead of sbrk even on the main arena.
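
For idea (1), here is a rough standalone sketch of a rotated size representation.  The rotation amount and helper names are invented for illustration; in glibc this would hide behind the existing accessor macros (chunksize, set_head and friends):

#include <stdio.h>
#include <stddef.h>

#define SIZE_ROT 8                       /* hypothetical: one byte */
#define WORD_BITS (sizeof (size_t) * 8)

/* Store the size word rotated left by one byte, so the flag bits and
   the low size byte no longer live in the byte a NUL overflow hits.  */
static inline size_t
encode_size (size_t sz)
{
  return (sz << SIZE_ROT) | (sz >> (WORD_BITS - SIZE_ROT));
}

static inline size_t
decode_size (size_t stored)
{
  return (stored >> SIZE_ROT) | (stored << (WORD_BITS - SIZE_ROT));
}

int
main (void)
{
  size_t size = 0x110 | 0x1;             /* size with PREV_INUSE set */
  size_t stored = encode_size (size);

  /* On little endian a single-NUL overflow hits the stored word's
     lowest byte.  After rotation that byte holds the topmost size
     bits, which are zero for any realistic chunk, so the write is
     harmless; a non-zero top byte would instead shrink the size
     drastically, which is far more likely to trip the existing size
     sanity checks.  */
  ((unsigned char *) &stored)[0] = 0x00;

  size_t decoded = decode_size (stored);
  printf ("decoded %#zx, PREV_INUSE %s\n",
          decoded, (decoded & 0x1) ? "intact" : "cleared");
  return 0;
}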

I like those ideas; the first two seem straightforward to integrate.  If that's something you guys want, I can write a patch for it.

Thanks,
Moritz

