This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] Fix integer overflows in internal memalign and malloc functions [BZ #22343]
- From: Carlos O'Donell <carlos at redhat dot com>
- To: Paul Eggert <eggert at cs dot ucla dot edu>, Arjun Shankar <arjun dot is at lostca dot se>, libc-alpha at sourceware dot org
- Date: Thu, 4 Jan 2018 15:07:13 -0800
- Subject: Re: [PATCH] Fix integer overflows in internal memalign and malloc functions [BZ #22343]
- References: <20180104170250.GA72870@aloka.lostca.se> <7e9c6bd0-ddb8-72fa-4cde-6597c3d8641e@cs.ucla.edu>
On 01/04/2018 12:32 PM, Paul Eggert wrote:
> Arjun Shankar wrote:
>> + (sz) = request2size (req); \
>> + if (((sz) < (req))
>
> This does not work correctly if sz is signed, which is allowed by the
> current API: INTERNAL_SIZE_T is allowed to be signed, and sz might be
> INTERNAL_SIZE_T.
Good catch. I had not noticed that malloc-internal.h could override
INTERNAL_SIZE_T and make the above check operate on a signed type, for
which such overflow is undefined behaviour.
The reason I had not considered it is that we have *tons* of checks that
would break if this type were actually made signed, so the fix you
request is correct.
> Perhaps the best fix would be to remove INTERNAL_SIZE_T and replace
> it with size_t everywhere; there's no real reason for
> INTERNAL_SIZE_T. Alternatively, we could change the documentation for
> INTERNAL_SIZE_T to say that it must be an unsigned type; this would
> be a simpler patch.
I agree.
>> + /* Check for overflow. */
>> + if (nb > SIZE_MAX - alignment - MINSIZE)
>> + {
>> + __set_errno (ENOMEM);
>> + return 0;
>> + }
>
> This causes the code to do an unnecessary overflow check, as it
> already checked nb against a (wrong) upper bound earlier.
There is potential overflow from two sources: the minimum size and
the alignment. There is some computational overlap between the two
cases.
> How about something like the attached patch for the code, instead? (I
> haven't tested this at all.)
That looks pretty good, and it is a sound suggestion.
It would allow further cleanups, and with a constant MALLOC_ALIGNMENT
the compiler can optimize as required.
I would suggest the following next steps:
* Arjun, give Paul's patch a try with my suggestions below, and see
  if it passes your regression test.
* Repost as v2; please include Paul's name in the ChangeLog for his
  suggestions, i.e. credit both your name and Paul's.
> 0001-BZ-22343.patch
>
>
> From 0c48d38733ba40c9f0f6561c9736e6b98da930de Mon Sep 17 00:00:00 2001
> From: Paul Eggert <eggert@cs.ucla.edu>
> Date: Thu, 4 Jan 2018 12:25:14 -0800
> Subject: [PATCH] [BZ #22343]
>
> * malloc/malloc.c (usize2tidx): Remove; unused.
> (requestalign2size): New macro. Define request2size in its terms.
> (checked_requestalign2size): New macro.
> Define checked_request2size in its terms.
> (_int_memalign): Use it.
> ---
> ChangeLog | 8 ++++++++
> malloc/malloc-internal.h | 15 +++++----------
> malloc/malloc.c | 31 ++++++++++++++++++-------------
> 3 files changed, 31 insertions(+), 23 deletions(-)
>
> diff --git a/ChangeLog b/ChangeLog
> index 5be91cf978..0b422836bb 100644
> --- a/ChangeLog
> +++ b/ChangeLog
> @@ -1,3 +1,11 @@
> +2018-01-04 Paul Eggert <eggert@cs.ucla.edu>
> +
> + * malloc/malloc.c (usize2tidx): Remove; unused.
> + (requestalign2size): New macro. Define request2size in its terms.
> + (checked_requestalign2size): New macro.
> + Define checked_request2size in its terms.
> + (_int_memalign): Use it.
> +
> 2018-01-04 Florian Weimer <fweimer@redhat.com>
>
> [BZ #22667]
> diff --git a/malloc/malloc-internal.h b/malloc/malloc-internal.h
> index ad054508ee..8d0a68b78b 100644
> --- a/malloc/malloc-internal.h
> +++ b/malloc/malloc-internal.h
> @@ -27,9 +27,7 @@
>
> The default version is the same as size_t.
>
> - While not strictly necessary, it is best to define this as an
> - unsigned type, even if size_t is a signed type. This may avoid some
> - artificial size limitations on some systems.
> + This type must be unsigned.
OK.
>
> On a 64-bit machine, you may be able to reduce malloc overhead by
> defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
> @@ -40,17 +38,14 @@
> potential advantages of decreasing size_t word size.
>
> Implementors: Beware of the possible combinations of:
> - - INTERNAL_SIZE_T might be signed or unsigned, might be 32 or 64 bits,
> + - INTERNAL_SIZE_T might be 32 or 64 bits,
OK.
> and might be the same width as int or as long
> - size_t might have different width and signedness as INTERNAL_SIZE_T
> - int and long might be 32 or 64 bits, and might be the same width
>
> - To deal with this, most comparisons and difference computations
> - among INTERNAL_SIZE_Ts should cast them to unsigned long, being
> - aware of the fact that casting an unsigned int to a wider long does
> - not sign-extend. (This also makes checking for negative numbers
> - awkward.) Some of these casts result in harmless compiler warnings
> - on some systems. */
> + Take care when comparing INTERNAL_SIZE_T to signed integer values,
> + as the comparisons might (or might not) treat negative signed
> + values as if they were large unsigned values. */
> #ifndef INTERNAL_SIZE_T
> # define INTERNAL_SIZE_T size_t
> #endif
OK.
> diff --git a/malloc/malloc.c b/malloc/malloc.c
> index 48106f9bd4..9e8a10c9f5 100644
> --- a/malloc/malloc.c
> +++ b/malloc/malloc.c
> @@ -312,8 +312,6 @@ __malloc_assert (const char *assertion, const char *file, unsigned int line,
>
> /* When "x" is from chunksize(). */
> # define csize2tidx(x) (((x) - MINSIZE + MALLOC_ALIGNMENT - 1) / MALLOC_ALIGNMENT)
> -/* When "x" is a user-provided size. */
> -# define usize2tidx(x) csize2tidx (request2size (x))
OK.
>
> /* With rounding and alignment, the bins are...
> idx 0 bytes 0..24 (64-bit) or 0..12 (32-bit)
> @@ -1220,18 +1218,27 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
> /* pad request bytes into a usable size -- internal version */
>
> #define request2size(req) \
> - (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE) ? \
> - MINSIZE : \
> - ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)
> + requestalign2size(req, MALLOC_ALIGNMENT)
> +#define requestalign2size(req, align) \
> + (((req) + SIZE_SZ + (align) - 1 < MINSIZE) \
> + ? MINSIZE \
> + : ((req) + SIZE_SZ + (align) - 1) & ~((align) - 1))
Bespoke alignment code. Use ALIGN_UP or PTR_ALIGN_UP from libc-pointer-arith.h.
>
> /* Same, except also perform argument check */
>
> #define checked_request2size(req, sz) \
> - if (REQUEST_OUT_OF_RANGE (req)) { \
> - __set_errno (ENOMEM); \
> - return 0; \
> - } \
> - (sz) = request2size (req);
> + checked_requestalign2size(req, MALLOC_ALIGNMENT, sz)
> +#define checked_requestalign2size(req, align, sz) \
> + do \
> + { \
> + (sz) = requestalign2size (req, align); \
> + if ((sz) < (req) || REQUEST_OUT_OF_RANGE (sz)) \
> + { \
> + __set_errno (ENOMEM); \
> + return 0; \
> + } \
> + } \
> + while (0)
OK.
>
> /*
> --------------- Physical chunk operations ---------------
> @@ -4662,9 +4669,7 @@ _int_memalign (mstate av, size_t alignment, size_t bytes)
> unsigned long remainder_size; /* its size */
> INTERNAL_SIZE_T size;
>
> -
> -
> - checked_request2size (bytes, nb);
> + checked_requestalign2size (bytes, alignment, nb);
OK.
>
> /*
> Strategy: find a spot within that chunk that meets the alignment
> -- 2.14.3
--
Cheers,
Carlos.