This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] [v4] malloc: Consistently apply trim_threshold to all heaps [BZ #17195]


On Wed, Feb 18, 2015 at 02:26:20PM +0100, Julian Taylor wrote:
> On 02/18/2015 11:29 AM, Mel Gorman wrote:
> 
> > diff --git a/malloc/arena.c b/malloc/arena.c
> > index 886defb074a2..a78d4835a825 100644
> > --- a/malloc/arena.c
> > +++ b/malloc/arena.c
> > @@ -696,7 +696,7 @@ heap_trim (heap_info *heap, size_t pad)
> >      }
> >    top_size = chunksize (top_chunk);
> >    extra = (top_size - pad - MINSIZE - 1) & ~(pagesz - 1);
> > -  if (extra < (long) pagesz)
> > +  if (extra < (long) mp_.trim_threshold)
> >      return 0;
> >  
> >    /* Try to shrink. */
> > 
> 
> 
> I am not very familiar with how the allocator works,

This is the first time in years I've looked so take this with a grain
of salt. Others with more experience of the allocator may chime in with
corrections.

> what now happens
> with this change when you are creating lots of temporary threads that
> allocate memory below the trim threshold?

The exact impact would depend on the configured trim threshold. In
theory, nothing stops a user from setting this very large and creating
single-threaded processes that achieve the same effect.

Assuming the defaults are used and allocations stay below the default trim
threshold, then each thread can retain up to 128K of useless data. It changes
with dynamic heap tuning, but let's take the simple case. At that point,
the maximum amount of wastage is bounded by M_ARENA_MAX, which is 8
by default. So by default, the maximum wastage is low even for a carefully
crafted application. The worst-case situation can be tuned by setting the trim
threshold or arena_max, but there are easier ways to waste memory.

Furthermore, as the application should not reference the memory (it's freed),
the wasted memory will eventually be swapped out if swap is available. A
hostile application could use knowledge of the allocator to use-after-free the
memory and keep it resident, but that would be very obvious in top. Overall,
it would be easier to interfere with a machine by just calling mmap() for a
large anonymous buffer. This applies both to a carefully crafted application
and to one that happens to free just below the threshold and never mallocs
again. Overall, I think the worst-case impact is marginal.

> As I understand it each thread has its own arena, as they are now not
> trimmed as often anymore is that arena memory lost when the thread ends?

No, when a process exits, the memory is given back to the system
unconditionally. If it's a thread that's joined then the wastage depends
on whether another thread uses that heap or not.

> I recall something along these lines for the original reasoning why the
> trim was not applied to thread arenas.

Is there any chance you can remember keywords from the discussion? The
changelog is completely devoid of information. Worse, it would have been
harder to hit this problem originally because new arenas were only created
when contention occurred. If there was strong motivation for the pagesz
value, then it's possible that it no longer applies.

-- 
Mel Gorman
SUSE Labs

