This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: pthread wastes memory with mlockall(MCL_FUTURE)


On Fri, Sep 18, 2015 at 11:27:34AM +0100, Balazs Kezes wrote:
> Hi!
> 
> I've run into the following problem: Whenever a new thread is created,
> pthread creates some guard pages next to its stack. These guard pages
> are usually empty zero pages, and have all their permissions removed --
> nothing can read/write/execute on these pages.
> 
> The problem is that the application I use has a large number of threads
> and uses mlockall(MCL_FUTURE) so this messes up the memory usage
> calculation (rss) for the application which then leads to memory wasted.
> 
> Would it make sense for glibc to munlock these pages? I'm thinking
> something like this (although I haven't tested it yet):
> 
> diff --git a/nptl/allocatestack.c b/nptl/allocatestack.c
> index 753da61..1fc715c 100644
> --- a/nptl/allocatestack.c
> +++ b/nptl/allocatestack.c
> @@ -659,6 +659,11 @@ allocate_stack (const struct pthread_attr *attr, struct pthread **pdp,
>  
>  	      return errno;
>  	    }
> +	  /* The guard pages should not be locked into memory.  Otherwise,
> +	     with many threads and mlockall(MCL_FUTURE) in effect, a lot of
> +	     memory would be wasted unnecessarily.  We ignore any errors
> +	     because we cannot do anything about them anyway.  */
> +	  (void) munlock (guard, guardsize);

I would say it's a kernel bug if PROT_NONE pages actually occupy
resources when locked -- do they? How did you test/measure this?

Rich

