This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Reduce number of mmap calls from __libc_memalign in ld.so


On Sat, Apr 2, 2016 at 10:33 AM, Mike Frysinger <vapier@gentoo.org> wrote:
> On 02 Apr 2016 08:34, H.J. Lu wrote:
>> __libc_memalign in ld.so allocates one page at a time and tries to
>> optimize consecutive __libc_memalign calls by assuming that the next
>> mmap lands immediately after the current allocation.
>>
>> However, the kernel hands out mmap addresses in top-down order, so
>> in practice this optimization never kicks in; the result is more
>> mmap calls and wasted space on every __libc_memalign call.
>>
>> This change makes __libc_memalign mmap one extra page.  In the worst
>> case the kernel never puts a backing page behind it; in the best case
>> the leftover space satisfies subsequent calls without another mmap.
>> For elf/tst-align --direct, this reduces the number of mmap calls
>> from 12 to 9.
>>
>> --- a/elf/dl-minimal.c
>> +++ b/elf/dl-minimal.c
>> @@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
>>           return NULL;
>>         nup = GLRO(dl_pagesize);
>>       }
>> +      nup += GLRO(dl_pagesize);
>
> Should this be in the else case?
>
> Also, the comment above this code needs updating.
> -mike

You are right: the nup == 0 fallback is only reached for a zero-size
request, where a single fresh page already suffices, so the extra page
belongs in the else branch.  I have updated the comment as well.
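
The top-down behavior itself is easy to observe.  Below is a minimal
standalone demo (not part of the patch; the exact addresses depend on
the kernel and its VM layout) that maps two anonymous pages and prints
their addresses.  Under Linux's default top-down mmap layout the
second mapping typically lands below the first, which is why the old
next-page guess almost never pays off:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  size_t page = (size_t) sysconf (_SC_PAGESIZE);
  void *a = mmap (NULL, page, PROT_READ | PROT_WRITE,
                  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  void *b = mmap (NULL, page, PROT_READ | PROT_WRITE,
                  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  if (a == MAP_FAILED || b == MAP_FAILED)
    return 1;
  printf ("first mmap:  %p\nsecond mmap: %p\n", a, b);
  /* On a typical Linux system the second address is the lower one,
     so betting that the next mmap follows the current allocation
     loses.  */
  return 0;
}

Here is the updated patch.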

-- 
H.J.
From d56ca4f3269e47cba3e8d22ba8e48cd20d470757 Mon Sep 17 00:00:00 2001
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Sat, 2 Apr 2016 08:25:31 -0700
Subject: [PATCH] Reduce number of mmap calls from __libc_memalign in ld.so

__libc_memalign in ld.so allocates one page at a time and tries to
optimize consecutive __libc_memalign calls by assuming that the next
mmap lands immediately after the current allocation.

However, the kernel hands out mmap addresses in top-down order, so
in practice this optimization never kicks in; the result is more
mmap calls and wasted space on every __libc_memalign call.

This change makes __libc_memalign mmap one extra page.  In the worst
case the kernel never puts a backing page behind it; in the best case
the leftover space satisfies subsequent calls without another mmap.
For elf/tst-align --direct, this reduces the number of mmap calls
from 12 to 9.

	* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
---
 elf/dl-minimal.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 762e65b..8bffdc7 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -66,7 +66,8 @@ __libc_memalign (size_t align, size_t n)
 
   if (alloc_ptr + n >= alloc_end || n >= -(uintptr_t) alloc_ptr)
     {
-      /* Insufficient space left; allocate another page.  */
+      /* Insufficient space left; allocate another page plus one extra
+	 page to reduce the number of mmap calls.  */
       caddr_t page;
       size_t nup = (n + GLRO(dl_pagesize) - 1) & ~(GLRO(dl_pagesize) - 1);
       if (__glibc_unlikely (nup == 0))
@@ -75,6 +76,8 @@ __libc_memalign (size_t align, size_t n)
 	    return NULL;
 	  nup = GLRO(dl_pagesize);
 	}
+      else
+	nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
 		     MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)
-- 
2.5.5
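
For reference, here is an illustrative sketch of the sizing logic
after this change.  It is not code from dl-minimal.c: PAGESIZE is an
assumed 4 KiB stand-in for GLRO(dl_pagesize), and pages_to_map is a
made-up helper name.

#include <stddef.h>

#define PAGESIZE ((size_t) 4096)  /* assumed page size */

/* Return the number of bytes to mmap for a request of N bytes,
   or 0 if the request is too large to satisfy.  */
static size_t
pages_to_map (size_t n)
{
  /* Round N up to a whole number of pages.  */
  size_t nup = (n + PAGESIZE - 1) & ~(PAGESIZE - 1);
  if (nup == 0)
    {
      /* Either N was 0, or N was so close to SIZE_MAX that the
         round-up overflowed; only the former gets a page.  */
      if (n != 0)
        return 0;
      nup = PAGESIZE;
    }
  else
    /* One extra page so the next small request can be carved out
       of the leftover space without another mmap call.  */
    nup += PAGESIZE;
  return nup;
}

For example, n = 100 yields 8192 bytes: one page covering the request
itself plus one spare page for subsequent allocations.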

