This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


[PATCH] Reduce number of mmap calls from __libc_memalign in ld.so


__libc_memalign in ld.so allocates one page at a time and tries to
optimize consecutive __libc_memalign calls by hoping that the next
mmap lands right after the end of the current allocation, so the unused
tail of the previous page can still be handed out.
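
Roughly, the allocator keeps a bump pointer and only calls mmap when the
current region is exhausted; if the fresh mapping happens to start exactly
where the old region ends, the old tail is kept instead of discarded.  A
simplified sketch of that logic (the variable names alloc_ptr/alloc_end
follow elf/dl-minimal.c, but sketch_memalign, the sysconf call and the
omitted overflow/zero-size checks are illustrative, not the real code):

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static void *alloc_ptr, *alloc_end;   /* current bump-allocation region */

void *
sketch_memalign (size_t align, size_t n)
{
  size_t pagesize = sysconf (_SC_PAGESIZE);

  /* Round the bump pointer up to the requested alignment.  */
  alloc_ptr = (void *) (((uintptr_t) alloc_ptr + align - 1) & ~(align - 1));

  if ((char *) alloc_ptr + n > (char *) alloc_end)
    {
      /* Out of room: map just enough whole pages for this request.  */
      size_t nup = (n + pagesize - 1) & ~(pagesize - 1);
      void *page = mmap (0, nup, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
      if (page == MAP_FAILED)
        return NULL;
      /* The optimization in question: if the new mapping starts exactly
         at the end of the old region, keep using the old tail; otherwise
         throw it away.  */
      if (page != alloc_end)
        alloc_ptr = page;
      alloc_end = (char *) page + nup;
    }

  void *block = alloc_ptr;
  alloc_ptr = (char *) alloc_ptr + n;
  return block;
}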

However, the kernel hands out mmap addresses in top-down order, so in
practice this optimization never kicks in: we end up making more mmap
calls and wasting the unused tail of a page on each __libc_memalign call.
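
This is easy to observe outside ld.so.  A minimal stand-alone demo (not
part of the patch; it assumes the kernel's default top-down mmap layout
with no earlier holes being reused):

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  size_t pagesize = sysconf (_SC_PAGESIZE);
  void *first = mmap (0, pagesize, PROT_READ | PROT_WRITE,
                      MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  void *second = mmap (0, pagesize, PROT_READ | PROT_WRITE,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
  /* With the default top-down layout the second mapping is placed at a
     lower address than the first, so it can never be the page "after"
     the previous allocation.  */
  printf ("first:  %p\nsecond: %p\n", first, second);
  return 0;
}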

This change makes __libc_memalign mmap one extra page.  In the worst
case the kernel never puts a backing page behind the extra page; in the
best case it lets subsequent __libc_memalign calls be served from that
page without another mmap.  For elf/tst-align --direct, it reduces the
number of mmap calls from 12 to 9.

Tested on x86-64.  OK for master?

H.J.
---
	* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
---
 elf/dl-minimal.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 762e65b..d6f87f1 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
 	    return NULL;
 	  nup = GLRO(dl_pagesize);
 	}
+      nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
 		     MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)
-- 
2.5.5

