This is the mail archive of the libc-help@sourceware.org mailing list for the glibc project.


Re: Possible malloc/mmap bug?


On Thu, Nov 17, 2011 at 7:16 PM, Chuck Hines <hines@cert.org> wrote:
> Hopefully I'm not crazy and one of you (developers) will be able to use
> my test code to recreate the problem I observed and know how to deal
> with it.  Or maybe I'll have more brainpower available in the morning to
> continue doing some investigations and see if I can report back here
> with more info...

Your application's allocation pattern is a pathological case of VM fragmentation.

On any reasonable system you'll get ENOMEM from mmap well before you
reach M_MMAP_MAX.
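
Your test program isn't quoted here, but the trace below is consistent
with a grow-then-shrink-then-keep allocation pattern. A minimal sketch
of that kind of pattern (the sizes are illustrative, and the small
remnants are deliberately kept alive):
~~~
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    for (unsigned long i = 0; ; i++) {
        /* A large request: anything over M_MMAP_THRESHOLD (128 KiB
           by default) is serviced by a fresh mmap. */
        char *buf = malloc(520 * 1024);
        if (buf == NULL) {
            fprintf(stderr, "malloc: ENOMEM after %lu iterations\n", i);
            return 1;
        }
        memset(buf, 1, 520 * 1024);

        /* Shrink the chunk (the mremap in the trace) and keep the
           small remnant alive.  Each remnant pins an island in the
           address space, and the hole left behind is slightly too
           small for the next large request. */
        buf = realloc(buf, 8 * 1024);
        if (buf == NULL)
            return 1;
        /* buf is intentionally never freed. */
    }
}
~~~
On a 32-bit address space this runs out of room after some thousands of
iterations, even though very little memory is actually in use.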

On a small local 32-bit system I see the failure at ~22,000 iterations:
~~~
1321628743.652568 mremap(0xbff7e000, 532480, 8192, MREMAP_MAYMOVE) = 0xbff7e000
1321628743.652650 mmap2(NULL, 532480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
1321628743.653300 brk(0)                = 0x96f4000
1321628743.653369 brk(0x9796000)        = 0x96f4000
1321628743.653442 mmap2(NULL, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
~~~
I have a glibc built with debugging, and the failure mode is as follows:
* The mremap is the realloc shrinking the existing allocation.
* The mmap2 fails with ENOMEM because the kernel has no contiguous
stretch of address space left for a mapping of that size.
* Glibc then tries brk, which *also* fails. Notice that the returned
address is unchanged, indicating no more heap space is left.
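
From the application's point of view, all of those system calls happen
inside a single libc call. Assuming the failing call is the large
allocation behind the first mmap2 (my reading of the trace, not
something it shows directly), the caller sees nothing but a NULL
return:
~~~
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Illustrative size matching the failing mmap2 in the trace.
       Per the trace, glibc tries mmap2, falls back to brk, then
       tries one more mmap2; when all three fail, malloc returns
       NULL and sets errno to ENOMEM. */
    char *p = malloc(520 * 1024);
    if (p == NULL && errno == ENOMEM)
        fprintf(stderr, "out of address space (not necessarily out of RAM)\n");
    free(p);
    return 0;
}
~~~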

At this point the VM is entirely fragmented and the current glibc
malloc implementation can't recover.

You *could* argue that this is a bug in the glibc malloc
implementation, but the fix (compacting allocations during realloc)
would slow down other, well-behaved applications.

You need to rewrite your application to:
* Allocate and maintain a single working buffer (or one per thread).
* When the work in the working buffer is complete, copy the result
into a newly allocated buffer of exactly the required size.
* Reuse the single working buffer for the next piece of work.

This will prevent the fragmentation and should also make your
application faster.
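
A minimal, self-contained sketch of that pattern; the buffer sizes,
the job count, and the memset standing in for the real work are all
illustrative:
~~~
#include <stdlib.h>
#include <string.h>

#define SCRATCH_SIZE (520 * 1024) /* illustrative: largest job size */
#define RESULT_SIZE  64           /* illustrative: finished-result size */
#define NJOBS        22000

int main(void)
{
    /* One working buffer, allocated once and reused for every job
       (with threads, allocate one per thread). */
    char *scratch = malloc(SCRATCH_SIZE);
    char **results = calloc(NJOBS, sizeof *results);
    if (scratch == NULL || results == NULL)
        return 1;

    for (int i = 0; i < NJOBS; i++) {
        /* Stand-in for the real work done in the working buffer. */
        memset(scratch, i & 0xff, SCRATCH_SIZE);

        /* Copy the finished result into a small, right-sized buffer.
           Only these stable allocations are kept long-term, so the
           address space never accumulates large holes. */
        results[i] = malloc(RESULT_SIZE);
        if (results[i] == NULL)
            break;
        memcpy(results[i], scratch, RESULT_SIZE);

        /* scratch is reused on the next iteration instead of being
           freed or reallocated. */
    }

    for (int i = 0; i < NJOBS; i++)
        free(results[i]);
    free(results);
    free(scratch);
    return 0;
}
~~~
The long-lived allocations are all small and uniform, so even tens of
thousands of them pack densely instead of leaving half-megabyte holes.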

Cheers,
Carlos.

