This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug malloc/14581] glibc leaks memory and does not reuse it after free (leading to unlimited RSS growth)


http://sourceware.org/bugzilla/show_bug.cgi?id=14581

--- Comment #7 from Kirill Korotaev <dev at parallels dot com> 2012-09-16 09:44:02 UTC ---
(In reply to comment #6)
> Could you explain what you mean by "if you print virtual addresses of allocated
> objects and kernel VMAs, i.e. you will find a huge unused memory extents which
> are never reused by glibc"? I'm not aware of any way the kernel VMA data could
> inform you about heap utilization, which is entirely under userspace control.

It's very simple. /proc/pid/status reports both RSS and VSZ.
RSS tells you how much physical memory the kernel has really allocated. If you
add a memset() of each object right after it is allocated, you will find that it is
really 700MB, which matches VSZ as well, i.e. this memory is committed.
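A minimal sketch of what I mean, in case it helps (the object count and size here
are just for illustration; the VmRSS/VmSize field names are from /proc/self/status):

/* Touch each allocation so the kernel commits its pages,
 * then read VmRSS/VmSize from /proc/self/status. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void print_mem_usage(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof(line), f))
        if (!strncmp(line, "VmRSS:", 6) || !strncmp(line, "VmSize:", 7))
            fputs(line, stdout);
    fclose(f);
}

int main(void)
{
    enum { N = 1000, SZ = 64 * 1024 };   /* illustrative values only */
    void *p[N];
    for (int i = 0; i < N; i++) {
        p[i] = malloc(SZ);
        memset(p[i], 0, SZ);   /* commit the pages, so RSS reflects them */
    }
    print_mem_usage();
    for (int i = 0; i < N; i++)
        free(p[i]);
    return 0;
}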

> I did a simulation of your test loop with much smaller sizes using pen and
> paper, and 
> 
> With SSIZE=2, ALIGN=8, LSIZE=5, NS=[many], NL=4:
> 
> 3z: SS LLLLLSS LLLLLSS LLLLLSS LLLLL
> 4a: SS      SS LLLLLSS LLLLLSS LLLLL
> 4z: SSSS    SS LLLLLSS LLLLLSS LLLLLLLLLL
> 5a: SSSS    SS      SS LLLLLSS LLLLLLLLLL
> 5z: SSSSSS  SS LLLLLSS LLLLLSS LLLLLLLLLL
> 6a: SSSSSS  SS LLLLLSS      SS LLLLLLLLLL
> 6z: SSSSSSSSSS LLLLLSS LLLLLSS LLLLLLLLLL
> 7a: SSSSSSSSSS LLLLLSS LLLLLSS      LLLLL
> 7z: SSSSSSSSSS LLLLLSS LLLLLSSSS    LLLLLLLLLL
> ...
> 
> where 3z means "at the end of iteration 3" and 4a means "after the free steps
> of iteration 4", etc. I might have gotten some details wrong, but it seems this
> pattern necessarily enforces fragmentation by destroying the alignment
> potential of each LSIZE-sized free range.

The first 500 iterations are not that interesting, because they do not free any
previously allocated objects.
Have you noticed that the array indices wrap around after NL and NS iterations,
and that is when the most interesting part begins? A rough sketch of the pattern
is below.
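This is only how I would summarize the loop, not the reproducer itself (that is
attached to the bug); NS, NL, SSIZE, LSIZE and ALIGN are placeholders:

/* Rough sketch of the allocation pattern.  The pointer arrays are
 * zero-initialized, so free(NULL) is a no-op until the index wraps;
 * after that every iteration frees an old object and allocates a new one. */
#include <malloc.h>
#include <stdlib.h>
#include <string.h>

#define NS    500            /* small objects kept live (placeholder) */
#define NL    500            /* large objects kept live (placeholder) */
#define SSIZE 600            /* small object size (placeholder) */
#define LSIZE (100 * 1024)   /* large object size (placeholder) */
#define ALIGN 64

int main(void)
{
    static void *s[NS], *l[NL];
    for (long i = 0; i < 1000000; i++) {
        free(s[i % NS]);
        s[i % NS] = memalign(ALIGN, SSIZE);
        free(l[i % NL]);
        l[i % NL] = memalign(ALIGN, LSIZE);  /* plain malloc() here hides the problem */
        memset(s[i % NS], 0, SSIZE);         /* touch the memory so RSS reflects it */
        memset(l[i % NL], 0, LSIZE);
    }
    return 0;
}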


> Obviously there are some allocation strategies that avoid the issue, but I
> don't see it being avoidable in dlmalloc-type schemes in general. If you have
> an idea for how to avoid it without destroying the good properties of the
> allocator strategy, please share.

It looks like I am starting to see what you mean...

Actually, in theory an allocator should never consume more than about
2*allocated_size of physical RAM due to fragmentation on a pattern like this, right?
(The reasoning is simple: if it has consumed more than 2x, then there must be unused
holes larger than any single object, so it could have allocated from them...) In our
case we see roughly a 10x ratio...
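To put that ratio in concrete terms, the ~10x figure is essentially this
measurement (live_bytes would be whatever byte count the test still holds,
rss_bytes the VmRSS value read as in the earlier snippet):

#include <stdio.h>
#include <stddef.h>

/* live_bytes: sum of the sizes of all objects not yet freed.
 * rss_bytes:  VmRSS from /proc/self/status.
 * A fragmentation-only explanation would keep the ratio near or below 2;
 * the test reportedly shows values around 10. */
void report_ratio(size_t rss_bytes, size_t live_bytes)
{
    if (live_bytes == 0)
        return;
    printf("RSS/live ratio = %.1f\n",
           (double)rss_bytes / (double)live_bytes);
}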

And there are many allocators which behave like that: TCMalloc, buddy allocators, etc.
What is unnatural in this test is that replacing memalign() with malloc() fixes
the problem...

-- 
Configure bugmail: http://sourceware.org/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.

