This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead


On 02/13/2015 02:10 PM, Mel Gorman wrote:
> On Thu, Feb 12, 2015 at 06:58:14PM +0100, Julian Taylor wrote:
>>> On Mon, Feb 09, 2015 at 03:52:22PM -0500, Carlos O'Donell wrote:
>>>> On 02/09/2015 09:06 AM, Mel Gorman wrote:
>>>>> while (data_to_process) {
>>>>> 	buf = malloc(large_size);
>>>>> 	do_stuff();
>>>>> 	free(buf);
>>>>> }
>>>>
>>>> Why isn't the fix to change the application to hoist the
>>>> malloc out of the loop?
>>>
>>> I understand this is impossible for some language idioms (typically
>>> OOP, and despite my personal belief that this indicates they're bad
>>> language idioms, I don't want to descend into that type of argument),
>>> but to me the big question is:
>>>
>>> Why, when you have a large buffer -- so large that it can effect
>>> MADV_DONTNEED or munmap when freed -- are you doing so little with it
>>> in do_stuff() that the work performed on the buffer doesn't dominate
>>> the time spent?
>>>
>>> This indicates to me that the problem might actually be significant
>>> over-allocation beyond the size that's actually going to be used. Do
>>> we have some real-world specific examples of where this is happening?
>>> If it's poor design in application code and the applications could be
>>> corrected, I think we should consider whether the right fix is on the
>>> application side.
>>>
>>
>>
>> I also ran into this issue numerous times, also filed a bug:
>> https://sourceware.org/bugzilla/show_bug.cgi?id=17195
>>
> 
> Thanks for pointing that out. I read the report and the original report
> and do not understand why it was considered a duplicate. They are
> completely different issues.
> 
>> As a real-world example I have higher-level numerical software.
>> E.g. in python numpy you write code like this:
>> a = b + c + d
>> where these are large arrays. Due to limitations of the library and
>> python, this involves allocating multiple large arrays while the
>> operations on the memory itself are very small.
> 
> Is there any chance you could supply a simple test case in python for
> this? Your description is straight-forward and I suspect the resulting
> script will be just a few lines long but I want to be sure I see the
> same problem.

sure, you can easily construct many cases where you see this problem
with python numpy; a particularly bad one, which caused me to file the
bug, is:

import numpy as np
def f():
    # ~8 MB float64 array; the boolean indexing below allocates a fresh
    # multi-megabyte result on every loop iteration and frees it right
    # away; this is the malloc/free pattern discussed in this thread
    d = np.arange(1000000.) / 2
    d[::10] = np.nan
    c2 = ~np.isnan(d)
    for needle in range(1000):
        d[c2]

import threading
# running the same function in two threads is what triggers the
# excessive page-fault overhead; single-threaded it does not appear
t = [threading.Thread(target=f) for x in range(2)]
for x in t:
    x.start()
for x in t:
    x.join()



A perf profile on Ubuntu 14.10 (kernel 3.16):
  26.00%  python  multiarray.so            [.] array_boolean_subscript
  20.23%  python  libc-2.19.so             [.] __memmove_ssse3_back
   9.80%  python  [kernel.kallsyms]        [k] clear_page_c_e
   7.03%  python  [kernel.kallsyms]        [k] page_fault
   3.09%  python  multiarray.so            [.] count_boolean_trues
   3.08%  python  [kernel.kallsyms]        [k] mem_cgroup_charge_anon
   2.11%  python  [kernel.kallsyms]        [k] get_page_from_freelist
   1.81%  python  [kernel.kallsyms]        [k] __mem_cgroup_commit_charge

plus a lot of smaller kernel VM functions

in total we have a good 25-30% overhead here due to page faulting that
does not appear when running this script without starting the threads.
System time with 2 threads is 5 seconds out of a total runtime of 6
seconds; with no threading it is 0.3s system time out of 5s total
(faster than the threaded run).
Note that the exact performance characteristics depend on the numpy
version; I was using current git head for this profile.

The minimal openmp test case in bug #17195 is reduced from this python
test case.
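
For reference, the kind of OpenMP reproducer meant here looks roughly
like the sketch below (this is not the exact test case attached to the
bug; the 8 MiB buffer size and iteration count are only illustrative):
each thread repeatedly allocates, touches and frees a buffer large
enough to cross malloc's trim/mmap thresholds.

/* Build with: gcc -fopenmp -O2 reproducer.c -o reproducer */
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE (8 * 1024 * 1024)  /* illustrative size, well above the
                                       default malloc thresholds */
#define ITERATIONS 1000

int main(void)
{
    #pragma omp parallel
    {
        for (int i = 0; i < ITERATIONS; i++) {
            char *buf = malloc(BUF_SIZE);
            if (!buf)
                abort();
            /* touch every page so the refault cost is actually paid */
            memset(buf, 1, BUF_SIZE);
            free(buf);
        }
    }
    return 0;
}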

> 
> Alternatively, would you be in a position to test v2 of this patch and
> see if the performance of your application can be addressed by tuning
> the trim threshold to a high value?
> 

I can give it a try, though the openmp test case from the bug should be
the same problem, so you can hopefully try that yourself.
Because setting the thresholds currently does nothing, numpy doesn't
even try to tune malloc to its expected workload.
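
For what it's worth, if the thresholds were honoured, the
application-side tuning you mention would just be a couple of mallopt()
calls at startup; a rough sketch (the helper name and the 64 MiB value
are only illustrative, and whether it actually helps is exactly what is
in question here) of what numpy could try:

#include <malloc.h>

static void tune_malloc_for_large_temporaries(void)
{
    /* keep up to 64 MiB of freed memory in the heap instead of
       returning it to the kernel with madvise/munmap */
    mallopt(M_TRIM_THRESHOLD, 64 * 1024 * 1024);
    /* serve large allocations from the heap rather than via mmap(),
       so freeing them does not immediately unmap the pages */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024 * 1024);
}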

