This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Simple malloc benchtest.


Sorry, I forgot that the Android email client only sends HTML.  Resending.

Siddhesh

On 23 December 2013 19:22, Siddhesh Poyarekar
<siddhesh.poyarekar@gmail.com> wrote:
>
> On 23-Dec-2013 7:02 pm, "Ondřej Bílka" <neleai@seznam.cz> wrote:
>>
>> On Mon, Dec 23, 2013 at 04:39:12PM +0530, Siddhesh Poyarekar wrote:
>> > On Mon, Dec 23, 2013 at 10:50:34AM +0100, Ondřej Bílka wrote:
>> > > You cannot do that; you would repeat the same mistake that plagued
>> > > allocator research in the seventies.  Real allocation patterns are
>> > > simply different from simulations, and all that you would get from
>> > > that measurement is meaningless garbage; see the following link:
>> > >
>> > >
>> > > http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.97.5185&rep=rep1&type=pdf
>> >
>> > I don't think the conclusions of that paper are valid because their
>> > measurements are tweaked to give the most optimistic number possible.
>> > They do pretend to use a more pessimistic measurement as well, but its
>> > higher numbers are simply ignored in their conclusion, stating that
>> > they're 'misleading'.
>> >
>> Please justify your opinion; a relevant metric was:
>>
>> "3. The maximum amount of memory used by the allocator
>> relative to the amount of memory requested by the pro-
>> gram at the point of maximal memory usage."
>>
>> If that metric is valid, you have a severe problem with fragmentation
>> in the following program:
>
> I did not imply that measurement 3 is valid. I meant that the paper only
> pretends to use measurement 3 and actually relies on measurement 4.  IMO
> measurement 1, i.e. the average of the difference over time, is a better
> measurement despite the fact that spikes are not accounted for.
> Measurement 4 certainly isn't.
>
>> char *ary[1000];
>> for (int i = 0; i < 1000; i++)
>>   ary[i] = malloc (10000);
>> for (int i = 0; i < 1000; i++)
>>   ary[i] = realloc (ary[i], 100);
>> char *next = malloc (10000);
>>
>> Which according to that measure has 10000% fragmentation.
>>
>> > Additionally, we still need to account for allocator overhead (which
>> > that paper correctly ignores, given its scope),
>>
>> Not quite,
>>
>> > so I'm going to modify
>> > my request to ask for a simple measurement (which could get refined
>> > over time) of allocator overhead and fragmentation - a single number
>> > should be sufficient for now, since differentiating between allocator
>> > overhead and fragmentation is only useful when you're comparing
>> > different allocators.
>> >
>> > If you want to put out a more comprehensive measurement of
>> > fragmentation (+ overhead) over time, I'd suggest looking at memory
>> > used vs memory requested at specific intervals and simply plotting
>> > them into a graph.  Of course, the actual graph is out of scope for
>> > now, but you could at least print out a limited set of plot points
>> > that a graph generator could use.
>> >
>> As you cannot use this benchmark to compare different algorithms what
>
> Not at all. It is a great first step because we need something that does
> measurements for tweaks to the current algorithm. Separating out overhead
> and fragmentation can be visited when we have an alternative allocator to
> put in place and compare.
>
>> you propose is useless. As the sequence of addresses allocated will
>> stay the same, a possible graph stays the same, and you cannot get any
>> information from a constant.
>
> I don't understand what you mean here.
>
>> These make sense only when you do whole-system profiling, where you
>> get real data.



-- 
http://siddhesh.in

