This is the mail archive of the libc-ports@sources.redhat.com mailing list for the libc-ports project.



Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.


On 09/03/2013 01:37 PM, Ondřej Bílka wrote:
>> We have one, it's the glibc microbenchmark, and we want to expand it,
>> otherwise when ACME comes with their patch for ARM and breaks performance
>> for targets that Linaro cares about I have no way to reject the patch
>> objectively :-)
>>
> Carlos, you are asking for the impossible. When you publish a benchmark,
> people will try to maximize the benchmark number. After a certain point
> this becomes possible only by employing shady accounting: move part of the
> time to a place where it will not be measured by the benchmark (for
> example by having a function that is 4kb large; in the benchmark it fits
> into the instruction cache, but that does not happen in reality).

What is it that I'm asking that is impossible?

> Taking care of the common factors that can cause that is about ten times
> more complex than whole-system benchmarking. The analysis will be quite
> difficult, as you will get twenty numbers and you will need to decide
> which ones could make a real impact and which won't.

Sorry, could you clarify this a bit more: exactly what is ten times
more complex?

If we have N tests and they produce N numbers, then for a given target,
a given device, and a given workload, there is a set of importance
weights over those N numbers that should give you some kind of relevance.

We should be able to come up with some kind of framework from which
we can clearly say "this patch is better than this other patch." Even
if it is not automated, it should be possible to reason from the results
and to record that reasoning as a discussion on this list.
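As a rough sketch of what I mean (purely illustrative; the test names,
weights, and numbers below are invented, not from the glibc benchtests),
the comparison could be as simple as a weighted sum of the per-test times:

# Illustrative only: combine per-test benchmark results into a single
# weighted score so two patches can be compared objectively.
def weighted_score(results, weights):
    """results: dict test-name -> measured time (lower is better).
       weights: dict test-name -> importance for a given target/workload."""
    return sum(weights[test] * time for test, time in results.items())

weights = {"memcpy-8b-aligned":   0.5,   # hot in the workload we care about
           "memcpy-4k-unaligned": 0.3,
           "memcpy-1m-aligned":   0.2}   # rare, so it counts for less

old = {"memcpy-8b-aligned": 10.0, "memcpy-4k-unaligned": 90.0,
       "memcpy-1m-aligned": 400.0}
new = {"memcpy-8b-aligned":  9.0, "memcpy-4k-unaligned": 95.0,
       "memcpy-1m-aligned": 380.0}

if weighted_score(new, weights) < weighted_score(old, weights):
    print("patch improves the weighted benchmark score")
else:
    print("patch regresses the weighted benchmark score")

The point is not this particular formula, but that the weights and the
resulting number are written down and can be argued about on the list.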

>>> The key advantage of the cortex-strings framework is that it allows
>>> graphing the results of benchmarks. Often changes to string function
>>> performance can only really be analysed graphically, as otherwise you
>>> end up with a huge soup of numbers, some going up, some going down, and
>>> it is very hard to separate the signal from the noise.
>>
>> I disagree strongly. You *must* come up with a measurable answer and
>> looking at a graph is never a solution I'm going to accept.
>>
> You can have that opinion.
> Looking at performance graphs is the most powerful technique for
> understanding performance. I got most of my improvements from analyzing
> them.

That is a different use for the graphs. I do not disagree that graphing
is a *powerful* way to display information, and that using that information
to produce a new routine is useful. What I disagree with is using such
graphs to argue qualitatively that your patch is better than the existing
implementation.

There is always a quantitative way to say X is better than Y, but it
requires breaking down your expectations and documenting them, e.g.
"should be faster with X alignment on sizes from N bytes to M bytes,"
and then ranking based on those criteria.
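Concretely, the documented expectations could be written as explicit
criteria and checked mechanically. Again purely illustrative (the
alignments, size ranges, and thresholds are invented for the example):

# Each criterion: for this alignment and size range, the ratio of new
# time to old time must not exceed the given threshold.
criteria = [
    # (alignment, min_size, max_size, max allowed new/old time ratio)
    ("aligned",    1,    64, 1.00),   # must not regress small copies
    ("aligned",   65,  4096, 0.95),   # must be at least 5% faster here
    ("unaligned",  1,  4096, 1.00),
]

def rank(old_times, new_times):
    """old_times/new_times: dict (alignment, size) -> measured time."""
    failures = []
    for align, lo, hi, max_ratio in criteria:
        for (a, size), old in old_times.items():
            if a == align and lo <= size <= hi:
                if new_times[(a, size)] / old > max_ratio:
                    failures.append((align, size))
    return failures   # empty list means the patch meets every criterion

An empty failure list is the objective "X is better than Y" statement;
a non-empty one tells you exactly which documented expectation the patch
missed.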

>> You need to statistically analyze the numbers, assign weights to ranges,
>> and come up with some kind of number that evaluates the results based
>> on *some* formula. That is the only way we are going to keep moving
>> performance forward (against some kind of criteria).
>>
> Accurately assigning these weights is best done by taking a program,
> running it, and measuring the time. Without taking this into account the
> weights will not tell you much, as you will likely just optimize cold
> code at the expense of hot code.

I don't disagree with you here.
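For example, the weights could be derived from a real workload rather
than guessed. A sketch, assuming we had traced the copy sizes of every
memcpy call in an application (e.g. via an LD_PRELOAD shim or a perf
probe; the trace data and bucket names below are made up):

from collections import Counter

traced_sizes = [8, 8, 16, 8, 4096, 32, 8, 64, 4096, 8]   # made-up trace

def bucket(size):
    if size <= 64:   return "memcpy-small"
    if size <= 4096: return "memcpy-medium"
    return "memcpy-large"

# The weight of each benchmark bucket is simply its share of the calls
# observed in the workload we care about.
counts = Counter(bucket(s) for s in traced_sizes)
total = sum(counts.values())
weights = {name: n / total for name, n in counts.items()}
print(weights)   # e.g. {'memcpy-small': 0.8, 'memcpy-medium': 0.2}

That keeps the weighting honest: hot sizes dominate the score, cold
sizes do not.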

Cheers,
Carlos.

