Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.


On 09/03/2013 03:31 PM, Ryan S. Arnold wrote:
> On Tue, Sep 3, 2013 at 11:18 AM, Carlos O'Donell <carlos@redhat.com> wrote:
>> We have one: the glibc microbenchmark, and we want to expand it.
>> Otherwise, when ACME comes along with their patch for ARM and breaks
>> performance for targets that Linaro cares about, I have no way to
>> reject the patch objectively :-)
> 
> Can you be objective in analyzing performance when two different
> people have differing opinions on what performance preconditions
> should be coded against?

No.

> There are some cases that are obvious: we know from pipeline analysis
> that certain instruction sequences can hinder performance.  That is
> objective and can be measured by a benchmark.  But saying that a
> particular change penalizes X-sized copies while helping Y-sized
> copies, when there are no published performance preconditions, isn't
> objective.  It's a difference of opinion about what's important.

I agree. The project needs to adopt some set of performance preconditions
and document them and defend them.

If we can't defend these positions then we will never be able to evaluate
any performance patches. We will see-saw between several implementations
over the years.

The current set of performance preconditions is baked into the
experience of the core developers reviewing patches. I want the experts
out of the loop.

> PowerPC has had the luxury of not having its performance
> preconditions contested.  PowerPC string performance is optimized
> based upon customer data-set analysis.  So PowerPC's preconditions
> are pretty concrete: optimize for aligned data in excess of 128 bytes
> (I believe).

We should be documenting this somewhere, preferably in a Power-specific
test that looks at just this kind of issue.
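
To make that concrete, a Power-specific test could encode the stated
precondition directly. A sketch (the sizes, alignment, and iteration
counts below are my own placeholders, not customer data):

/* Time memcpy on the case PowerPC claims to optimize for: aligned
   buffers, copies of 128 bytes or more.  Build without aggressive
   optimization so the copies are not elided.  */
#include <stdio.h>
#include <string.h>
#include <time.h>

static double
time_memcpy (void *dst, const void *src, size_t len, int iters)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    memcpy (dst, src, len);
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int
main (void)
{
  static char src[1 << 20] __attribute__ ((aligned (16)));
  static char dst[1 << 20] __attribute__ ((aligned (16)));
  for (size_t len = 128; len <= sizeof src; len *= 2)
    printf ("%8zu bytes: %f s\n", len, time_memcpy (dst, src, len, 10000));
  return 0;
}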

Documenting this statically is, in my opinion, the first stepping stone
to having something like dynamic feedback.

>> You need to statistically analyze the numbers, assign weights to ranges,
>> and come up with some kind of number that evaluates the results based
>> on *some* formula. That is the only way we are going to keep moving
>> performance forward (against some kind of criteria).
> 
> This sounds like establishing preconditions (what types of data will
> be optimized for).

I agree. We need it.
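
To sketch what I mean, collapse the per-range results into one number
using weights we agree on and document (the ranges and weights here are
placeholders we would have to argue over once):

/* Weighted score across size ranges; lower is better.  */
#include <stddef.h>

struct range_result
{
  size_t max_len;   /* Upper bound of the size range.  */
  double weight;    /* Agreed-upon importance of the range.  */
  double mean_ns;   /* Measured mean time per call.  */
};

static double
weighted_score (const struct range_result *r, int n)
{
  double sum = 0.0, wsum = 0.0;
  for (int i = 0; i < n; i++)
    {
      sum += r[i].weight * r[i].mean_ns;
      wsum += r[i].weight;
    }
  return sum / wsum;
}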

> Unless technology evolves to the point where you can statistically
> analyze data in real time and adjust the implementation based on what
> you find (an implementation with a different set of preconditions),
> you're going to end up with a lot of in-fighting over performance.

Why do you assume we'll have a lot of in-fighting over performance?

At present we've split the performance-intensive (or so we believe)
routines on a per-machine basis. The arguments then only need to be had
on a per-machine basis, and even then each hardware variant can have an
IFUNC resolver select the right routine at runtime.
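
For example, the usual GNU IFUNC pattern looks roughly like this (the
variant names and the feature bit are made up for illustration):

/* The resolver runs once at load time and picks a variant.  */
#include <stddef.h>
#include <sys/auxv.h>

#define HWCAP_FANCY_COPY (1UL << 12)   /* Hypothetical feature bit.  */

extern void *memcpy_generic (void *, const void *, size_t);
extern void *memcpy_fancy (void *, const void *, size_t);

static void *(*memcpy_resolver (void)) (void *, const void *, size_t)
{
  unsigned long hwcap = getauxval (AT_HWCAP);
  return (hwcap & HWCAP_FANCY_COPY) ? memcpy_fancy : memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
  __attribute__ ((ifunc ("memcpy_resolver")));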

Then we come to tunables, which should allow some dynamic adjustment of
an algorithm based on real-time data.
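
We don't have that machinery yet, so take this only as the shape of the
idea: a parameter the algorithm consults, set once at startup (every
name below is hypothetical, with an environment variable standing in
for a real tunable):

/* A hypothetical algorithm parameter adjusted at startup.  */
#include <stdlib.h>

static size_t copy_unroll_threshold = 128;  /* Made-up default.  */

static void
read_tunables (void)
{
  const char *s = getenv ("MEMCPY_UNROLL_THRESHOLD");  /* Stand-in.  */
  if (s != NULL)
    copy_unroll_threshold = strtoul (s, NULL, 10);
}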

> I've run into situations where I recommended that a customer code
> their own string function implementation because they continually
> encountered unaligned data when copying by value in C++ functions,
> and PowerPC's string function implementations penalized unaligned
> copies in favor of aligned copies.

Provide both in glibc and expose a tunable?
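
Roughly like this (again a sketch; the variant names are made up and an
environment variable stands in for whatever tunables machinery we
adopt):

/* Ship both variants and let a process-wide switch choose.  */
#include <stddef.h>
#include <stdlib.h>

extern void *memcpy_aligned_fast (void *, const void *, size_t);
extern void *memcpy_unaligned_fast (void *, const void *, size_t);

static void *(*chosen_memcpy) (void *, const void *, size_t);

static void
choose_memcpy (void)
{
  const char *t = getenv ("STRING_PREFER_UNALIGNED");  /* Stand-in.  */
  chosen_memcpy = (t != NULL && t[0] == '1')
                  ? memcpy_unaligned_fast
                  : memcpy_aligned_fast;
}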

Cheers,
Carlos.

