This is the mail archive of the libc-ports@sources.redhat.com mailing list for the libc-ports project.


Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.


On Wed, Sep 04, 2013 at 01:00:09PM +0530, Siddhesh Poyarekar wrote:
> On Tue, Sep 03, 2013 at 03:15:25PM -0400, Carlos O'Donell wrote:
> > I agree. The eventual goal of the project is to have some kind of
> > whole system benchmarking that allows users to feed in their profiles
> > and allow us as developers to see what users are doing with our library.
> > 
> > Just like CPU designers feed in a whole distribution of applications
> > and look at the probability of instruction selection and tweak
> > instruction-to-microcode mappings.
> > 
> > I am willing to accept a certain error in the process as long as I know
> > we are headed in the right direction. If we all disagree about the
> > direction we are going in then we should talk about it.
> > 
> > I see:
> > 
> > microbenchmarks -> whole system benchmarks -> profile driven optimizations
> 
> I've mentioned this before - microbenchmarks are not a path to whole
> system benchmarks, in that they do not replace system benchmarks.  We
> need to work on both in parallel because the two have different goals.
> 
> A microbenchmark would have parameters such as alignment, size and
> cache pressure to determine how an implementation scales.  These are
> generic numbers (i.e. they're not tied to specific high level
> workloads) that a developer can use to design their programs.
> 
> Whole system benchmarks however work at a different level.  They would
> give an average case number that describes how a specific recipe
> impacts performance of a set of programs.  An administrator would use
> these to tweak the system for the workload.
> 
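A minimal sketch of such a microbenchmark (hypothetical; the buffer
sizes and iteration counts here are arbitrary, and this is not the
actual glibc benchtests harness) could time memcpy over an
alignment/size grid:

#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUF 4096

static char src[BUF + 64], dst[BUF + 64];

/* Time one (alignment, size) combination.  */
static double
bench (size_t align, size_t size, int iters)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    memcpy (dst + align, src + align, size);
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

int
main (void)
{
  for (size_t align = 0; align < 16; align += 4)
    for (size_t size = 16; size <= BUF; size *= 4)
      printf ("align=%zu size=%zu: %.0f ns\n",
              align, size, bench (align, size, 100000));
  return 0;
}

Cache pressure could be added as a third axis by touching a buffer of
varying size between the timed calls.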
> > I would be happy to accept a patch that does:
> > * Shows the benchmark numbers.
> > * Explains relevant factors not caught by the benchmark that affect
> >   performance, what they are, and why the patch should go in.
> > 
> > My goal is to increase the quality of the written rationales for
> > performance related submissions.
> 
> Agreed.  In fact, this should go in as a large comment in the
> implementation itself.  Someone had mentioned in the past (was it
> Torvald?) that every assembly implementation we write should be as
> verbose in comments as it can possibly be so that there is no
> ambiguity about the rationale for selection of specific instruction
> sequences over others.
> 
> > >> If we have N tests and they produce N numbers, for a given target,
> > >> for a given device, for a given workload, there is a set of importance
> > >> weights on N that should give you some kind of relevance.
> > >>
> > > You are jumping to the case where we already have these weights.
> > > The problematic part is getting them.
> > 
> > I agree.
> > 
> > It's hard to know the weights without having an intuitive understanding
> > of the applications you're running on your system and what's relevant
> > for their performance.
> 
> 1. Assume aligned input.  Nothing should take (any noticeable)
>    performance away from aligned copies/moves
Not very useful, as this is extremely dependent on the function being
measured. For functions like strcmp and strlen, alignments are mostly
random, so the aligned case does not say much. At the opposite end of
the spectrum is memset, which is almost always 8-byte aligned, so
measuring unaligned performance does not make a lot of sense.

> 2. Scale with size
Not very important, for several reasons. One is that big sizes are
cold (just look at oprofile output: the loop part of an implementation
is hit less often than its header).

The second reason is that if we look at the caller, large sizes are
unlikely to be the bottleneck.

One type of usage is finding a delimiter, like:

n = strlen(s);
for (i = 0; i < n; i++)
  something();

Here, for large n, the strlen contribution is likely small.

A second one is skipping parts of a string. Consider the following:

while ((p = strchr(p + 1, 'a'))) {
  something();
}

If p points to a 1000-byte buffer, then the best case is when 'a' is
not there and we do one 1000-byte strchr. The worst case is when the
string consists entirely of 'a's and we need to call a 1-byte strchr
1000 times.
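To make the two extremes concrete, here is a sketch (with made-up
1000-byte inputs) that counts the strchr calls in each case:

#include <stdio.h>
#include <string.h>

static int
count_calls (const char *s)
{
  int calls = 1;                /* the initial scan */
  const char *p = strchr (s, 'a');
  while (p != NULL)
    {
      calls++;                  /* one more call per match found */
      p = strchr (p + 1, 'a');
    }
  return calls;
}

int
main (void)
{
  char best[1001], worst[1001];
  memset (best, 'b', 1000);     /* best case: no 'a' at all */
  memset (worst, 'a', 1000);    /* worst case: nothing but 'a' */
  best[1000] = worst[1000] = '\0';
  printf ("best: %d calls, worst: %d calls\n",
          count_calls (best), count_calls (worst));  /* 1 vs. 1001 */
  return 0;
}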

> 3. Provide acceptable performance for unaligned sizes without
>    penalizing the aligned case

This is quite an important case, and it should be measured correctly:
what matters is that the alignment varies. An implementation can be
slower in reality, where alignment varies, than in a benchmark that
picks one fixed alignment.
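One way to measure that (a sketch only; the rand()-based harness and
the sizes are arbitrary, not any existing framework) is to precompute
a random alignment per iteration, so the branch predictor cannot
settle on a single fixed case:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define ITERS 100000

static char src[4096], dst[4096];
static size_t aligns[ITERS];

static double
now_ns (void)
{
  struct timespec t;
  clock_gettime (CLOCK_MONOTONIC, &t);
  return t.tv_sec * 1e9 + t.tv_nsec;
}

int
main (void)
{
  /* Precompute alignments outside the timed region.  */
  for (int i = 0; i < ITERS; i++)
    aligns[i] = rand () % 16;

  /* Fixed alignment: the predictor learns the single case, so the
     result can look better than real workloads.  */
  double t0 = now_ns ();
  for (int i = 0; i < ITERS; i++)
    memcpy (dst + 8, src + 8, 512);
  double fixed = now_ns () - t0;

  /* Varying alignment: closer to what callers actually do.  */
  t0 = now_ns ();
  for (int i = 0; i < ITERS; i++)
    memcpy (dst + aligns[i], src + aligns[i], 512);
  double varying = now_ns () - t0;

  printf ("fixed: %.0f ns, varying: %.0f ns\n", fixed, varying);
  return 0;
}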

> 4. Measure the effect of dcache pressure on function performance
> 5. Measure effect of icache pressure on function performance.
> 
Here you really need to base the weights on function usage patterns.
A bigger code size is acceptable for functions that are called more
often, but you need to see the distribution of how calls are clustered
to get the full picture. strcmp is the least sensitive to icache
concerns: when it is called, it is mostly called 100 times over in a
tight loop, so size is not a big issue. If the same number of calls
were spread uniformly through the program, we would need stricter
criteria.
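For the dcache side, one common technique (again only a sketch with
arbitrary sizes, not a particular existing harness) is to sweep the
working set across the cache hierarchy and watch where the cost per
byte jumps:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double
now_ns (void)
{
  struct timespec t;
  clock_gettime (CLOCK_MONOTONIC, &t);
  return t.tv_sec * 1e9 + t.tv_nsec;
}

int
main (void)
{
  /* Working sets from 4 KiB (fits in L1) up to 64 MiB (DRAM); the
     per-byte cost rises as the set outgrows each cache level.  */
  for (size_t ws = 4096; ws <= (size_t) 64 << 20; ws *= 4)
    {
      char *buf = malloc (ws);
      memset (buf, 1, ws);          /* fault the pages in first */
      double t0 = now_ns ();
      for (int rep = 0; rep < 8; rep++)
        memset (buf, rep, ws);      /* function under test */
      double dt = now_ns () - t0;
      printf ("ws=%8zu: %.3f ns/byte\n", ws, dt / (8.0 * ws));
      free (buf);
    }
  return 0;
}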

> Depending on the actual cost of cache misses on different processors,
> the icache/dcache miss cost would either have higher or lower weight
> but for 1-3, I'd go in that order of priorities with little concern
> for unaligned cases.
> 
> Siddhesh

