This is the mail archive of the libc-ports@sources.redhat.com mailing list for the libc-ports project.



Re: [PATCH] ARM: Add Cortex-A15 optimized NEON and VFP memcpy routines, with IFUNC.


On Thu, Apr 18, 2013 at 10:47:26AM +0100, Will Newton wrote:
> > On 18 April 2013 10:39, Ondřej Bílka <neleai@seznam.cz> wrote:
> > On Mon, Apr 15, 2013 at 11:38:49AM +0100, Will Newton wrote:
> >> On 15 April 2013 11:06, Måns Rullgård <mans@mansr.com> wrote:
> >>
> >> Hi Måns,
> >>
> >> >> Add a high performance memcpy routine optimized for Cortex-A15 with
> >> >> variants for use in the presence of NEON and VFP hardware, selected
> >> >> at runtime using indirect function support.
> >> >
> >> > How does this perform on Cortex-A9?
> >>
> >> The code is also faster on A9 although the gains are not quite as
> >> pronounced. A set of numbers is attached (they linewrap pretty
> >> horribly inline).
> >>
> >>
> > I forgot to ask where to get the benchmark source. Without it there is no
> > way to tell if it was done correctly.
> > You must randomly vary sizes in the range n..2n and also vary alignments.
> 
> The benchmark is taken from the cortex-strings package:
> 
> https://launchpad.net/cortex-strings
> 
> I wrote a wrapper around the benchmark to vary alignment in {1, 2, 4,
> 8} and a variety of block lengths between 8 and 200.
> 
Could you post the wrapper?

The only thing I could find there is the following; is that what you meant?
http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/view/head:/tests/test-memcpy.c

If that is the case, then the benchmark contains several serious mistakes and
the data generated by it cannot be accepted.

I attached a modification of a simple benchmark used by gcc. Could you try it
and post the results to be sure?

First, place the neon implementation you want to test into a neon.s file with
the function name memcpy_neon.
Then run:
./memcpy_test 64 6000000000 gcc

The mistakes in
http://bazaar.launchpad.net/~linaro-toolchain-dev/cortex-strings/trunk/view/head:/tests/test-memcpy.c
are as follows.

The first is that the original benchmark does not vary sizes or alignments.
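
As a rough sketch of what varying both could look like (ITERS, the array names
and gen_inputs below are only illustrative, not taken from either benchmark):

#include <stdlib.h>

#define ITERS 4096

static size_t lens[ITERS];      /* randomized copy lengths          */
static size_t src_off[ITERS];   /* randomized source offsets        */
static size_t dst_off[ITERS];   /* randomized destination offsets   */

/* Pre-generate lengths in [n, 2n) and alignment offsets in [0, 64)
   so the timed loop never sees a single fixed case.  */
static void
gen_inputs (size_t n)
{
  for (int i = 0; i < ITERS; i++)
    {
      lens[i] = n + (size_t) rand () % n;
      src_off[i] = (size_t) rand () % 64;
      dst_off[i] = (size_t) rand () % 64;
    }
}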

The second is that the timing is done in a loop over the same data (see the
code below). Even if you vary the lengths, this loop will undo all your work
on randomizing inputs: every branch becomes predicted and all the data stays
in cache.

These conditions make the measured performance very different from the
performance on real inputs.

 for (i = 0; i < 32; ++i)
	{
	  HP_TIMING_NOW (start);
	  CALL (impl, dst, src, len);
	  HP_TIMING_NOW (stop);
	  HP_TIMING_BEST (best_time, start, stop);
	}
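
For contrast, here is a hedged sketch of a timed loop that walks over
pre-generated, randomized inputs (gen_inputs, lens, src_off and dst_off are
the illustrative helpers sketched above; HP_TIMING_DIFF is the glibc macro
that subtracts two timestamps, assuming the same HP_TIMING machinery):

  gen_inputs (len);
  HP_TIMING_NOW (start);
  for (i = 0; i < ITERS; ++i)
    /* A different length and alignment on every call, so branches
       stay unpredictable.  */
    CALL (impl, dst + dst_off[i], src + src_off[i], lens[i]);
  HP_TIMING_NOW (stop);
  HP_TIMING_DIFF (elapsed, start, stop);
  /* Report elapsed / ITERS as the per-call cost.  */

To also defeat the cache, the source and destination buffers would have to be
larger than the last-level cache and strided through, rather than reused.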

The third problem is that the benchmark takes the minimum over the measured
times. This does not measure the average time but the minimal time.

That is a statistically unsound practice. Any article that used a minimum in a
benchmark would immediately be rejected in review.
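
A sketch of the same inner loop with the aggregation changed from minimum to a
running total (cur and total_time are illustrative names; the inputs would
still need to be varied as above):

  total_time = 0;
  for (i = 0; i < 32; ++i)
    {
      HP_TIMING_NOW (start);
      CALL (impl, dst, src, len);
      HP_TIMING_NOW (stop);
      HP_TIMING_DIFF (cur, start, stop);
      total_time += cur;        /* accumulate instead of HP_TIMING_BEST */
    }
  /* Report total_time / 32, i.e. the mean, not the fastest single run.  */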

The reason is easy; consider the following function.

if (rand () % 4 < 1)
  sleep (1);
else
  sleep (15000);

According to the minimum metric this is 100 times faster than the one below,
despite the opposite being true.

if (rand () % 2 < 1)
  sleep (100);
else
  sleep (200);
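
To put numbers on it: the first function sleeps 1 second with probability 1/4
and 15000 seconds otherwise, so its minimum is 1 s while its expected time is
0.25 * 1 + 0.75 * 15000 = 11250.25 s. The second sleeps 100 or 200 seconds
with equal probability, so its minimum is 100 s and its expected time is
150 s. The minimum metric ranks the first one 100 times faster, while on
average it is roughly 75 times slower.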



