This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: [PATCH] Support separate benchmark outputs


On Tue, Apr 16, 2013 at 11:09:46PM +0530, Siddhesh Poyarekar wrote:
> On 16 April 2013 21:20, Ondřej Bílka <neleai@seznam.cz> wrote:
> > That is my point: you must measure relative performance.  However, the
> > code above does not measure performance.  In a simple test
> 
> No, your idea of relative performance is different from mine -
> actually I can't call it mine since these are already existing tests
> (written by Jakub I think) that I'm only copying over.  

How do you want to use the data that you get from these benchmarks?
Could you say, if I came up with a new implementation, whether firefox
would run faster or slower?

> From your
> description it seems like your definition of relative is the original
> memcpy vs the modified memcpy.  Here 'relative' implies comparison
> between multiple implementations of functions, i.e. the sse3, sse4,
> avx, etc., then the generic implementation, and finally the simple
> byte copy/move/write, etc.

I   measure several implementations (glibc, modified, byte, qwords)
You measure several implementations (sse3,  sse4,     byte, avx)

Where is the difference?

> 
> > According to the sequential test, the glibc implementation is better
> > than mine by 15%.  However, when I sample randomly, my implementation
> > becomes 33% better than the glibc one.
> 
> There's a do_random_tests in memcpy (and possibly in others too, I
> haven't checked) that is there just for correctness testing.  It can be
> trivially modified to measure the cost of the calls and made into a
> reasonable random sampling benchmark.
> 
You can, but not easily.  You must avoid several pitfalls.  See the file
tests/rand.c in my microbenchmark.
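
To illustrate, a minimal sketch of the kind of random sampling loop I
mean could look like the following.  It is not the code from
tests/rand.c; the pool size, iteration count and the do_copy
indirection are only example choices.  It shows two of the pitfalls:
the random parameters are generated outside the timed loop, and the
call goes through a volatile function pointer so the compiler cannot
substitute its builtin memcpy or hoist work out of the loop.

/* Minimal sketch of a randomized memcpy benchmark, for illustration
   only; this is not the code from tests/rand.c.  The pool size,
   iteration count and MAXLEN are arbitrary.  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define POOL   (1 << 24)	/* Pool larger than the caches.  */
#define CALLS  100000
#define MAXLEN 512

/* Calling through a volatile function pointer keeps the compiler from
   replacing the call with its builtin or hoisting it out of the loop.  */
static void *(*volatile do_copy) (void *, const void *, size_t) = memcpy;

static char src[POOL], dst[POOL];
static size_t soff[CALLS], doff[CALLS], len[CALLS];

int
main (void)
{
  struct timespec t0, t1;
  size_t i;

  /* Generate parameters up front; calling rand () inside the timed
     loop would charge its cost and cache footprint to memcpy.  */
  srand (1);
  for (i = 0; i < CALLS; i++)
    {
      len[i] = rand () % MAXLEN;
      soff[i] = rand () % (POOL - MAXLEN);
      doff[i] = rand () % (POOL - MAXLEN);
    }

  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (i = 0; i < CALLS; i++)
    do_copy (dst + doff[i], src + soff[i], len[i]);
  clock_gettime (CLOCK_MONOTONIC, &t1);

  printf ("%.1f ns/call\n",
	  ((t1.tv_sec - t0.tv_sec) * 1e9 + t1.tv_nsec - t0.tv_nsec) / CALLS);
  return 0;
}

Sampling offsets from a pool larger than the caches is what makes the
sequential and random numbers diverge: a sequential loop keeps the same
lines hot and the branch predictors trained on a single size, which can
favour a different implementation than a cold, mixed-size workload.
With older glibc this may need -lrt for clock_gettime.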

