Re: [PATCH][AArch64] Optimized memset


On Tue, Aug 11, 2015 at 02:02:25PM +0100, Wilco Dijkstra wrote:
> > Ondřej Bílka wrote:
> > On Fri, Jul 31, 2015 at 04:02:12PM +0100, Wilco Dijkstra wrote:
> > > This is an optimized memset for AArch64. Memset is split into 4 main cases: small sets of
> > > up to 16 bytes, and medium sets of 16..96 bytes, which are fully unrolled. Large memsets of
> > > more than 96 bytes align the destination and use an unrolled loop processing 64 bytes per
> > > iteration. Memsets of zero of more than 256 bytes use the dc zva instruction, and there are
> > > faster versions for the common ZVA sizes 64 and 128. STP of Q registers is used to reduce
> > > codesize without loss of performance.
> > >
> > > Speedup on test-memset is 1% on Cortex-A57 and 8% on Cortex-A53. On a random test with
> > > varying sizes and alignments the new version is 50% faster.
> > >
> > > OK for commit?
> > >
> > The strategy for smaller sizes is quite similar to the one on x64. Could you
> > comment on why you chose this control flow? It isn't clear where you
> > should stop with full unrolling; I recall that with some gcc workloads the
> > majority of calls had size 192, so unrolling to 256 bytes obviously gave a speedup.
> 
> Further unrolling may well be beneficial in some cases, but for that
> I need to compare actual data. GCC appears to almost exclusively hit 
> the dc zva case according to profiles, so the memsets must be larger
> than 256 bytes.
>
Sorry, I recalled that piece of data wrong. It's memcpy that does that;
memset frequently does larger calls.
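
As an aside, for anyone following the thread without reading the assembly, the
case split described in the quoted patch description corresponds roughly to the
C model below. This is only a sketch: memset_model is a made-up name, the
memset() calls stand in for the unrolled SIMD stores and for dc zva, and the
cutoffs are the ones quoted above.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Rough model of the control flow; the real code is hand-written
   AArch64 assembly.  */
void *
memset_model (void *dstp, int c, size_t n)
{
  unsigned char *d = dstp;

  if (n <= 16)
    memset (d, c, n);              /* small: a few overlapping stores in the real code */
  else if (n <= 96)
    {
      /* Medium: 16-byte stores with an overlapping tail
         (fully unrolled in the real code).  */
      for (size_t i = 0; i + 16 <= n; i += 16)
        memset (d + i, c, 16);
      memset (d + n - 16, c, 16);
    }
  else
    {
      /* Large: unaligned 16-byte head, align the destination, then 64
         bytes per iteration; the real code uses STP of Q registers here,
         and dc zva for zeroing of more than 256 bytes.  */
      unsigned char *a = (unsigned char *) (((uintptr_t) d + 15) & ~(uintptr_t) 15);
      unsigned char *end = d + n;
      memset (d, c, 16);
      while (a + 64 <= end)
        {
          memset (a, c, 64);
          a += 64;
        }
      memset (end - 64, c, 64);    /* overlapping tail */
    }
  return dstp;
}

The overlapping head and tail stores are what let every path avoid a
byte-granular cleanup loop.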
 
> > I also got some ideas to handle the small case with conditional moves/
> > masked moves. As AArch64 doesn't have a conditional move, only a select
> > (csel), would it be possible to handle the small case by
> > 
> > address4 = (size & 4) ? address : stack;
> > *((int32_t *) address4) = vc;
> > address2 = (size & 2) ? address + size - 2 : stack;
> > *((int16_t *) address2) = vc;
> > address1 = (size & 1) ? address + (size & 4) : stack;
> > *((char *) address1) = vc;
> > 
> > I didn't test whether it makes an improvement, but it looks likely.
> 
> That might be faster on some cores, but it's not clear that sizes 0-3
> or 0-7 are common enough for it to matter.
> 
True. I mentioned that mainly because these are the worst cases on my benchmark,
where setting one byte is slower than setting 256 bytes.
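
For concreteness, a self-contained C model of that select idea for sizes 0-7
could look like the sketch below (set_0_7 is a made-up name, and the scratch
word plays the role of the stack slot that absorbs the stores that aren't
needed):

#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Branchless set of 0-7 bytes: every store always executes, but stores
   that are not needed are redirected to a scratch location, which is
   what a csel-based version would do.  */
static void
set_0_7 (unsigned char *p, unsigned char c, size_t n)
{
  uint64_t scratch = 0;                    /* absorbs the dead stores */
  unsigned char *s = (unsigned char *) &scratch;
  uint32_t v4 = 0x01010101u * c;           /* byte replicated 4 times */
  uint16_t v2 = (uint16_t) (0x0101u * c);

  unsigned char *p4 = (n & 4) ? p : s;
  memcpy (p4, &v4, 4);                     /* bytes [0, 4) when n >= 4 */
  unsigned char *p2 = (n & 2) ? p + n - 2 : s;
  memcpy (p2, &v2, 2);                     /* last two bytes when n & 2 */
  unsigned char *p1 = (n & 1) ? p + (n & 4) : s;
  *p1 = c;                                 /* the odd byte when n & 1 */
}

int
main (void)
{
  unsigned char buf[8];
  for (size_t n = 0; n < 8; n++)
    {
      memset (buf, 0xee, sizeof buf);
      set_0_7 (buf, 0xab, n);
      for (size_t i = 0; i < sizeof buf; i++)
        assert (buf[i] == (i < n ? 0xab : 0xee));
    }
  return 0;
}

Compiled for AArch64, the ?: selects should turn into csel, so there are no
data-dependent branches for the tail.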

I realized that this could also be used for larger sizes; you could use
tricks like this to reduce the branch misprediction penalty:

x[n >= 8 ? 1 : 0] = vc;
x[n >= 16 ? 2 : 0] = vc;
x[n >= 24 ? 3 : 0] = vc;
x[n >= 32 ? 4 : 0] = vc;
x[n >= 40 ? 5 : 0] = vc;
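
Extended into a complete branchless set for a medium range, it could look like
the sketch below (set_8_48 is again a made-up name; stores that aren't needed
are redirected back to offset 0, which has already been written, so they are
harmless):

#include <stdint.h>
#include <string.h>
#include <assert.h>

/* Branchless set of 8-48 bytes: unconditional head and tail stores plus
   four conditional 8-byte stores whose addresses are selected, not
   branched on.  */
static void
set_8_48 (unsigned char *p, unsigned char c, size_t n)
{
  uint64_t v = 0x0101010101010101ull * c;  /* byte replicated 8 times */

  memcpy (p, &v, 8);                       /* head: [0, 8) */
  memcpy (p + n - 8, &v, 8);               /* tail: [n-8, n) */
  memcpy (p + (n > 16 ? 8 : 0), &v, 8);
  memcpy (p + (n > 24 ? 16 : 0), &v, 8);
  memcpy (p + (n > 32 ? 24 : 0), &v, 8);
  memcpy (p + (n > 40 ? 32 : 0), &v, 8);
}

int
main (void)
{
  unsigned char buf[64];
  for (size_t n = 8; n <= 48; n++)
    {
      memset (buf, 0xee, sizeof buf);
      set_8_48 (buf, 0xab, n);
      for (size_t i = 0; i < sizeof buf; i++)
        assert (buf[i] == (i < n ? 0xab : 0xee));
    }
  return 0;
}

Whether this beats the unrolled branches would depend on the size distribution
and how well the core predicts it, so it needs measuring.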


> > The real performance impact of this is tricky to judge, as it heavily depends
> > on what the caller does, so the only definitive way is to take programs that
> > use it (like gcc) and run an overnight test to see whether you get a 1%
> > improvement in total running time or not.
> > 
> > Here I would also be interested in how this improves on the dryrun
> > data.
> 
> I think a 1% improvement would be hard to measure in an actual running system.
> Collecting statistics would be more interesting, as they can be played back
> as part of a benchmark in a controlled environment.
> 
No, what I mean is repeatedly running, say, gcc test.c -O3 and measuring the
running time of that.

Dryrun is that controlled environment, but it doesn't measure all effects, so
measuring live programs is better.

