This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH][AArch64] Optimized memcpy/memmove


On Mon, Sep 28, 2015 at 10:35:20AM +0100, Wilco Dijkstra wrote:
> > I would here simply alias memcpy to memmove, as there is minimal
> > performance impact when you do the check only for sizes larger than
> > 96 bytes.
> 
> That is an option indeed; however, the entry check for memmove takes 1-2 cycles
> on most CPUs, and it means more executed branches and more I-cache footprint for
> memcpy, so I'd have to be absolutely sure it doesn't slow down memcpy.
>
You should read what I wrote more carefully, as I didn't ask for an
entry check. The check should sit just before the loop, so it would
affect performance only for sizes larger than 96 bytes. On some
processors it could even be free, as out-of-order execution hides the
cost, but you need to test that.
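
Roughly what I have in mind, as a plain C sketch (not the actual
AArch64 assembly; the function name and the temporary-buffer small
path are only illustrative assumptions): memcpy and memmove share one
code path, sizes up to 96 bytes are copied without any overlap check,
and the forward/backward decision is made only in front of the bulk
loop:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Sketch only: one entry point serving both memcpy and memmove.  */
void *
combined_move (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;

  if (n <= 96)
    {
      /* Small copies: read all bytes before writing any, so they are
	 overlap-safe without a check.  The real code would use a few
	 (possibly overlapping) SIMD loads followed by stores.  */
      unsigned char tmp[96];
      memcpy (tmp, s, n);
      memcpy (d, tmp, n);
      return dst;
    }

  /* The overlap check lives here, just before the bulk loop, so the
     common small-size path never pays for it.  */
  if ((uintptr_t) d - (uintptr_t) s >= n)
    {
      for (size_t i = 0; i < n; i++)	/* forward copy */
	d[i] = s[i];
    }
  else
    {
      for (size_t i = n; i > 0; i--)	/* backward copy, overlapping case */
	d[i - 1] = s[i - 1];
    }

  return dst;
}

The byte loops stand in for whatever unrolled copy loop the real
implementation uses; the only point is where the check sits.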

