This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] aarch64: Optimized memcpy for Qualcomm Falkor processor
- From: Wilco Dijkstra <Wilco dot Dijkstra at arm dot com>
- To: Siddhesh Poyarekar <siddhesh at gotplt dot org>, "libc-alpha at sourceware dot org" <libc-alpha at sourceware dot org>
- Cc: nd <nd at arm dot com>
- Date: Fri, 23 Jun 2017 12:49:22 +0000
- Subject: Re: [PATCH] aarch64: Optimized memcpy for Qualcomm Falkor processor
Siddhesh Poyarekar wrote:
> This is an optimized memcpy implementation for the Qualcomm Falkor
> processor. The implementation improves specINT in SPEC2006 by 0.6%
> with omnetpp and xalancbmk leading at 6% and the overall impact being
> mostly positive on all benchmarks. With the glibc microbenchmarks the
> large copy benchmarks suffer slightly but bench-memcpy-random improves
> throughout by about 5%.
Those are odd results. Omnetpp doesn't use memcpy, and the xalancbmk profile has
memcpy at ~2%, so a 6% improvement can't be due to memcpy!
Similarly, the random memcpy benchmark only does a small number of copies
larger than 96 bytes (where your new code is used), so I find it hard to believe it
could make a difference. On Cortex-A57 I get identical performance for this patch and
the generic version (btw, __memcpy_thunderx is very close, while __memcpy_thunderx2
is 18% slower).
If prefetching in larger copies doesn't help Falkor in the large copy benchmark, then
what's the reasoning behind this patch?
Wilco