This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug libc/17801] memcpy is slower on amd64 than on i686 with a Sandy Bridge CPU


https://sourceware.org/bugzilla/show_bug.cgi?id=17801

--- Comment #2 from cvs-commit at gcc dot gnu.org <cvs-commit at gcc dot gnu.org> ---
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".

The branch, hjl/pr17711 has been updated
       via  56d25c11b64a97255a115901d136d753c86de24e (commit)
      from  a29c4064115e59bcf8c001c0b3dedfa8d49d3653 (commit)

The revisions listed above that are new to this repository have not
appeared in any other notification email, so we list them in full
below.

- Log -----------------------------------------------------------------
https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=56d25c11b64a97255a115901d136d753c86de24e

commit 56d25c11b64a97255a115901d136d753c86de24e
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Fri Jan 30 06:50:20 2015 -0800

    Use AVX unaligned memcpy only if AVX2 is available

    memcpy with unaligned 256-bit AVX register loads/stores is slow on older
    processors like Sandy Bridge.  This patch adds bit_AVX_Fast_Unaligned_Load
    and sets it only when AVX2 is available.
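
    A minimal sketch of that idea, assuming GCC's __builtin_cpu_supports;
    the feature word and bit name below are illustrative stand-ins, not
    the real init-arch macros:

        #include <stdio.h>

        /* Illustrative feature word; mirrors the idea of
           bit_AVX_Fast_Unaligned_Load without being the real macro.  */
        #define BIT_AVX_FAST_UNALIGNED_LOAD (1u << 0)

        static unsigned int cpu_features;

        static void
        init_cpu_features (void)
        {
          __builtin_cpu_init ();
          /* AVX alone is not enough: Sandy Bridge reports AVX but runs
             unaligned 256-bit loads/stores slowly.  AVX2 implies Haswell
             or later, where unaligned AVX access is fast.  */
          if (__builtin_cpu_supports ("avx2"))
            cpu_features |= BIT_AVX_FAST_UNALIGNED_LOAD;
        }

        int
        main (void)
        {
          init_cpu_features ();
          printf ("AVX_Fast_Unaligned_Load: %d\n",
                  !!(cpu_features & BIT_AVX_FAST_UNALIGNED_LOAD));
          return 0;
        }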

        [BZ #17801]
        * sysdeps/x86_64/multiarch/init-arch.c (__init_cpu_features):
        Set the bit_AVX_Fast_Unaligned_Load bit for AVX2.
        * sysdeps/x86_64/multiarch/init-arch.h (bit_AVX_Fast_Unaligned_Load):
        New.
        (index_AVX_Fast_Unaligned_Load): Likewise.
        (HAS_AVX_FAST_UNALIGNED_LOAD): Likewise.
        * sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check the
        bit_AVX_Fast_Unaligned_Load bit instead of the bit_AVX_Usable bit.
        * sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
        * sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
        * sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
        * sysdeps/x86_64/multiarch/memmove.c (__libc_memmove): Replace
        HAS_AVX with HAS_AVX_FAST_UNALIGNED_LOAD.
        * sysdeps/x86_64/multiarch/memmove_chk.c (__memmove_chk): Likewise.

-----------------------------------------------------------------------
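
On the selection side, the multiarch memcpy now tests the new bit instead
of bit_AVX_Usable before choosing the AVX unaligned variant.  A hedged C
approximation of that dispatch, using a GNU ifunc resolver with placeholder
variant functions (the C bodies are assumptions for illustration; glibc's
real __memcpy_avx_unaligned and __memcpy_sse2_unaligned are hand-written
assembly):

#include <stddef.h>
#include <string.h>

/* Placeholder variants standing in for the assembly implementations.  */
static void *
memcpy_avx_unaligned (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

static void *
memcpy_sse2_unaligned (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

/* Resolver: after the patch, the AVX variant is chosen only when AVX2
   is supported, not merely when AVX is usable.  */
static void *(*
resolve_memcpy (void)) (void *, const void *, size_t)
{
  __builtin_cpu_init ();
  return (__builtin_cpu_supports ("avx2")
          ? memcpy_avx_unaligned
          : memcpy_sse2_unaligned);
}

void *my_memcpy (void *, const void *, size_t)
     __attribute__ ((ifunc ("resolve_memcpy")));

The dynamic linker runs the resolver once at startup and binds my_memcpy to
whichever variant it returns, so the feature check costs nothing on later
calls.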

Summary of changes:
 ChangeLog                              |   18 ++++++++++++++++++
 sysdeps/x86_64/multiarch/init-arch.c   |    9 +++++++--
 sysdeps/x86_64/multiarch/init-arch.h   |    4 ++++
 sysdeps/x86_64/multiarch/memcpy.S      |    2 +-
 sysdeps/x86_64/multiarch/memcpy_chk.S  |    2 +-
 sysdeps/x86_64/multiarch/memmove.c     |    2 +-
 sysdeps/x86_64/multiarch/memmove_chk.c |    2 +-
 sysdeps/x86_64/multiarch/mempcpy.S     |    2 +-
 sysdeps/x86_64/multiarch/mempcpy_chk.S |    2 +-
 9 files changed, 35 insertions(+), 8 deletions(-)

-- 
You are receiving this mail because:
You are on the CC list for the bug.

