This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug dynamic-link/21258] Branch prediction in _dl_runtime_resolve_avx512_opt leads to lower CPU frequency


https://sourceware.org/bugzilla/show_bug.cgi?id=21258

--- Comment #6 from cvs-commit at gcc dot gnu.org <cvs-commit at gcc dot gnu.org> ---
This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".

The branch, hjl/pr21258/2.23 has been created
        at  883cadc5543ffd3a4537498b44c782ded8a4a4e8 (commit)

- Log -----------------------------------------------------------------
https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=883cadc5543ffd3a4537498b44c782ded8a4a4e8

commit 883cadc5543ffd3a4537498b44c782ded8a4a4e8
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Tue Mar 21 10:59:31 2017 -0700

    x86-64: Improve branch prediction in _dl_runtime_resolve_avx512_opt [BZ #21258]

    On Skylake server, _dl_runtime_resolve_avx512_opt is used to preserve
    the first 8 vector registers.  The code layout is

      if only %xmm0 - %xmm7 registers are used
         preserve %xmm0 - %xmm7 registers
      if only %ymm0 - %ymm7 registers are used
         preserve %ymm0 - %ymm7 registers
      preserve %zmm0 - %zmm7 registers

    Branch prediction always executes the fallthrough code path, so the
    %zmm0 - %zmm7 registers are preserved speculatively even when only the
    %xmm0 - %xmm7 registers are used.  This leads to lower CPU frequency on
    Skylake server.  This patch changes the fallthrough code path to
    preserve the %xmm0 - %xmm7 registers instead:

      if the whole %zmm0 - %zmm7 registers are used
         preserve %zmm0 - %zmm7 registers
      if only %ymm0 - %ymm7 registers are used
         preserve %ymm0 - %ymm7 registers
      preserve %xmm0 - %xmm7 registers

    Tested on Skylake server.

        [BZ #21258]
        * sysdeps/x86_64/dl-trampoline.S (_dl_runtime_resolve_opt):
        Define only if _dl_runtime_resolve is defined to
        _dl_runtime_resolve_sse_vex.
        * sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_opt):
        Fallthrough to _dl_runtime_resolve_sse_vex.

    (cherry picked from commit c15f8eb50cea7ad1a4ccece6e0982bf426d52c00)

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=83037ea1d9e84b1b44ed307f01cbb5eeac24e22d

commit 83037ea1d9e84b1b44ed307f01cbb5eeac24e22d
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Tue Aug 23 09:09:32 2016 -0700

    X86-64: Add _dl_runtime_resolve_avx[512]_{opt|slow} [BZ #20508]

    There is a transition penalty when SSE instructions are mixed with
    256-bit AVX or 512-bit AVX512 load instructions.  Since
    _dl_runtime_resolve_avx and _dl_runtime_profile_avx512 save/restore
    256-bit YMM/512-bit ZMM registers, lazy binding incurs this transition
    penalty when SSE instructions are used on AVX and AVX512 processors.

    To avoid the SSE transition penalty, if only the lower 128 bits of the
    first 8 vector registers are non-zero, we can preserve just the
    %xmm0 - %xmm7 registers, since their upper bits are known to be zero.

    On AVX and AVX512 processors which support XGETBV with ECX == 1, we
    can use it to check whether the upper 128 bits of the YMM registers or
    the upper 256 bits of the ZMM registers are zero.  We can then restore
    only the non-zero portion of the vector registers with AVX/AVX512 load
    instructions, which zero-extend the upper bits of the vector registers.

    This patch adds _dl_runtime_resolve_sse_vex which saves and restores
    XMM registers with 128-bit AVX store/load instructions.  It is used to
    preserve YMM/ZMM registers when only the lower 128 bits are non-zero.
    _dl_runtime_resolve_avx_opt and _dl_runtime_resolve_avx512_opt are added
    and used on AVX/AVX512 processors supporting XGETBV with ECX == 1 so
    that we store and load only the non-zero portion of vector registers.
    This avoids SSE transition penalty caused by _dl_runtime_resolve_avx and
    _dl_runtime_profile_avx512 when only the lower 128 bits of vector
    registers are used.

    _dl_runtime_resolve_avx_slow is added and used for AVX processors which
    don't support XGETBV with ECX == 1.  Since there is no SSE transition
    penalty on AVX512 processors which don't support XGETBV with ECX == 1,
    _dl_runtime_resolve_avx512_slow isn't provided.

        [BZ #20495]
        [BZ #20508]
        * sysdeps/x86/cpu-features.c (init_cpu_features): For Intel
        processors, set Use_dl_runtime_resolve_slow and set
        Use_dl_runtime_resolve_opt if XGETBV supports ECX == 1.
        * sysdeps/x86/cpu-features.h (bit_Use_dl_runtime_resolve_opt):
        New.
        (bit_Use_dl_runtime_resolve_slow): Likewise.
        (index_Use_dl_runtime_resolve_opt): Likewise.
        (index_Use_dl_runtime_resolve_slow): Likewise.
        * sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup): Use
        _dl_runtime_resolve_avx512_opt and _dl_runtime_resolve_avx_opt
        if Use_dl_runtime_resolve_opt is set.  Use
        _dl_runtime_resolve_slow if Use_dl_runtime_resolve_slow is set.
        * sysdeps/x86_64/dl-trampoline.S: Include <cpu-features.h>.
        (_dl_runtime_resolve_opt): New.  Defined for AVX and AVX512.
        (_dl_runtime_resolve): Add one for _dl_runtime_resolve_sse_vex.
        * sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx_slow):
        New.
        (_dl_runtime_resolve_opt): Likewise.
        (_dl_runtime_profile): Define only if _dl_runtime_profile is
        defined.

    (cherry picked from commit fb0f7a6755c1bfaec38f490fbfcaa39a66ee3604)

-----------------------------------------------------------------------

