This is the mail archive of the glibc-cvs@sourceware.org mailing list for the glibc project.



GNU C Library master sources branch master updated. glibc-2.21-476-g774488f


This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".

The branch, master has been updated
       via  774488f88aeed6b838fe29c3c7561433c242a3c9 (commit)
      from  6af25acc7b6313fd8934c3b2f0eb3da5a1c6eb6b (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
http://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=774488f88aeed6b838fe29c3c7561433c242a3c9

commit 774488f88aeed6b838fe29c3c7561433c242a3c9
Author: Andrew Senkevich <andrew.senkevich@intel.com>
Date:   Wed Jun 17 15:53:00 2015 +0300

    Vector logf for x86_64 and tests.
    
    Here is an implementation of vectorized logf, containing SSE, AVX,
    AVX2 and AVX512 versions according to the Vector ABI
    <https://groups.google.com/forum/#!topic/x86-64-abi/LmppCfN1rZ4>.
    
        * sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
        * sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
        redirections for logf.
        * sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
        * sysdeps/x86_64/fpu/Versions: New versions added.
        * sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
        * sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added
        build of SSE, AVX2 and AVX512 IFUNC versions.
        * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S: New file.
        * sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S: New file.
        * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S: New file.
        * sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S: New file.
        * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S: New file.
        * sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S: New file.
        * sysdeps/x86_64/fpu/svml_s_logf16_core.S: New file.
        * sysdeps/x86_64/fpu/svml_s_logf4_core.S: New file.
        * sysdeps/x86_64/fpu/svml_s_logf8_core.S: New file.
        * sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S: New file.
        * sysdeps/x86_64/fpu/svml_s_logf_data.S: New file.
        * sysdeps/x86_64/fpu/svml_s_logf_data.h: New file.
        * sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Vector logf tests.
        * sysdeps/x86_64/fpu/test-float-vlen16.c: Likewise.
        * sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
        * sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
        * sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
        * sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
        * sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
        * sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
        * NEWS: Mention addition of x86_64 vector logf.

diff --git a/ChangeLog b/ChangeLog
index bad022e..a0c2a9a 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,35 @@
 2015-06-17  Andrew Senkevich  <andrew.senkevich@intel.com>
 
+	* sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New symbols added.
+	* sysdeps/x86/fpu/bits/math-vector.h: Added SIMD declaration and asm
+	redirections for logf.
+	* sysdeps/x86_64/fpu/Makefile (libmvec-support): Added new files.
+	* sysdeps/x86_64/fpu/Versions: New versions added.
+	* sysdeps/x86_64/fpu/libm-test-ulps: Regenerated.
+	* sysdeps/x86_64/fpu/multiarch/Makefile (libmvec-sysdep_routines): Added
+	build of SSE, AVX2 and AVX512 IFUNC versions.
+	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S: New file.
+	* sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S: New file.
+	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S: New file.
+	* sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S: New file.
+	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S: New file.
+	* sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S: New file.
+	* sysdeps/x86_64/fpu/svml_s_logf16_core.S: New file.
+	* sysdeps/x86_64/fpu/svml_s_logf4_core.S: New file.
+	* sysdeps/x86_64/fpu/svml_s_logf8_core.S: New file.
+	* sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S: New file.
+	* sysdeps/x86_64/fpu/svml_s_logf_data.S: New file.
+	* sysdeps/x86_64/fpu/svml_s_logf_data.h: New file.
+	* sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c: Vector logf tests.
+	* sysdeps/x86_64/fpu/test-float-vlen16.c: Likewise.
+	* sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c: Likewise.
+	* sysdeps/x86_64/fpu/test-float-vlen4.c: Likewise.
+	* sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c: Likewise.
+	* sysdeps/x86_64/fpu/test-float-vlen8-avx2.c: Likewise.
+	* sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c: Likewise.
+	* sysdeps/x86_64/fpu/test-float-vlen8.c: Likewise.
+	* NEWS: Mention addition of x86_64 vector logf.
+
 	* bits/libm-simd-decl-stubs.h: Added stubs for log.
 	* math/bits/mathcalls.h: Added log declaration with __MATHCALL_VEC.
 	* sysdeps/unix/sysv/linux/x86_64/libmvec.abilist: New versions added.
diff --git a/NEWS b/NEWS
index 4c66687..c361309 100644
--- a/NEWS
+++ b/NEWS
@@ -53,7 +53,7 @@ Version 2.22
   condition in some applications.
 
 * Added vector math library named libmvec with the following vectorized x86_64
-  implementations: cos, cosf, sin, sinf, log.
+  implementations: cos, cosf, sin, sinf, log, logf.
   The library can be disabled with --disable-mathvec. Use of the functions is
   enabled with -fopenmp -ffast-math starting from -O1 for GCC version >= 4.9.0.
   The library is linked in as needed when using -lm (no need to specify -lmvec
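
As a user-side illustration of the NEWS entry above (the file name and loop
below are hypothetical, not part of this patch): with GCC >= 4.9.0, building
with -O2 -fopenmp -ffast-math and linking with -lm lets the auto-vectorizer
turn a plain logf loop into calls to the new _ZGV*_logf entry points in
libmvec.

    /* vlog.c -- build with: gcc -O2 -fopenmp -ffast-math vlog.c -lm  */
    #include <math.h>

    void
    vector_log (float *restrict out, const float *restrict in, int n)
    {
    #pragma omp simd
      for (int i = 0; i < n; i++)
        out[i] = logf (in[i]);
    }
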
diff --git a/sysdeps/unix/sysv/linux/x86_64/libmvec.abilist b/sysdeps/unix/sysv/linux/x86_64/libmvec.abilist
index 3357957..3593edc 100644
--- a/sysdeps/unix/sysv/linux/x86_64/libmvec.abilist
+++ b/sysdeps/unix/sysv/linux/x86_64/libmvec.abilist
@@ -4,18 +4,22 @@ GLIBC_2.22
  _ZGVbN2v_log F
  _ZGVbN2v_sin F
  _ZGVbN4v_cosf F
+ _ZGVbN4v_logf F
  _ZGVbN4v_sinf F
  _ZGVcN4v_cos F
  _ZGVcN4v_log F
  _ZGVcN4v_sin F
  _ZGVcN8v_cosf F
+ _ZGVcN8v_logf F
  _ZGVcN8v_sinf F
  _ZGVdN4v_cos F
  _ZGVdN4v_log F
  _ZGVdN4v_sin F
  _ZGVdN8v_cosf F
+ _ZGVdN8v_logf F
  _ZGVdN8v_sinf F
  _ZGVeN16v_cosf F
+ _ZGVeN16v_logf F
  _ZGVeN16v_sinf F
  _ZGVeN8v_cos F
  _ZGVeN8v_log F
diff --git a/sysdeps/x86/fpu/bits/math-vector.h b/sysdeps/x86/fpu/bits/math-vector.h
index ed85622..5c3e492 100644
--- a/sysdeps/x86/fpu/bits/math-vector.h
+++ b/sysdeps/x86/fpu/bits/math-vector.h
@@ -38,6 +38,8 @@
 #  define __DECL_SIMD_sinf __DECL_SIMD_x86_64
 #  undef __DECL_SIMD_log
 #  define __DECL_SIMD_log __DECL_SIMD_x86_64
+#  undef __DECL_SIMD_logf
+#  define __DECL_SIMD_logf __DECL_SIMD_x86_64
 
 /* Workaround to exclude unnecessary symbol aliases in libmvec
    while GCC creates the vector names based on scalar asm name.
@@ -47,6 +49,10 @@ __asm__ ("_ZGVbN2v___log_finite = _ZGVbN2v_log");
 __asm__ ("_ZGVcN4v___log_finite = _ZGVcN4v_log");
 __asm__ ("_ZGVdN4v___log_finite = _ZGVdN4v_log");
 __asm__ ("_ZGVeN8v___log_finite = _ZGVeN8v_log");
+__asm__ ("_ZGVbN4v___logf_finite = _ZGVbN4v_logf");
+__asm__ ("_ZGVcN8v___logf_finite = _ZGVcN8v_logf");
+__asm__ ("_ZGVdN8v___logf_finite = _ZGVdN8v_logf");
+__asm__ ("_ZGVeN16v___logf_finite = _ZGVeN16v_logf");
 
 # endif
 #endif
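
Why the __asm__ aliases in the hunk above are needed, sketched in C (a
simplified model, not part of the patch): under -ffast-math glibc redirects
the scalar logf to the asm name __logf_finite, and GCC mangles the vector
variant from that asm name, so object files end up referencing
_ZGVbN4v___logf_finite and friends; the aliases map those names back onto the
real _ZGV*_logf implementations, so libmvec does not have to export separate
*_finite symbols.

    /* Roughly what the -ffast-math redirection looks like (simplified):  */
    extern float logf (float) __asm__ ("__logf_finite");

    /* The vectorizer derives e.g. _ZGVbN4v___logf_finite from that asm
       name; the __asm__ ("_ZGVbN4v___logf_finite = _ZGVbN4v_logf") line
       above resolves such references.  */
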
diff --git a/sysdeps/x86_64/fpu/Makefile b/sysdeps/x86_64/fpu/Makefile
index a509746..b610e3f 100644
--- a/sysdeps/x86_64/fpu/Makefile
+++ b/sysdeps/x86_64/fpu/Makefile
@@ -8,7 +8,9 @@ libmvec-support += svml_d_cos2_core svml_d_cos4_core_avx \
 		   svml_s_sinf4_core svml_s_sinf8_core_avx \
 		   svml_s_sinf8_core svml_s_sinf16_core svml_s_sinf_data \
 		   svml_d_log2_core svml_d_log4_core_avx svml_d_log4_core \
-		   svml_d_log8_core svml_d_log_data \
+		   svml_d_log8_core svml_d_log_data svml_s_logf4_core \
+		   svml_s_logf8_core_avx svml_s_logf8_core svml_s_logf16_core \
+		   svml_s_logf_data \
 		   init-arch
 endif
 
diff --git a/sysdeps/x86_64/fpu/Versions b/sysdeps/x86_64/fpu/Versions
index 7bda47f..ecd1b70 100644
--- a/sysdeps/x86_64/fpu/Versions
+++ b/sysdeps/x86_64/fpu/Versions
@@ -5,5 +5,6 @@ libmvec {
     _ZGVbN2v_log; _ZGVcN4v_log; _ZGVdN4v_log; _ZGVeN8v_log;
     _ZGVbN4v_cosf; _ZGVcN8v_cosf; _ZGVdN8v_cosf; _ZGVeN16v_cosf;
     _ZGVbN4v_sinf; _ZGVcN8v_sinf; _ZGVdN8v_sinf; _ZGVeN16v_sinf;
+    _ZGVbN4v_logf; _ZGVcN8v_logf; _ZGVdN8v_logf; _ZGVeN16v_logf;
   }
 }
diff --git a/sysdeps/x86_64/fpu/libm-test-ulps b/sysdeps/x86_64/fpu/libm-test-ulps
index 949a099..1812370 100644
--- a/sysdeps/x86_64/fpu/libm-test-ulps
+++ b/sysdeps/x86_64/fpu/libm-test-ulps
@@ -1847,17 +1847,25 @@ ifloat: 2
 ildouble: 1
 ldouble: 1
 
+Function: "log_vlen16":
+float: 3
+
 Function: "log_vlen2":
 double: 1
 
 Function: "log_vlen4":
 double: 1
+float: 3
 
 Function: "log_vlen4_avx2":
 double: 1
 
 Function: "log_vlen8":
 double: 1
+float: 3
+
+Function: "log_vlen8_avx2":
+float: 2
 
 Function: "pow":
 float: 3
diff --git a/sysdeps/x86_64/fpu/multiarch/Makefile b/sysdeps/x86_64/fpu/multiarch/Makefile
index 16d93ca..5fc6ea3 100644
--- a/sysdeps/x86_64/fpu/multiarch/Makefile
+++ b/sysdeps/x86_64/fpu/multiarch/Makefile
@@ -60,5 +60,7 @@ libmvec-sysdep_routines += svml_d_cos2_core_sse4 svml_d_cos4_core_avx2 \
 			   svml_d_log8_core_avx512 \
 			   svml_s_cosf4_core_sse4 svml_s_cosf8_core_avx2 \
 			   svml_s_cosf16_core_avx512 svml_s_sinf4_core_sse4 \
-			   svml_s_sinf8_core_avx2 svml_s_sinf16_core_avx512
+			   svml_s_sinf8_core_avx2 svml_s_sinf16_core_avx512 \
+			   svml_s_logf4_core_sse4 svml_s_logf8_core_avx2 \
+			   svml_s_logf16_core_avx512
 endif
diff --git a/sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c b/sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S
similarity index 50%
copy from sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c
copy to sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S
index a85f588..8756750 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c
+++ b/sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S
@@ -1,4 +1,4 @@
-/* Wrapper part of tests for AVX2 ISA versions of vector math functions.
+/* Multiple versions of vectorized logf.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -16,14 +16,24 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen8.h"
-#include "test-vec-loop.h"
-#include <immintrin.h>
+#include <sysdep.h>
+#include <init-arch.h>
 
-#undef VEC_SUFF
-#define VEC_SUFF _vlen8_avx2
+	.text
+ENTRY (_ZGVeN16v_logf)
+        .type   _ZGVeN16v_logf, @gnu_indirect_function
+        cmpl    $0, KIND_OFFSET+__cpu_features(%rip)
+        jne     1f
+        call    __init_cpu_features
+1:      leaq    _ZGVeN16v_logf_skx(%rip), %rax
+        testl   $bit_AVX512DQ_Usable, __cpu_features+FEATURE_OFFSET+index_AVX512DQ_Usable(%rip)
+        jnz     3f
+2:      leaq    _ZGVeN16v_logf_knl(%rip), %rax
+        testl   $bit_AVX512F_Usable, __cpu_features+FEATURE_OFFSET+index_AVX512F_Usable(%rip)
+        jnz     3f
+        leaq    _ZGVeN16v_logf_avx2_wrapper(%rip), %rax
+3:      ret
+END (_ZGVeN16v_logf)
 
-#define VEC_TYPE __m256
-
-VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVdN8v_cosf)
-VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVdN8v_sinf)
+#define _ZGVeN16v_logf _ZGVeN16v_logf_avx2_wrapper
+#include "../svml_s_logf16_core.S"
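
The ENTRY block above is an IFUNC resolver: when the symbol is resolved it
returns the SKX, KNL, or AVX2-wrapper variant according to the CPU feature
bits in __cpu_features. A rough C model of that selection (the feature-test
helpers below are hypothetical stand-ins for the bit tests in the assembly):

    /* Hypothetical C model of the resolver; the real code is the
       hand-written @gnu_indirect_function assembly above.  */
    extern void _ZGVeN16v_logf_skx (void);
    extern void _ZGVeN16v_logf_knl (void);
    extern void _ZGVeN16v_logf_avx2_wrapper (void);
    extern int cpu_has_avx512dq (void);   /* AVX512DQ_Usable bit */
    extern int cpu_has_avx512f (void);    /* AVX512F_Usable bit */

    typedef void (*logf16_fn) (void);  /* stands in for the 16-float signature */

    static logf16_fn
    resolve_logf16 (void)
    {
      if (cpu_has_avx512dq ())
        return _ZGVeN16v_logf_skx;
      if (cpu_has_avx512f ())
        return _ZGVeN16v_logf_knl;
      return _ZGVeN16v_logf_avx2_wrapper;
    }
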
diff --git a/sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S b/sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
new file mode 100644
index 0000000..86fcab6
--- /dev/null
+++ b/sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
@@ -0,0 +1,416 @@
+/* Function logf vectorized with AVX-512. KNL and SKX versions.
+   Copyright (C) 2014-2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include "svml_s_logf_data.h"
+#include "svml_s_wrapper_impl.h"
+
+	.text
+ENTRY (_ZGVeN16v_logf_knl)
+#ifndef HAVE_AVX512_ASM_SUPPORT
+WRAPPER_IMPL_AVX512 _ZGVdN8v_logf
+#else
+/*
+   ALGORITHM DESCRIPTION:
+
+     log(x) = exponent_x*log(2) + log(mantissa_x),         if mantissa_x<4/3
+     log(x) = (exponent_x+1)*log(2) + log(0.5*mantissa_x), if mantissa_x>4/3
+
+     R = mantissa_x - 1,     if mantissa_x<4/3
+     R = 0.5*mantissa_x - 1, if mantissa_x>4/3
+     |R|< 1/3
+
+     log(1+R) is approximated as a polynomial: degree 9 for 1-ulp,
+     degree 7 for 4-ulp, degree 3 for half-precision.  */
+
+        pushq     %rbp
+        cfi_adjust_cfa_offset (8)
+        cfi_rel_offset (%rbp, 0)
+        movq      %rsp, %rbp
+        cfi_def_cfa_register (%rbp)
+        andq      $-64, %rsp
+        subq      $1280, %rsp
+        movq      __svml_slog_data@GOTPCREL(%rip), %rax
+        movl      $-1, %ecx
+
+/* reduction: compute r,n */
+        vpsubd    _iBrkValue(%rax), %zmm0, %zmm2
+        vmovups   _sPoly_7(%rax), %zmm7
+        vpandd    _iOffExpoMask(%rax), %zmm2, %zmm3
+
+/* exponent_x (mantissa_x<4/3) or exponent_x+1 (mantissa_x>4/3) */
+        vpsrad    $23, %zmm2, %zmm4
+
+/* check for working range,
+   set special argument mask (denormals/zero/Inf/NaN)
+ */
+        vpaddd    _iHiDelta(%rax), %zmm0, %zmm1
+
+/* mantissa_x (mantissa_x<4/3), or 0.5*mantissa_x (mantissa_x>4/3) */
+        vpaddd    _iBrkValue(%rax), %zmm3, %zmm6
+        vpcmpd    $1, _iLoRange(%rax), %zmm1, %k1
+        vcvtdq2ps {rn-sae}, %zmm4, %zmm1
+
+/* reduced argument R */
+        vsubps       _sOne(%rax), %zmm6, %zmm8
+        vpbroadcastd %ecx, %zmm5{%k1}{z}
+
+/* polynomial evaluation starts here */
+        vfmadd213ps _sPoly_6(%rax), %zmm8, %zmm7
+        vptestmd    %zmm5, %zmm5, %k0
+        kmovw       %k0, %ecx
+        vfmadd213ps _sPoly_5(%rax), %zmm8, %zmm7
+        vfmadd213ps _sPoly_4(%rax), %zmm8, %zmm7
+        vfmadd213ps _sPoly_3(%rax), %zmm8, %zmm7
+        vfmadd213ps _sPoly_2(%rax), %zmm8, %zmm7
+        vfmadd213ps _sPoly_1(%rax), %zmm8, %zmm7
+        vmulps      %zmm8, %zmm7, %zmm9
+
+/* polynomial evaluation end */
+        vfmadd213ps %zmm8, %zmm8, %zmm9
+
+/*
+   final reconstruction:
+   add exponent_value*log2 to polynomial result
+ */
+        vfmadd132ps _sLn2(%rax), %zmm9, %zmm1
+        testl       %ecx, %ecx
+        jne         .LBL_1_3
+
+.LBL_1_2:
+        cfi_remember_state
+        vmovaps   %zmm1, %zmm0
+        movq      %rbp, %rsp
+        cfi_def_cfa_register (%rsp)
+        popq      %rbp
+        cfi_adjust_cfa_offset (-8)
+        cfi_restore (%rbp)
+        ret
+
+.LBL_1_3:
+        cfi_restore_state
+        vmovups   %zmm0, 1152(%rsp)
+        vmovups   %zmm1, 1216(%rsp)
+        je        .LBL_1_2
+
+        xorb      %dl, %dl
+        kmovw     %k4, 1048(%rsp)
+        xorl      %eax, %eax
+        kmovw     %k5, 1040(%rsp)
+        kmovw     %k6, 1032(%rsp)
+        kmovw     %k7, 1024(%rsp)
+        vmovups   %zmm16, 960(%rsp)
+        vmovups   %zmm17, 896(%rsp)
+        vmovups   %zmm18, 832(%rsp)
+        vmovups   %zmm19, 768(%rsp)
+        vmovups   %zmm20, 704(%rsp)
+        vmovups   %zmm21, 640(%rsp)
+        vmovups   %zmm22, 576(%rsp)
+        vmovups   %zmm23, 512(%rsp)
+        vmovups   %zmm24, 448(%rsp)
+        vmovups   %zmm25, 384(%rsp)
+        vmovups   %zmm26, 320(%rsp)
+        vmovups   %zmm27, 256(%rsp)
+        vmovups   %zmm28, 192(%rsp)
+        vmovups   %zmm29, 128(%rsp)
+        vmovups   %zmm30, 64(%rsp)
+        vmovups   %zmm31, (%rsp)
+        movq      %rsi, 1064(%rsp)
+        movq      %rdi, 1056(%rsp)
+        movq      %r12, 1096(%rsp)
+        cfi_offset_rel_rsp (12, 1096)
+        movb      %dl, %r12b
+        movq      %r13, 1088(%rsp)
+        cfi_offset_rel_rsp (13, 1088)
+        movl      %ecx, %r13d
+        movq      %r14, 1080(%rsp)
+        cfi_offset_rel_rsp (14, 1080)
+        movl      %eax, %r14d
+        movq      %r15, 1072(%rsp)
+        cfi_offset_rel_rsp (15, 1072)
+        cfi_remember_state
+
+.LBL_1_6:
+        btl       %r14d, %r13d
+        jc        .LBL_1_12
+
+.LBL_1_7:
+        lea       1(%r14), %esi
+        btl       %esi, %r13d
+        jc        .LBL_1_10
+
+.LBL_1_8:
+        addb      $1, %r12b
+        addl      $2, %r14d
+        cmpb      $16, %r12b
+        jb        .LBL_1_6
+
+        kmovw     1048(%rsp), %k4
+        movq      1064(%rsp), %rsi
+        kmovw     1040(%rsp), %k5
+        movq      1056(%rsp), %rdi
+        kmovw     1032(%rsp), %k6
+        movq      1096(%rsp), %r12
+        cfi_restore (%r12)
+        movq      1088(%rsp), %r13
+        cfi_restore (%r13)
+        kmovw     1024(%rsp), %k7
+        vmovups   960(%rsp), %zmm16
+        vmovups   896(%rsp), %zmm17
+        vmovups   832(%rsp), %zmm18
+        vmovups   768(%rsp), %zmm19
+        vmovups   704(%rsp), %zmm20
+        vmovups   640(%rsp), %zmm21
+        vmovups   576(%rsp), %zmm22
+        vmovups   512(%rsp), %zmm23
+        vmovups   448(%rsp), %zmm24
+        vmovups   384(%rsp), %zmm25
+        vmovups   320(%rsp), %zmm26
+        vmovups   256(%rsp), %zmm27
+        vmovups   192(%rsp), %zmm28
+        vmovups   128(%rsp), %zmm29
+        vmovups   64(%rsp), %zmm30
+        vmovups   (%rsp), %zmm31
+        movq      1080(%rsp), %r14
+        cfi_restore (%r14)
+        movq      1072(%rsp), %r15
+        cfi_restore (%r15)
+        vmovups   1216(%rsp), %zmm1
+        jmp       .LBL_1_2
+
+.LBL_1_10:
+        cfi_restore_state
+        movzbl    %r12b, %r15d
+        vmovss    1156(%rsp,%r15,8), %xmm0
+        call      logf@PLT
+        vmovss    %xmm0, 1220(%rsp,%r15,8)
+        jmp       .LBL_1_8
+
+.LBL_1_12:
+        movzbl    %r12b, %r15d
+        vmovss    1152(%rsp,%r15,8), %xmm0
+        call      logf@PLT
+        vmovss    %xmm0, 1216(%rsp,%r15,8)
+        jmp       .LBL_1_7
+#endif
+END (_ZGVeN16v_logf_knl)
+
+ENTRY (_ZGVeN16v_logf_skx)
+#ifndef HAVE_AVX512_ASM_SUPPORT
+WRAPPER_IMPL_AVX512 _ZGVdN8v_logf
+#else
+/*
+   ALGORITHM DESCRIPTION:
+
+     log(x) = exponent_x*log(2) + log(mantissa_x),         if mantissa_x<4/3
+     log(x) = (exponent_x+1)*log(2) + log(0.5*mantissa_x), if mantissa_x>4/3
+
+     R = mantissa_x - 1,     if mantissa_x<4/3
+     R = 0.5*mantissa_x - 1, if mantissa_x>4/3
+     |R|< 1/3
+
+     log(1+R) is approximated as a polynomial: degree 9 for 1-ulp,
+     degree 7 for 4-ulp, degree 3 for half-precision.  */
+
+        pushq     %rbp
+        cfi_adjust_cfa_offset (8)
+        cfi_rel_offset (%rbp, 0)
+        movq      %rsp, %rbp
+        cfi_def_cfa_register (%rbp)
+        andq      $-64, %rsp
+        subq      $1280, %rsp
+        movq      __svml_slog_data@GOTPCREL(%rip), %rax
+        vmovups   .L_2il0floatpacket.7(%rip), %zmm6
+        vmovups _iBrkValue(%rax), %zmm4
+        vmovups _sPoly_7(%rax), %zmm8
+
+/*
+   check for working range,
+   set special argument mask (denormals/zero/Inf/NaN)
+ */
+        vpaddd _iHiDelta(%rax), %zmm0, %zmm1
+
+/* reduction: compute r,n */
+        vpsubd    %zmm4, %zmm0, %zmm2
+        vpcmpd    $5, _iLoRange(%rax), %zmm1, %k1
+
+/* exponent_x (mantissa_x<4/3) or exponent_x+1 (mantissa_x>4/3) */
+        vpsrad    $23, %zmm2, %zmm5
+        vpandd _iOffExpoMask(%rax), %zmm2, %zmm3
+
+/* mantissa_x (mantissa_x<4/3), or 0.5*mantissa_x (mantissa_x>4/3) */
+        vpaddd    %zmm4, %zmm3, %zmm7
+
+/* reduced argument R */
+        vsubps _sOne(%rax), %zmm7, %zmm9
+
+/* polynomial evaluation starts here */
+        vfmadd213ps _sPoly_6(%rax), %zmm9, %zmm8
+        vfmadd213ps _sPoly_5(%rax), %zmm9, %zmm8
+        vfmadd213ps _sPoly_4(%rax), %zmm9, %zmm8
+        vfmadd213ps _sPoly_3(%rax), %zmm9, %zmm8
+        vfmadd213ps _sPoly_2(%rax), %zmm9, %zmm8
+        vfmadd213ps _sPoly_1(%rax), %zmm9, %zmm8
+        vmulps    %zmm9, %zmm8, %zmm10
+
+/* polynomial evaluation end */
+        vfmadd213ps %zmm9, %zmm9, %zmm10
+        vpandnd   %zmm1, %zmm1, %zmm6{%k1}
+        vptestmd  %zmm6, %zmm6, %k0
+        vcvtdq2ps {rn-sae}, %zmm5, %zmm1
+        kmovw     %k0, %ecx
+
+/*
+   final reconstruction:
+   add exponent_value*log2 to polynomial result
+ */
+        vfmadd132ps _sLn2(%rax), %zmm10, %zmm1
+        testl     %ecx, %ecx
+        jne       .LBL_2_3
+
+.LBL_2_2:
+        cfi_remember_state
+        vmovaps   %zmm1, %zmm0
+        movq      %rbp, %rsp
+        cfi_def_cfa_register (%rsp)
+        popq      %rbp
+        cfi_adjust_cfa_offset (-8)
+        cfi_restore (%rbp)
+        ret
+
+.LBL_2_3:
+        cfi_restore_state
+        vmovups   %zmm0, 1152(%rsp)
+        vmovups   %zmm1, 1216(%rsp)
+        je        .LBL_2_2
+
+        xorb      %dl, %dl
+        xorl      %eax, %eax
+        kmovw     %k4, 1048(%rsp)
+        kmovw     %k5, 1040(%rsp)
+        kmovw     %k6, 1032(%rsp)
+        kmovw     %k7, 1024(%rsp)
+        vmovups   %zmm16, 960(%rsp)
+        vmovups   %zmm17, 896(%rsp)
+        vmovups   %zmm18, 832(%rsp)
+        vmovups   %zmm19, 768(%rsp)
+        vmovups   %zmm20, 704(%rsp)
+        vmovups   %zmm21, 640(%rsp)
+        vmovups   %zmm22, 576(%rsp)
+        vmovups   %zmm23, 512(%rsp)
+        vmovups   %zmm24, 448(%rsp)
+        vmovups   %zmm25, 384(%rsp)
+        vmovups   %zmm26, 320(%rsp)
+        vmovups   %zmm27, 256(%rsp)
+        vmovups   %zmm28, 192(%rsp)
+        vmovups   %zmm29, 128(%rsp)
+        vmovups   %zmm30, 64(%rsp)
+        vmovups   %zmm31, (%rsp)
+        movq      %rsi, 1064(%rsp)
+        movq      %rdi, 1056(%rsp)
+        movq      %r12, 1096(%rsp)
+        cfi_offset_rel_rsp (12, 1096)
+        movb      %dl, %r12b
+        movq      %r13, 1088(%rsp)
+        cfi_offset_rel_rsp (13, 1088)
+        movl      %ecx, %r13d
+        movq      %r14, 1080(%rsp)
+        cfi_offset_rel_rsp (14, 1080)
+        movl      %eax, %r14d
+        movq      %r15, 1072(%rsp)
+        cfi_offset_rel_rsp (15, 1072)
+        cfi_remember_state
+
+.LBL_2_6:
+        btl       %r14d, %r13d
+        jc        .LBL_2_12
+
+.LBL_2_7:
+        lea       1(%r14), %esi
+        btl       %esi, %r13d
+        jc        .LBL_2_10
+
+.LBL_2_8:
+        incb      %r12b
+        addl      $2, %r14d
+        cmpb      $16, %r12b
+        jb        .LBL_2_6
+
+        kmovw     1048(%rsp), %k4
+        kmovw     1040(%rsp), %k5
+        kmovw     1032(%rsp), %k6
+        kmovw     1024(%rsp), %k7
+        vmovups   960(%rsp), %zmm16
+        vmovups   896(%rsp), %zmm17
+        vmovups   832(%rsp), %zmm18
+        vmovups   768(%rsp), %zmm19
+        vmovups   704(%rsp), %zmm20
+        vmovups   640(%rsp), %zmm21
+        vmovups   576(%rsp), %zmm22
+        vmovups   512(%rsp), %zmm23
+        vmovups   448(%rsp), %zmm24
+        vmovups   384(%rsp), %zmm25
+        vmovups   320(%rsp), %zmm26
+        vmovups   256(%rsp), %zmm27
+        vmovups   192(%rsp), %zmm28
+        vmovups   128(%rsp), %zmm29
+        vmovups   64(%rsp), %zmm30
+        vmovups   (%rsp), %zmm31
+        vmovups   1216(%rsp), %zmm1
+        movq      1064(%rsp), %rsi
+        movq      1056(%rsp), %rdi
+        movq      1096(%rsp), %r12
+        cfi_restore (%r12)
+        movq      1088(%rsp), %r13
+        cfi_restore (%r13)
+        movq      1080(%rsp), %r14
+        cfi_restore (%r14)
+        movq      1072(%rsp), %r15
+        cfi_restore (%r15)
+        jmp       .LBL_2_2
+
+.LBL_2_10:
+        cfi_restore_state
+        movzbl    %r12b, %r15d
+        vmovss    1156(%rsp,%r15,8), %xmm0
+        vzeroupper
+        vmovss    1156(%rsp,%r15,8), %xmm0
+
+        call      logf@PLT
+
+        vmovss    %xmm0, 1220(%rsp,%r15,8)
+        jmp       .LBL_2_8
+
+.LBL_2_12:
+        movzbl    %r12b, %r15d
+        vmovss    1152(%rsp,%r15,8), %xmm0
+        vzeroupper
+        vmovss    1152(%rsp,%r15,8), %xmm0
+
+        call      logf@PLT
+
+        vmovss    %xmm0, 1216(%rsp,%r15,8)
+        jmp       .LBL_2_7
+
+#endif
+END (_ZGVeN16v_logf_skx)
+
+	.section .rodata, "a"
+.L_2il0floatpacket.7:
+	.long	0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff,0xffffffff
+	.type	.L_2il0floatpacket.7,@object
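
The reduction and polynomial described in the algorithm comments above can be
modelled in scalar C. The sketch below (not library code) uses the constants
from svml_s_logf_data.S and omits the special-argument path (zero, denormals,
Inf, NaN) that the assembly detects with _iHiDelta/_iLoRange and hands off to
scalar logf:

    #include <stdint.h>
    #include <string.h>

    static float
    logf_model (float x)
    {
      uint32_t ix, off, im;
      int32_t n;
      float m, r, p;

      memcpy (&ix, &x, sizeof ix);
      off = ix - 0x3f2aaaab;                 /* _iBrkValue: bits of 2/3  */
      n = (int32_t) off >> 23;               /* exponent_x (+1 if mantissa_x > 4/3)  */
      im = (off & 0x007fffff) + 0x3f2aaaab;  /* _iOffExpoMask: mantissa in [2/3, 4/3)  */
      memcpy (&m, &im, sizeof m);
      r = m - 1.0f;                          /* reduced argument R, |R| < 1/3  */

      /* Degree-7 polynomial, coefficients _sPoly_7 .. _sPoly_1.  */
      p = -0.15177205f;
      p = p * r + 0.16964881f;
      p = p * r - 0.16462457f;
      p = p * r + 0.19822504f;
      p = p * r - 0.25004664f;
      p = p * r + 0.33336565f;
      p = p * r - 0.5f;

      /* log(x) ~= n*log(2) + R + R^2*P(R), with _sLn2 = log(2) in SP.  */
      return n * 0.69314718f + (r + r * r * p);
    }
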
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c b/sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S
similarity index 56%
copy from sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
copy to sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S
index 3a0fa6a..153ed8e 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
+++ b/sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S
@@ -1,4 +1,4 @@
-/* Wrapper part of tests for SSE ISA versions of vector math functions.
+/* Multiple versions of vectorized logf.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -16,11 +16,23 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen4.h"
-#include "test-vec-loop.h"
-#include <immintrin.h>
+#include <sysdep.h>
+#include <init-arch.h>
 
-#define VEC_TYPE __m128
+	.text
+ENTRY (_ZGVbN4v_logf)
+        .type   _ZGVbN4v_logf, @gnu_indirect_function
+        cmpl    $0, KIND_OFFSET+__cpu_features(%rip)
+        jne     1f
+        call    __init_cpu_features
+1:      leaq    _ZGVbN4v_logf_sse4(%rip), %rax
+        testl   $bit_SSE4_1, __cpu_features+CPUID_OFFSET+index_SSE4_1(%rip)
+        jz      2f
+        ret
+2:      leaq    _ZGVbN4v_logf_sse2(%rip), %rax
+        ret
+END (_ZGVbN4v_logf)
+libmvec_hidden_def (_ZGVbN4v_logf)
 
-VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVbN4v_cosf)
-VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVbN4v_sinf)
+#define _ZGVbN4v_logf _ZGVbN4v_logf_sse2
+#include "../svml_s_logf4_core.S"
diff --git a/sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S b/sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
new file mode 100644
index 0000000..68f1103
--- /dev/null
+++ b/sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
@@ -0,0 +1,194 @@
+/* Function logf vectorized with SSE4.
+   Copyright (C) 2014-2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include "svml_s_logf_data.h"
+
+	.text
+ENTRY (_ZGVbN4v_logf_sse4)
+/*
+   ALGORITHM DESCRIPTION:
+
+     log(x) = exponent_x*log(2) + log(mantissa_x),         if mantissa_x<4/3
+     log(x) = (exponent_x+1)*log(2) + log(0.5*mantissa_x), if mantissa_x>4/3
+
+     R = mantissa_x - 1,     if mantissa_x<4/3
+     R = 0.5*mantissa_x - 1, if mantissa_x>4/3
+     |R|< 1/3
+
+     log(1+R) is approximated as a polynomial: degree 9 for 1-ulp,
+     degree 7 for 4-ulp, degree 3 for half-precision.  */
+
+        pushq     %rbp
+        cfi_adjust_cfa_offset (8)
+        cfi_rel_offset (%rbp, 0)
+        movq      %rsp, %rbp
+        cfi_def_cfa_register (%rbp)
+        andq      $-64, %rsp
+        subq      $320, %rsp
+
+/* reduction: compute r,n */
+        movaps    %xmm0, %xmm2
+
+/* check for working range,
+   set special argument mask (denormals/zero/Inf/NaN) */
+        movq      __svml_slog_data@GOTPCREL(%rip), %rax
+        movdqu _iHiDelta(%rax), %xmm1
+        movdqu _iLoRange(%rax), %xmm4
+        paddd     %xmm0, %xmm1
+        movdqu _iBrkValue(%rax), %xmm3
+        pcmpgtd   %xmm1, %xmm4
+        movdqu _iOffExpoMask(%rax), %xmm1
+        psubd     %xmm3, %xmm2
+        pand      %xmm2, %xmm1
+
+/* exponent_x (mantissa_x<4/3) or exponent_x+1 (mantissa_x>4/3) */
+        psrad     $23, %xmm2
+        paddd     %xmm3, %xmm1
+        movups _sPoly_7(%rax), %xmm5
+
+/* mantissa_x (mantissa_x<4/3), or 0.5*mantissa_x (mantissa_x>4/3) */
+        cvtdq2ps  %xmm2, %xmm6
+
+/* reduced argument R */
+        subps _sOne(%rax), %xmm1
+        movmskps  %xmm4, %ecx
+
+/* final reconstruction:
+   add exponent_value*log2 to polynomial result */
+        mulps _sLn2(%rax), %xmm6
+
+/* polynomial evaluation starts here */
+        mulps     %xmm1, %xmm5
+        addps _sPoly_6(%rax), %xmm5
+        mulps     %xmm1, %xmm5
+        addps _sPoly_5(%rax), %xmm5
+        mulps     %xmm1, %xmm5
+        addps _sPoly_4(%rax), %xmm5
+        mulps     %xmm1, %xmm5
+        addps _sPoly_3(%rax), %xmm5
+        mulps     %xmm1, %xmm5
+        addps _sPoly_2(%rax), %xmm5
+        mulps     %xmm1, %xmm5
+        addps _sPoly_1(%rax), %xmm5
+        mulps     %xmm1, %xmm5
+
+/* polynomial evaluation end */
+        mulps     %xmm1, %xmm5
+        addps     %xmm5, %xmm1
+        addps     %xmm6, %xmm1
+        testl     %ecx, %ecx
+        jne       .LBL_1_3
+
+.LBL_1_2:
+        cfi_remember_state
+        movdqa    %xmm1, %xmm0
+        movq      %rbp, %rsp
+        cfi_def_cfa_register (%rsp)
+        popq      %rbp
+        cfi_adjust_cfa_offset (-8)
+        cfi_restore (%rbp)
+        ret
+
+.LBL_1_3:
+        cfi_restore_state
+        movups    %xmm0, 192(%rsp)
+        movups    %xmm1, 256(%rsp)
+        je        .LBL_1_2
+
+        xorb      %dl, %dl
+        xorl      %eax, %eax
+        movups    %xmm8, 112(%rsp)
+        movups    %xmm9, 96(%rsp)
+        movups    %xmm10, 80(%rsp)
+        movups    %xmm11, 64(%rsp)
+        movups    %xmm12, 48(%rsp)
+        movups    %xmm13, 32(%rsp)
+        movups    %xmm14, 16(%rsp)
+        movups    %xmm15, (%rsp)
+        movq      %rsi, 136(%rsp)
+        movq      %rdi, 128(%rsp)
+        movq      %r12, 168(%rsp)
+        cfi_offset_rel_rsp (12, 168)
+        movb      %dl, %r12b
+        movq      %r13, 160(%rsp)
+        cfi_offset_rel_rsp (13, 160)
+        movl      %ecx, %r13d
+        movq      %r14, 152(%rsp)
+        cfi_offset_rel_rsp (14, 152)
+        movl      %eax, %r14d
+        movq      %r15, 144(%rsp)
+        cfi_offset_rel_rsp (15, 144)
+        cfi_remember_state
+
+.LBL_1_6:
+        btl       %r14d, %r13d
+        jc        .LBL_1_12
+
+.LBL_1_7:
+        lea       1(%r14), %esi
+        btl       %esi, %r13d
+        jc        .LBL_1_10
+
+.LBL_1_8:
+        incb      %r12b
+        addl      $2, %r14d
+        cmpb      $16, %r12b
+        jb        .LBL_1_6
+
+        movups    112(%rsp), %xmm8
+        movups    96(%rsp), %xmm9
+        movups    80(%rsp), %xmm10
+        movups    64(%rsp), %xmm11
+        movups    48(%rsp), %xmm12
+        movups    32(%rsp), %xmm13
+        movups    16(%rsp), %xmm14
+        movups    (%rsp), %xmm15
+        movq      136(%rsp), %rsi
+        movq      128(%rsp), %rdi
+        movq      168(%rsp), %r12
+        cfi_restore (%r12)
+        movq      160(%rsp), %r13
+        cfi_restore (%r13)
+        movq      152(%rsp), %r14
+        cfi_restore (%r14)
+        movq      144(%rsp), %r15
+        cfi_restore (%r15)
+        movups    256(%rsp), %xmm1
+        jmp       .LBL_1_2
+
+.LBL_1_10:
+        cfi_restore_state
+        movzbl    %r12b, %r15d
+        movss     196(%rsp,%r15,8), %xmm0
+
+        call      logf@PLT
+
+        movss     %xmm0, 260(%rsp,%r15,8)
+        jmp       .LBL_1_8
+
+.LBL_1_12:
+        movzbl    %r12b, %r15d
+        movss     192(%rsp,%r15,8), %xmm0
+
+        call      logf@PLT
+
+        movss     %xmm0, 256(%rsp,%r15,8)
+        jmp       .LBL_1_7
+
+END (_ZGVbN4v_logf_sse4)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c b/sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S
similarity index 51%
copy from sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
copy to sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S
index 3a0fa6a..6f50bf6 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
+++ b/sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S
@@ -1,4 +1,4 @@
-/* Wrapper part of tests for SSE ISA versions of vector math functions.
+/* Multiple versions of vectorized logf.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -10,17 +10,29 @@
    The GNU C Library is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
+    Lesser General Public License for more details.
 
    You should have received a copy of the GNU Lesser General Public
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen4.h"
-#include "test-vec-loop.h"
-#include <immintrin.h>
+#include <sysdep.h>
+#include <init-arch.h>
 
-#define VEC_TYPE __m128
+	.text
+ENTRY (_ZGVdN8v_logf)
+        .type   _ZGVdN8v_logf, @gnu_indirect_function
+        cmpl    $0, KIND_OFFSET+__cpu_features(%rip)
+        jne     1f
+        call    __init_cpu_features
+1:      leaq    _ZGVdN8v_logf_avx2(%rip), %rax
+        testl   $bit_AVX2_Usable, __cpu_features+FEATURE_OFFSET+index_AVX2_Usable(%rip)
+        jz      2f
+        ret
+2:      leaq    _ZGVdN8v_logf_sse_wrapper(%rip), %rax
+        ret
+END (_ZGVdN8v_logf)
+libmvec_hidden_def (_ZGVdN8v_logf)
 
-VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVbN4v_cosf)
-VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVbN4v_sinf)
+#define _ZGVdN8v_logf _ZGVdN8v_logf_sse_wrapper
+#include "../svml_s_logf8_core.S"
diff --git a/sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S b/sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
new file mode 100644
index 0000000..1f08b42
--- /dev/null
+++ b/sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
@@ -0,0 +1,184 @@
+/* Function logf vectorized with AVX2.
+   Copyright (C) 2014-2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include "svml_s_logf_data.h"
+
+	.text
+ENTRY(_ZGVdN8v_logf_avx2)
+/*
+   ALGORITHM DESCRIPTION:
+
+    log(x) = exponent_x*log(2) + log(mantissa_x),         if mantissa_x<4/3
+    log(x) = (exponent_x+1)*log(2) + log(0.5*mantissa_x), if mantissa_x>4/3
+
+    R = mantissa_x - 1,     if mantissa_x<4/3
+    R = 0.5*mantissa_x - 1, if mantissa_x>4/3
+    |R|< 1/3
+
+    log(1+R) is approximated as a polynomial: degree 9 for 1-ulp,
+    degree 7 for 4-ulp, degree 3 for half-precision.  */
+
+        pushq     %rbp
+        cfi_adjust_cfa_offset (8)
+        cfi_rel_offset (%rbp, 0)
+        movq      %rsp, %rbp
+        cfi_def_cfa_register (%rbp)
+        andq      $-64, %rsp
+        subq      $448, %rsp
+        movq      __svml_slog_data@GOTPCREL(%rip), %rax
+        vmovaps   %ymm0, %ymm2
+        vmovups _iBrkValue(%rax), %ymm6
+        vmovups _iLoRange(%rax), %ymm1
+/* check for working range,
+   set special argument mask (denormals/zero/Inf/NaN) */
+        vpaddd _iHiDelta(%rax), %ymm2, %ymm7
+
+/* reduction: compute r,n */
+        vpsubd    %ymm6, %ymm2, %ymm4
+
+/* exponent_x (mantissa_x<4/3) or exponent_x+1 (mantissa_x>4/3) */
+        vpsrad    $23, %ymm4, %ymm3
+        vpand _iOffExpoMask(%rax), %ymm4, %ymm5
+        vmovups _sPoly_7(%rax), %ymm4
+        vcvtdq2ps %ymm3, %ymm0
+
+/* mantissa_x (mantissa_x<4/3), or 0.5*mantissa_x (mantissa_x>4/3) */
+        vpaddd    %ymm6, %ymm5, %ymm3
+
+/* reduced argument R */
+        vsubps _sOne(%rax), %ymm3, %ymm5
+
+/* polynomial evaluation starts here */
+        vfmadd213ps _sPoly_6(%rax), %ymm5, %ymm4
+        vfmadd213ps _sPoly_5(%rax), %ymm5, %ymm4
+        vfmadd213ps _sPoly_4(%rax), %ymm5, %ymm4
+        vfmadd213ps _sPoly_3(%rax), %ymm5, %ymm4
+        vfmadd213ps _sPoly_2(%rax), %ymm5, %ymm4
+        vfmadd213ps _sPoly_1(%rax), %ymm5, %ymm4
+        vmulps    %ymm5, %ymm4, %ymm6
+
+/* polynomial evaluation end */
+        vfmadd213ps %ymm5, %ymm5, %ymm6
+        vpcmpgtd  %ymm7, %ymm1, %ymm1
+        vmovmskps %ymm1, %ecx
+
+/* final reconstruction:
+   add exponent_value*log2 to polynomial result */
+        vfmadd132ps _sLn2(%rax), %ymm6, %ymm0
+        testl     %ecx, %ecx
+        jne       .LBL_1_3
+
+.LBL_1_2:
+        cfi_remember_state
+        movq      %rbp, %rsp
+        cfi_def_cfa_register (%rsp)
+        popq      %rbp
+        cfi_adjust_cfa_offset (-8)
+        cfi_restore (%rbp)
+        ret
+
+.LBL_1_3:
+        cfi_restore_state
+        vmovups   %ymm2, 320(%rsp)
+        vmovups   %ymm0, 384(%rsp)
+        je        .LBL_1_2
+
+        xorb      %dl, %dl
+        xorl      %eax, %eax
+        vmovups   %ymm8, 224(%rsp)
+        vmovups   %ymm9, 192(%rsp)
+        vmovups   %ymm10, 160(%rsp)
+        vmovups   %ymm11, 128(%rsp)
+        vmovups   %ymm12, 96(%rsp)
+        vmovups   %ymm13, 64(%rsp)
+        vmovups   %ymm14, 32(%rsp)
+        vmovups   %ymm15, (%rsp)
+        movq      %rsi, 264(%rsp)
+        movq      %rdi, 256(%rsp)
+        movq      %r12, 296(%rsp)
+        cfi_offset_rel_rsp (12, 296)
+        movb      %dl, %r12b
+        movq      %r13, 288(%rsp)
+        cfi_offset_rel_rsp (13, 288)
+        movl      %ecx, %r13d
+        movq      %r14, 280(%rsp)
+        cfi_offset_rel_rsp (14, 280)
+        movl      %eax, %r14d
+        movq      %r15, 272(%rsp)
+        cfi_offset_rel_rsp (15, 272)
+        cfi_remember_state
+
+.LBL_1_6:
+        btl       %r14d, %r13d
+        jc        .LBL_1_12
+
+.LBL_1_7:
+        lea       1(%r14), %esi
+        btl       %esi, %r13d
+        jc        .LBL_1_10
+
+.LBL_1_8:
+        incb      %r12b
+        addl      $2, %r14d
+        cmpb      $16, %r12b
+        jb        .LBL_1_6
+
+        vmovups   224(%rsp), %ymm8
+        vmovups   192(%rsp), %ymm9
+        vmovups   160(%rsp), %ymm10
+        vmovups   128(%rsp), %ymm11
+        vmovups   96(%rsp), %ymm12
+        vmovups   64(%rsp), %ymm13
+        vmovups   32(%rsp), %ymm14
+        vmovups   (%rsp), %ymm15
+        vmovups   384(%rsp), %ymm0
+        movq      264(%rsp), %rsi
+        movq      256(%rsp), %rdi
+        movq      296(%rsp), %r12
+        cfi_restore (%r12)
+        movq      288(%rsp), %r13
+        cfi_restore (%r13)
+        movq      280(%rsp), %r14
+        cfi_restore (%r14)
+        movq      272(%rsp), %r15
+        cfi_restore (%r15)
+        jmp       .LBL_1_2
+
+.LBL_1_10:
+        cfi_restore_state
+        movzbl    %r12b, %r15d
+        vmovss    324(%rsp,%r15,8), %xmm0
+        vzeroupper
+
+        call      logf@PLT
+
+        vmovss    %xmm0, 388(%rsp,%r15,8)
+        jmp       .LBL_1_8
+
+.LBL_1_12:
+        movzbl    %r12b, %r15d
+        vmovss    320(%rsp,%r15,8), %xmm0
+        vzeroupper
+
+        call      logf@PLT
+
+        vmovss    %xmm0, 384(%rsp,%r15,8)
+        jmp       .LBL_1_7
+
+END(_ZGVdN8v_logf_avx2)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4.c b/sysdeps/x86_64/fpu/svml_s_logf16_core.S
similarity index 79%
copy from sysdeps/x86_64/fpu/test-float-vlen4.c
copy to sysdeps/x86_64/fpu/svml_s_logf16_core.S
index 3863787..47ae785 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4.c
+++ b/sysdeps/x86_64/fpu/svml_s_logf16_core.S
@@ -1,4 +1,4 @@
-/* Tests for SSE ISA versions of vector math functions.
+/* Function logf vectorized with AVX-512. Wrapper to AVX2 version.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -16,9 +16,10 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen4.h"
+#include <sysdep.h>
+#include "svml_s_wrapper_impl.h"
 
-#define TEST_VECTOR_cosf 1
-#define TEST_VECTOR_sinf 1
-
-#include "libm-test.c"
+	.text
+ENTRY (_ZGVeN16v_logf)
+WRAPPER_IMPL_AVX512 _ZGVdN8v_logf
+END (_ZGVeN16v_logf)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen16.c b/sysdeps/x86_64/fpu/svml_s_logf4_core.S
similarity index 77%
copy from sysdeps/x86_64/fpu/test-float-vlen16.c
copy to sysdeps/x86_64/fpu/svml_s_logf4_core.S
index 8988cdb..09be406 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen16.c
+++ b/sysdeps/x86_64/fpu/svml_s_logf4_core.S
@@ -1,4 +1,4 @@
-/* Tests for AVX-512 ISA versions of vector math functions.
+/* Function logf vectorized with SSE2.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -16,11 +16,15 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen16.h"
 
-#define TEST_VECTOR_cosf 1
-#define TEST_VECTOR_sinf 1
+#include <sysdep.h>
+#include "svml_s_wrapper_impl.h"
 
-#define REQUIRE_AVX512F
+	.text
+ENTRY (_ZGVbN4v_logf)
+WRAPPER_IMPL_SSE2 logf
+END (_ZGVbN4v_logf)
 
-#include "libm-test.c"
+#ifndef USE_MULTIARCH
+ libmvec_hidden_def (_ZGVbN4v_logf)
+#endif
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4.c b/sysdeps/x86_64/fpu/svml_s_logf8_core.S
similarity index 75%
copy from sysdeps/x86_64/fpu/test-float-vlen4.c
copy to sysdeps/x86_64/fpu/svml_s_logf8_core.S
index 3863787..cf4e9be 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4.c
+++ b/sysdeps/x86_64/fpu/svml_s_logf8_core.S
@@ -1,4 +1,4 @@
-/* Tests for SSE ISA versions of vector math functions.
+/* Function logf vectorized with AVX2, wrapper version.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -16,9 +16,14 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen4.h"
+#include <sysdep.h>
+#include "svml_s_wrapper_impl.h"
 
-#define TEST_VECTOR_cosf 1
-#define TEST_VECTOR_sinf 1
+	.text
+ENTRY (_ZGVdN8v_logf)
+WRAPPER_IMPL_AVX _ZGVbN4v_logf
+END (_ZGVdN8v_logf)
 
-#include "libm-test.c"
+#ifndef USE_MULTIARCH
+ libmvec_hidden_def (_ZGVdN8v_logf)
+#endif
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4.c b/sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S
similarity index 79%
copy from sysdeps/x86_64/fpu/test-float-vlen4.c
copy to sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S
index 3863787..7ab572b 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4.c
+++ b/sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S
@@ -1,4 +1,4 @@
-/* Tests for SSE ISA versions of vector math functions.
+/* Function logf vectorized in AVX ISA as wrapper to SSE4 ISA version.
    Copyright (C) 2014-2015 Free Software Foundation, Inc.
    This file is part of the GNU C Library.
 
@@ -16,9 +16,10 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */
 
-#include "test-float-vlen4.h"
+#include <sysdep.h>
+#include "svml_s_wrapper_impl.h"
 
-#define TEST_VECTOR_cosf 1
-#define TEST_VECTOR_sinf 1
-
-#include "libm-test.c"
+        .text
+ENTRY(_ZGVcN8v_logf)
+WRAPPER_IMPL_AVX _ZGVbN4v_logf
+END(_ZGVcN8v_logf)
diff --git a/sysdeps/x86_64/fpu/svml_s_logf_data.S b/sysdeps/x86_64/fpu/svml_s_logf_data.S
new file mode 100644
index 0000000..1e7f701
--- /dev/null
+++ b/sysdeps/x86_64/fpu/svml_s_logf_data.S
@@ -0,0 +1,102 @@
+/* Data for vector function logf.
+   Copyright (C) 2014-2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include "svml_s_logf_data.h"
+
+	.section .rodata, "a"
+	.align 64
+
+/* Data table for vector implementations of function logf.
+   The table may contain polynomial, reduction, lookup coefficients and
+   other coefficients obtained through different methods of research and
+   experimental work.  */
+
+	.globl __svml_slog_data
+__svml_slog_data:
+
+/* Polynomial sPoly[] coefficients:
+ * -5.0000000000000000000000000e-01 */
+float_vector _sPoly_1 0xbf000000
+
+/* 3.3336564898490905761718750e-01 */
+float_vector _sPoly_2 0x3eaaaee7
+
+/* -2.5004664063453674316406250e-01 */
+float_vector _sPoly_3 0xbe80061d
+
+/* 1.9822503626346588134765625e-01 */
+float_vector _sPoly_4 0x3e4afb81
+
+/* -1.6462457180023193359375000e-01 */
+float_vector _sPoly_5 0xbe289358
+
+/* 1.6964881122112274169921875e-01 */
+float_vector _sPoly_6 0x3e2db86b
+
+/* -1.5177205204963684082031250e-01 */
+float_vector _sPoly_7 0xbe1b6a22
+
+/* Constant for work range check: Delta 80000000-7f800000 */
+float_vector _iHiDelta 0x00800000
+
+/* Constant for work range check: 00800000 + Delta */
+float_vector _iLoRange 0x01000000
+
+/* Mantissa break point  SP 2/3 */
+float_vector _iBrkValue 0x3f2aaaab
+
+/* SP significand mask */
+float_vector _iOffExpoMask 0x007fffff
+
+/* 1.0f */
+float_vector _sOne 0x3f800000
+
+/* SP log(2) */
+float_vector _sLn2 0x3f317218
+
+/* SP infinity, +/- */
+.if .-__svml_slog_data != _sInfs
+.err
+.endif
+	.long	0x7f800000
+	.long	0xff800000
+	.rept	56
+	.byte	0
+	.endr
+
+/* SP one, +/- */
+.if .-__svml_slog_data != _sOnes
+.err
+.endif
+	.long	0x3f800000
+	.long	0xbf800000
+	.rept	56
+	.byte	0
+	.endr
+
+/* SP zero +/- */
+.if .-__svml_slog_data != _sZeros
+.err
+.endif
+	.long	0x00000000
+	.long	0x80000000
+	.rept	56
+	.byte	0
+	.endr
+	.type	__svml_slog_data,@object
+	.size __svml_slog_data,.-__svml_slog_data
diff --git a/sysdeps/x86_64/fpu/svml_s_logf_data.h b/sysdeps/x86_64/fpu/svml_s_logf_data.h
new file mode 100644
index 0000000..d42411a
--- /dev/null
+++ b/sysdeps/x86_64/fpu/svml_s_logf_data.h
@@ -0,0 +1,48 @@
+/* Offsets for data table for vectorized function logf.
+   Copyright (C) 2014-2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef S_LOGF_DATA_H
+#define S_LOGF_DATA_H
+
+#define _sPoly_1                      	0
+#define _sPoly_2                      	64
+#define _sPoly_3                      	128
+#define _sPoly_4                      	192
+#define _sPoly_5                      	256
+#define _sPoly_6                      	320
+#define _sPoly_7                      	384
+#define _iHiDelta                     	448
+#define _iLoRange                     	512
+#define _iBrkValue                    	576
+#define _iOffExpoMask                 	640
+#define _sOne                         	704
+#define _sLn2                         	768
+#define _sInfs                        	832
+#define _sOnes                        	896
+#define _sZeros                       	960
+
+.macro float_vector offset value
+.if .-__svml_slog_data != \offset
+.err
+.endif
+.rept 16
+.long \value
+.endr
+.endm
+
+#endif
diff --git a/sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c b/sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c
index 801b03c..72435e4 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c
@@ -24,3 +24,4 @@
 
 VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVeN16v_cosf)
 VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVeN16v_sinf)
+VECTOR_WRAPPER (WRAPPER_NAME (logf), _ZGVeN16v_logf)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen16.c b/sysdeps/x86_64/fpu/test-float-vlen16.c
index 8988cdb..10da5fe 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen16.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen16.c
@@ -20,6 +20,7 @@
 
 #define TEST_VECTOR_cosf 1
 #define TEST_VECTOR_sinf 1
+#define TEST_VECTOR_logf 1
 
 #define REQUIRE_AVX512F
 
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c b/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
index 3a0fa6a..f51575d 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c
@@ -24,3 +24,4 @@
 
 VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVbN4v_cosf)
 VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVbN4v_sinf)
+VECTOR_WRAPPER (WRAPPER_NAME (logf), _ZGVbN4v_logf)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen4.c b/sysdeps/x86_64/fpu/test-float-vlen4.c
index 3863787..5cb293f 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen4.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen4.c
@@ -20,5 +20,6 @@
 
 #define TEST_VECTOR_cosf 1
 #define TEST_VECTOR_sinf 1
+#define TEST_VECTOR_logf 1
 
 #include "libm-test.c"
diff --git a/sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c b/sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c
index a85f588..7515a59 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen8-avx2-wrappers.c
@@ -27,3 +27,4 @@
 
 VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVdN8v_cosf)
 VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVdN8v_sinf)
+VECTOR_WRAPPER (WRAPPER_NAME (logf), _ZGVdN8v_logf)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen8-avx2.c b/sysdeps/x86_64/fpu/test-float-vlen8-avx2.c
index db0b5e5..1026d63 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen8-avx2.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen8-avx2.c
@@ -23,6 +23,7 @@
 
 #define TEST_VECTOR_cosf 1
 #define TEST_VECTOR_sinf 1
+#define TEST_VECTOR_logf 1
 
 #define REQUIRE_AVX2
 
diff --git a/sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c b/sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c
index fb7f696..6dde1a2 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c
@@ -24,3 +24,4 @@
 
 VECTOR_WRAPPER (WRAPPER_NAME (cosf), _ZGVcN8v_cosf)
 VECTOR_WRAPPER (WRAPPER_NAME (sinf), _ZGVcN8v_sinf)
+VECTOR_WRAPPER (WRAPPER_NAME (logf), _ZGVcN8v_logf)
diff --git a/sysdeps/x86_64/fpu/test-float-vlen8.c b/sysdeps/x86_64/fpu/test-float-vlen8.c
index f893c5b..3fe10ad 100644
--- a/sysdeps/x86_64/fpu/test-float-vlen8.c
+++ b/sysdeps/x86_64/fpu/test-float-vlen8.c
@@ -20,5 +20,6 @@
 
 #define TEST_VECTOR_cosf 1
 #define TEST_VECTOR_sinf 1
+#define TEST_VECTOR_logf 1
 
 #include "libm-test.c"

-----------------------------------------------------------------------

Summary of changes:
 ChangeLog                                          |   30 ++
 NEWS                                               |    2 +-
 sysdeps/unix/sysv/linux/x86_64/libmvec.abilist     |    4 +
 sysdeps/x86/fpu/bits/math-vector.h                 |    6 +
 sysdeps/x86_64/fpu/Makefile                        |    4 +-
 sysdeps/x86_64/fpu/Versions                        |    1 +
 sysdeps/x86_64/fpu/libm-test-ulps                  |    8 +
 sysdeps/x86_64/fpu/multiarch/Makefile              |    4 +-
 sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S  |   39 ++
 .../fpu/multiarch/svml_s_logf16_core_avx512.S      |  416 ++++++++++++++++++++
 sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S   |   38 ++
 .../x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S  |  194 +++++++++
 sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S   |   38 ++
 .../x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S  |  184 +++++++++
 sysdeps/x86_64/fpu/svml_s_logf16_core.S            |   25 ++
 sysdeps/x86_64/fpu/svml_s_logf4_core.S             |   30 ++
 sysdeps/x86_64/fpu/svml_s_logf8_core.S             |   29 ++
 sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S         |   25 ++
 sysdeps/x86_64/fpu/svml_s_logf_data.S              |  102 +++++
 sysdeps/x86_64/fpu/svml_s_logf_data.h              |   48 +++
 sysdeps/x86_64/fpu/test-float-vlen16-wrappers.c    |    1 +
 sysdeps/x86_64/fpu/test-float-vlen16.c             |    1 +
 sysdeps/x86_64/fpu/test-float-vlen4-wrappers.c     |    1 +
 sysdeps/x86_64/fpu/test-float-vlen4.c              |    1 +
 .../x86_64/fpu/test-float-vlen8-avx2-wrappers.c    |    1 +
 sysdeps/x86_64/fpu/test-float-vlen8-avx2.c         |    1 +
 sysdeps/x86_64/fpu/test-float-vlen8-wrappers.c     |    1 +
 sysdeps/x86_64/fpu/test-float-vlen8.c              |    1 +
 28 files changed, 1232 insertions(+), 3 deletions(-)
 create mode 100644 sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core.S
 create mode 100644 sysdeps/x86_64/fpu/multiarch/svml_s_logf16_core_avx512.S
 create mode 100644 sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core.S
 create mode 100644 sysdeps/x86_64/fpu/multiarch/svml_s_logf4_core_sse4.S
 create mode 100644 sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core.S
 create mode 100644 sysdeps/x86_64/fpu/multiarch/svml_s_logf8_core_avx2.S
 create mode 100644 sysdeps/x86_64/fpu/svml_s_logf16_core.S
 create mode 100644 sysdeps/x86_64/fpu/svml_s_logf4_core.S
 create mode 100644 sysdeps/x86_64/fpu/svml_s_logf8_core.S
 create mode 100644 sysdeps/x86_64/fpu/svml_s_logf8_core_avx.S
 create mode 100644 sysdeps/x86_64/fpu/svml_s_logf_data.S
 create mode 100644 sysdeps/x86_64/fpu/svml_s_logf_data.h


hooks/post-receive
-- 
GNU C Library master sources

