This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- From: "H.J. Lu" <hjl.tools@gmail.com>
- To: "Pawar, Amit" <Amit.Pawar@amd.com>
- Cc: libc-alpha@sourceware.org
- Date: Thu, 17 Mar 2016 04:53:29 -0700
- Subject: Re: [PATCH x86_64] Update memcpy, mempcpy and memmove selection order for Excavator CPU BZ #19583
- Authentication-results: sourceware.org; auth=none
- References: <SN1PR12MB073325E2FB320E3CECD22660978B0@SN1PR12MB0733.namprd12.prod.outlook.com>
On Thu, Mar 17, 2016 at 3:52 AM, Pawar, Amit <Amit.Pawar@amd.com> wrote:
> This patch is a fix for https://sourceware.org/bugzilla/show_bug.cgi?id=19583
>
> As per the suggestion, this defines new bit_arch_Prefer_Fast_Copy_Backward and index_arch_Prefer_Fast_Copy_Backward feature-bit macros to update the IFUNC selection order for the memcpy, memcpy_chk, mempcpy, mempcpy_chk, memmove and memmove_chk functions without affecting other targets. PFA the patch and ChangeLog files to this mail. If OK, please commit it from my side.
>
A few comments:
1. Since there is a bit_arch_Fast_Copy_Backward already, please
add bit_arch_Avoid_AVX_Fast_Unaligned_Load instead.
2. Please verify that the index_arch_XXX values are the same and use
a single index_arch_XXX to set all the bits. There are examples in
sysdeps/x86/cpu-features.c.
3. Please use proper ChangeLog format:
* file (name of function, macro, ...): What changed.
--
H.J.