This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH RFC V2] Improve 64bit memcpy/memmove for Core i7 with unaligned avx instruction
- From: Ondřej Bílka <neleai at seznam dot cz>
- To: Liubov Dmitrieva <liubov dot dmitrieva at gmail dot com>
- Cc: Ling Ma <ling dot ma dot program at gmail dot com>, GNU C Library <libc-alpha at sourceware dot org>, Ma Ling <ling dot ml at alibaba-inc dot com>
- Date: Fri, 12 Jul 2013 16:47:55 +0200
- Subject: Re: [PATCH RFC V2] Improve 64bit memcpy/memmove for Core i7 with unaligned avx instruction
- References: <1373547096-8095-1-git-send-email-ling dot ma dot program at gmail dot com> <CAHjhQ91fVakxKNkEniz0AL-Srn3kNtLf+5AaB+VHozy5_z5zeA at mail dot gmail dot com> <20130712032333 dot GA5839 at domone dot PAOCY> <CAHjhQ92Fig0_drm_Ftj8n3v17Pvia+a5-OyODXHJQq=Vkz1HPw at mail dot gmail dot com>
On Fri, Jul 12, 2013 at 10:09:03AM +0400, Liubov Dmitrieva wrote:
> >> We need to check performance on Core i7 with AVX before installing this.
> >> As far as I understood, you checked on Haswell only? But AVX works on
> >> more architectures than AVX2.
> >Using AVX for memcpy before Haswell is pointless: stores and loads are
> >split into 128-bit operations anyway, and by going 256-bit you only
> >complicate the scheduler.
>
> But we can't name it the avx2 version and check the avx2 flag if it
> doesn't use avx2. Perhaps we should introduce a Slow AVX flag and set it
> for pre-Haswell architectures, if you are sure that using AVX before
> Haswell is pointless.
>
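A flag like that would be tested the same way as the existing __cpu_features
bits in the ifunc selectors. A hypothetical sketch, assuming such a bit were
added (bit_Slow_AVX, index_Slow_AVX and __memcpy_avx_unaligned are
illustrative names for things a patch would have to add, not existing
glibc symbols):

	testl	$bit_Slow_AVX, __cpu_features+FEATURE_OFFSET+index_Slow_AVX(%rip)
	jnz	1f			# AVX marked slow (pre-Haswell): fall back
	leaq	__memcpy_avx_unaligned(%rip), %rax
	ret
1:	leaq	__memcpy_ssse3(%rip), %rax
	ret
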
You should use the same optimization that gcc uses for loads, namely
vmovups (%rdi), %xmm0                      # load low 16 bytes
vinsertf128 $1, 16(%rdi), %ymm0, %ymm0     # load high 16 bytes into the upper half
As we do not modify the data in ymm, the next logical step is to split
these halves into two separate registers and have an SSE implementation
instead of an AVX one.
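A minimal sketch of that split as a plain SSE copy step (the %rsi = source,
%rdi = destination assignment and the 32-byte step size are illustrative,
not from the patch):

	movups	(%rsi), %xmm0		# low 16 bytes in one register
	movups	16(%rsi), %xmm1		# high 16 bytes in a second register
	movups	%xmm0, (%rdi)		# two independent 128-bit stores
	movups	%xmm1, 16(%rdi)

This issues roughly the same four 128-bit memory operations that the AVX
pair decodes into on pre-Haswell cores, without tying both halves to one
ymm register.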
> --
> Liubov
>