This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: memcpy performance regressions 2.19 -> 2.24(5)


Hi H.J.,

I was on vacation, sorry for the slow reply.  The updated benchmark
still shows the same behavior, thanks.

I'll try my hand at creating a patch that makes the variable
__x86_shared_non_temporal_threshold a tunable.  We'll need it for
internal experiments anyway.
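
For illustration, here is a self-contained toy of the shape such a
tunable would take (not glibc code: the TOY_NT_THRESHOLD name and the
plain getenv mechanism are made up, and the real tunables framework is
more structured than this):

  /* Toy model of a "tunable" threshold -- illustrative only.  A
     library initializes a size threshold from a default policy, and
     an environment variable can override it at startup.  */
  #include <stdio.h>
  #include <stdlib.h>

  static long nt_threshold;               /* toy stand-in for the knob */

  static void
  init_nt_threshold (long shared_cache_size)
  {
    nt_threshold = shared_cache_size * 6;      /* current default policy */

    const char *s = getenv ("TOY_NT_THRESHOLD");    /* hypothetical name */
    if (s != NULL && atol (s) > 0)
      nt_threshold = atol (s);
  }

  int
  main (void)
  {
    init_nt_threshold (8L * 1024 * 1024);  /* pretend 8 MiB shared cache */
    printf ("non-temporal threshold: %ld bytes\n", nt_threshold);
    return 0;
  }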

Best,
Erich

On Fri, May 12, 2017 at 2:20 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Fri, May 12, 2017 at 1:21 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Fri, May 12, 2017 at 12:43 PM, Erich Elsen <eriche@google.com> wrote:
>>> HJ - yes, the benchmark still shows the same behavior.  I did have to modify
>>> the build to add -std=c++11.
>>
>> I updated the hjl/x86/optimize branch with memcpy_benchmark2.cc,
>> changing its output for easier comparison.  Please take a look to see
>> if it is still valid.
>>
>> H.J.
>>> Carlos - Maybe the first step is to add a tunable that allows
>>> selecting the non-temporal-store size threshold without changing which
>>> implementation is selected.  I can work on submitting this patch.
>
> In the current code we have
>
>   /* The large memcpy micro benchmark in glibc shows that 6 times of
>      shared cache size is the approximate value above which non-temporal
>      store becomes faster.  */
>   __x86_shared_non_temporal_threshold = __x86_shared_cache_size * 6;
>
> I did the measurement on an 8-core processor, where the 6x default
> comes to 6 / 8 = 0.75 of the total shared cache.  But on processors
> with 56 cores, 6 / 56 (about 0.11 of the shared cache) may be too small.
>
> H.J.
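
One hypothetical way to keep the threshold at a fixed fraction of the
total cache regardless of core count, sketched against the snippet
quoted above ("threads" is an assumed variable holding the number of
cores sharing the cache, and 3/4 is the 6/8 ratio from the 8-core
measurement; this is an untested illustration, not a patch):

  /* Hypothetical rescaling: tie the threshold to the TOTAL shared
     cache so it does not shrink as core counts grow.  */
  __x86_shared_non_temporal_threshold
    = __x86_shared_cache_size * threads * 3 / 4;
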
>>> On Wed, May 10, 2017 at 7:17 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>>>>
>>>> On 05/10/2017 01:33 PM, H.J. Lu wrote:
>>>> > On Tue, May 9, 2017 at 4:48 PM, Erich Elsen <eriche@google.com> wrote:
>>>> >> store is a net win even though it causes a 2-3x decrease in
>>>> >> single-threaded performance for some processors?  Or how else is
>>>> >> the decision about the threshold made?
>>>> >
>>>> > There is no perfect number to make everyone happy.  I am open
>>>> > to suggestions on how to improve the compromise.
>>>> >
>>>> > H.J.
>>>>
>>>> I agree with H.J., there is a compromise to be made here. Having a single
>>>> process thrash the box by taking all of the memory bandwidth might be
>>>> sensible for a microservice, but glibc has to default to something that
>>>> works well on average.
>>>>
>>>> With the new tunables infrastructure we can start talking about ways
>>>> in which a tunable could influence IFUNC selection, though, allowing
>>>> users some choice in tweaking for single-threaded or multi-threaded,
>>>> single-user or multi-user workloads, etc.
>>>>
>>>> What I would like to see as the output of any discussion is a set of
>>>> microbenchmarks (benchtests/) added to glibc that are the distillation
>>>> of whatever workload we're talking about here. This is crucial to
>>>> giving the community a way to test from release to release that we
>>>> don't regress performance.
>>>>
>>>> Unless you want to sign up to test your workload at every release,
>>>> we need this kind of microbenchmark addition. And microbenchmarks are
>>>> dead-easy to integrate with glibc, so most people should have no
>>>> excuse.
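
In that spirit, a minimal size-sweep harness for memcpy (an editorial
sketch, not one of the existing benchtests; the sizes and iteration
counts are arbitrary placeholders):

  /* Sweep memcpy over sizes bracketing typical non-temporal
     thresholds and report rough bandwidth.  Build: gcc -O2 sweep.c  */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  static double
  now_sec (void)
  {
    struct timespec ts;
    clock_gettime (CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
  }

  int
  main (void)
  {
    for (size_t size = 1 << 20; size <= (size_t) 256 << 20; size <<= 1)
      {
        char *src = malloc (size);
        char *dst = malloc (size);
        if (src == NULL || dst == NULL)
          return 1;
        memset (src, 1, size);

        int iters = 16;
        double t0 = now_sec ();
        for (int i = 0; i < iters; i++)
          memcpy (dst, src, size);
        double dt = now_sec () - t0;

        printf ("%10zu bytes: %8.1f MB/s\n",
                size, (double) size * iters / dt / 1e6);
        free (src);
        free (dst);
      }
    return 0;
  }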
>>>>
>>>> The hardware vendors and distros who want particular performance
>>>> tests are putting such tests in place (representative of their
>>>> users), and direct end-users who want particular performance are also
>>>> adding tests.
>>>>
>>>> --
>>>> Cheers,
>>>> Carlos.
>>>
>>>
>>
>>
>>
>> --
>> H.J.
>
>
>
> --
> H.J.

