This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCHv3] PowerPC: Fix a race condition when eliding a lock
- From: Adhemerval Zanella <adhemerval dot zanella at linaro dot org>
- To: "Paul E. Murphy" <murphyp at linux dot vnet dot ibm dot com>
- Cc: libc-alpha at sourceware dot org
- Date: Wed, 2 Sep 2015 09:45:53 -0300
- Subject: Re: [PATCHv3] PowerPC: Fix a race condition when eliding a lock
- References: <55D742D3 dot 9050600 at redhat dot com> <1440439895-11812-1-git-send-email-tuliom at linux dot vnet dot ibm dot com> <1441136302 dot 5089 dot 182 dot camel at otta> <55E60E88 dot 50104 at linaro dot org> <55E61799 dot 6010707 at linux dot vnet dot ibm dot com> <55E61F73 dot 3000405 at linaro dot org> <55E62A2A dot 5050704 at linux dot vnet dot ibm dot com>
On 01-09-2015 19:43, Paul E. Murphy wrote:
>
>
> On 09/01/2015 04:58 PM, Adhemerval Zanella wrote:
>
>>> I'm not convinced any of the existing codes should be non-persistent:
>>>
>>> A pthread_mutex_trylock attempt within an elided pthread_mutex_lock is
>>> guaranteed to fail try_tbegin times if there is no contention on the lock.
>>> Aborts get increasingly expensive as you increase the amount of speculative
>>> execution.
>>>
>>> A busy lock likely indicates contention in the critical section which
>>> does not benefit from elision, I'd err on the side of a persistent
>>> failure.
>>>
>>
>> I do not have that much information to decide, although I do see that for
>> pthread_mutex_t at least, _ABORT_LOCK_BUSY is not persistent if the critical
>> region does not generate enough contention.
>
> This seems to violate my understanding of the adaptive algorithm. I agree, there
> is a possibility another thread might successfully elide while adapt_count != 0,
> *futex == 0, and it happens to be within the retry loop.
>
> Transactions are not free. The question is: Is it cheaper to risk a few more
> failures in hopes of the transaction succeeding, or just grabbing the lock?
Well, that is exactly the question. I do not know; it will depend heavily on the
kind of algorithm and the contention you have.
>
> This becomes less desirable with increasing values of try_tbegin. I've found
> 11 to be the value which factors out the "false" failures under SMT8.
>
If the workloads you are testing show that a higher try_tbegin is better, then
indeed we can either increase it or set _ABORT_LOCK_BUSY as a persistent failure
so the algorithm bails out and uses locks instead of retrying.
Which kind of benchmark or workloads are you using to evaluate it?