This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCHv3] PowerPC: Fix a race condition when eliding a lock
- From: "Tulio Magno Quites Machado Filho" <tuliom at linux dot vnet dot ibm dot com>
- To: Peter Bergner <bergner at vnet dot ibm dot com>
- Cc: carlos at redhat dot com, adhemerval dot zanella at linaro dot org, triegel at redhat dot com, libc-alpha at sourceware dot org, munroesj at linux dot vnet dot ibm dot com
- Date: Thu, 03 Sep 2015 15:58:55 -0300
- Subject: Re: [PATCHv3] PowerPC: Fix a race condition when eliding a lock
- Authentication-results: sourceware.org; auth=none
- References: <55D742D3 dot 9050600 at redhat dot com> <1440439895-11812-1-git-send-email-tuliom at linux dot vnet dot ibm dot com> <1441136302 dot 5089 dot 182 dot camel at otta>
Peter Bergner <bergner@vnet.ibm.com> writes:
> On Mon, 2015-08-24 at 15:11 -0300, Tulio Magno Quites Machado Filho wrote:
>> +/* Returns 0 if the lock defined by is_lock_free was elided.
>> +   ADAPT_COUNT is a per-lock state variable.  */
>> +# define ELIDE_LOCK(adapt_count, is_lock_free) \
>> +  ({ \
>> +    int ret = 0; \
>> +    if (adapt_count > 0) \
>> +      (adapt_count)--; \
>> +    else \
>> +      for (int i = __elision_aconf.try_tbegin; i > 0; i--) \
>> +	{ \
>> +	  if (__builtin_tbegin (0)) \
>> +	    { \
>> +	      if (is_lock_free) \
>> +		{ \
>> +		  ret = 1; \
>> +		  break; \
>> +		} \
>> +	      __builtin_tabort (_ABORT_LOCK_BUSY); \
>> +	    } \
>> +	  else \
>> +	    if (!__get_new_count(&adapt_count)) \
>> +	      break; \
>> +	} \
>> +    ret; \
>> +  })
>
> Interesting. I was going to suggest following your __builtin_tabort()
> call here with a call to __builtin_unreachable() so we can give the
> compiler a little more freedom to optimize things on that path.
> Then I remembered we had some conditional tabort insns we might
> be able to use here, since the tabort above is conditional on
> is_lock_free == 0.
>
> The following code seems to generate much better code, but at the
> cost of not passing _ABORT_LOCK_BUSY as a failure cause.
>
> /* Returns 0 if the lock defined by is_lock_free was elided.
>    ADAPT_COUNT is a per-lock state variable.  */
> # define ELIDE_LOCK(adapt_count, is_lock_free) \
>   ({ \
>     int ret = 0; \
>     if (adapt_count > 0) \
>       (adapt_count)--; \
>     else \
>       for (int i = __elision_aconf.try_tbegin; i > 0; i--) \
> 	{ \
> 	  if (__builtin_tbegin (0)) \
> 	    { \
> 	      __builtin_tabortwci (0x4, is_lock_free, 0); \
> 	      ret = 1; \
> 	      break; \
> 	    } \
> 	  else \
> 	    if (!__get_new_count(&adapt_count)) \
> 	      break; \
> 	} \
>     ret; \
>   })
>
>
> However, if I look at the code that consumes the failure code, I see
> that we're only looking for the persistent bit being set and not any
> specific failure cause we passed in:
>
> static inline bool
> __get_new_count (uint8_t *adapt_count)
> {
>   /* A persistent failure indicates that a retry will probably
>      result in another failure.  Use normal locking now and
>      for the next couple of calls.  */
>   if (_TEXASRU_FAILURE_PERSISTENT (__builtin_get_texasru ()))
>     {
>       if (__elision_aconf.skip_lock_internal_abort > 0)
> 	*adapt_count = __elision_aconf.skip_lock_internal_abort;
>       return false;
>     }
>
>   /* Same logic as above, but for a number of temporary failures in
>      a row.  */
>   else if (__elision_aconf.skip_lock_out_of_tbegin_retries > 0
> 	   && __elision_aconf.try_tbegin > 0)
>     *adapt_count = __elision_aconf.skip_lock_out_of_tbegin_retries;
>   return true;
> }
That seems like an interesting suggestion, but it's out of scope for this patch, IMO.
We're just fixing an issue here. ;-)
--
Tulio Magno