This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCHv3] PowerPC: Fix a race condition when eliding a lock


Peter Bergner <bergner@vnet.ibm.com> writes:

> On Mon, 2015-08-24 at 15:11 -0300, Tulio Magno Quites Machado Filho wrote:
>> +/* Returns 1 if the lock defined by is_lock_free was elided.
>> +   ADAPT_COUNT is a per-lock state variable.  */
>> +# define ELIDE_LOCK(adapt_count, is_lock_free)				\
>> +  ({									\
>> +    int ret = 0;							\
>> +    if (adapt_count > 0)						\
>> +      (adapt_count)--;							\
>> +    else								\
>> +      for (int i = __elision_aconf.try_tbegin; i > 0; i--)		\
>> +	{								\
>> +	  if (__builtin_tbegin (0))					\
>> +	    {								\
>> +	      if (is_lock_free)						\
>> +		{							\
>> +		  ret = 1;						\
>> +		  break;						\
>> +		}							\
>> +	      __builtin_tabort (_ABORT_LOCK_BUSY);			\
>> +	    }								\
>> +	  else								\
>> +	    if (!__get_new_count(&adapt_count))				\
>> +	      break;							\
>> +	}								\
>> +    ret;								\
>> +  })
>
> Interesting.  I was going to suggest following your __builtin_tabort()
> call here with a call to __builtin_unreachable() so we can give the
> compiler a little more freedom to optimize things on that path.
> Then I remembered we had some conditional tabort insns we might
> be able to use here, since the tabort above is conditional on
> is_lock_free == 0.
>
> The following code seems to generate much better code, but at the
> loss of not passing _ABORT_LOCK_BUSY as a failure cause.
>
> /* Returns 1 if the lock defined by is_lock_free was elided.
>    ADAPT_COUNT is a per-lock state variable.  */
> # define ELIDE_LOCK(adapt_count, is_lock_free)                         \
>   ({                                                                   \
>     int ret = 0;                                                       \
>     if (adapt_count > 0)                                               \
>       (adapt_count)--;                                                 \
>     else                                                               \
>       for (int i = __elision_aconf.try_tbegin; i > 0; i--)             \
>        {                                                               \
>          if (__builtin_tbegin (0))                                     \
>            {                                                           \
>              __builtin_tabortwci (0x4, is_lock_free, 0);              \
>              ret = 1;                                                  \
>              break;                                                    \
>            }                                                           \
>          else                                                          \
>            if (!__get_new_count(&adapt_count))                         \
>              break;                                                    \
>        }                                                               \
>     ret;                                                               \
>   })
>
>
> However, if I look at the code that consumes the failure code, I see
> that we're only looking for the persistent bit being set and not any
> specific failure cause we passed in:
>
> static inline bool
> __get_new_count (uint8_t *adapt_count)
> {
>   /* A persistent failure indicates that a retry will probably
>      result in another failure.  Use normal locking now and
>      for the next couple of calls.  */
>   if (_TEXASRU_FAILURE_PERSISTENT (__builtin_get_texasru ()))
>     {
>       if (__elision_aconf.skip_lock_internal_abort > 0)
>         *adapt_count = __elision_aconf.skip_lock_internal_abort;
>       return false;
>     }
>
>   /* Same logic as above, but for a number of temporary failures
>      in a row.  */
>   else if (__elision_aconf.skip_lock_out_of_tbegin_retries > 0
>           && __elision_aconf.try_tbegin > 0)
>     *adapt_count = __elision_aconf.skip_lock_out_of_tbegin_retries;
>   return true;
> }

That seems like an interesting suggestion, but out of scope for this patch, IMO.
We're just fixing an issue here.  ;-)

-- 
Tulio Magno

