This is the mail archive of the glibc-cvs@sourceware.org mailing list for the glibc project.


GNU C Library master sources branch master updated. glibc-2.24-503-gdd037fb


This is an automated email from the git hooks/post-receive script. It was
generated because a ref change was pushed to the repository containing
the project "GNU C Library master sources".

The branch, master has been updated
       via  dd037fb3df286b7c2d0b0c6f8d02a2dd8a8e8a08 (commit)
       via  53c5c3d5ac238901c13f28a73ba05b0678094e80 (commit)
       via  8bfc4a2ab4bebdf86c151665aae8a266e2f18fb4 (commit)
       via  c813dae5d8e469262f96b1cda0191ea076f10809 (commit)
      from  8d71242eb7a85860bc4f7cef5463ad61e2ea19b2 (commit)

Those revisions listed above that are new to this repository have
not appeared on any other notification email; so we list those
revisions in full, below.

- Log -----------------------------------------------------------------
http://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=dd037fb3df286b7c2d0b0c6f8d02a2dd8a8e8a08

commit dd037fb3df286b7c2d0b0c6f8d02a2dd8a8e8a08
Author: Stefan Liebler <stli@linux.vnet.ibm.com>
Date:   Tue Dec 20 15:12:48 2016 +0100

    S390: Optimize lock-elision by decrementing adapt_count at unlock.
    
    This patch decrements the adapt_count while unlocking the futex
    instead of before acquiring it, as is also done on Power.
    Furthermore, a transaction is only started if the futex is currently
    free.  This check is also done after starting the transaction.
    If the futex is not free and the transaction nesting depth is one,
    we can simply end the started transaction instead of aborting it.
    The implementation of this check was faulty, as it always ended the
    started transaction; by using the fallback path, the outermost
    transaction was then aborted.  Now the outermost transaction is
    aborted directly.
    
    This patch also adds some commentary and aligns the code in
    elision-trylock.c with the code in elision-lock.c where possible.
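
    A minimal, self-contained C11 sketch (not glibc's internal code, which
    uses its own atomic_load_relaxed / atomic_store_relaxed wrappers and
    lll_unlock) of the unlock-side handling described above: the
    adapt_count hint is decremented before the lock word is released, so
    no woken waiter can have destroyed the mutex by the time adapt_count
    is touched.

        #include <stdatomic.h>

        static void
        sketch_unlock (atomic_int *lock_word, atomic_short *adapt_count)
        {
          /* Relaxed MO suffices: adapt_count is only a hint and lost
             updates are harmless.  */
          short c = atomic_load_explicit (adapt_count, memory_order_relaxed);
          if (c > 0)
            atomic_store_explicit (adapt_count, c - 1, memory_order_relaxed);

          /* Release the lock word last; the real code uses lll_unlock and
             may have to wake futex waiters.  */
          atomic_store_explicit (lock_word, 0, memory_order_release);
        }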
    
    ChangeLog:
    
    	* sysdeps/unix/sysv/linux/s390/lowlevellock.h
    	(__lll_unlock_elision, lll_unlock_elision): Add adapt_count argument.
    	* sysdeps/unix/sysv/linux/s390/elision-lock.c:
    	(__lll_lock_elision): Decrement adapt_count while unlocking
    	instead of before locking.
    	* sysdeps/unix/sysv/linux/s390/elision-trylock.c
    	(__lll_trylock_elision): Likewise.
    	* sysdeps/unix/sysv/linux/s390/elision-unlock.c:
    	(__lll_unlock_elision): Likewise.

diff --git a/ChangeLog b/ChangeLog
index a3742b5..ee841a0 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,17 @@
 2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
 
+	* sysdeps/unix/sysv/linux/s390/lowlevellock.h
+	(__lll_unlock_elision, lll_unlock_elision): Add adapt_count argument.
+	* sysdeps/unix/sysv/linux/s390/elision-lock.c:
+	(__lll_lock_elision): Decrement adapt_count while unlocking
+	instead of before locking.
+	* sysdeps/unix/sysv/linux/s390/elision-trylock.c
+	(__lll_trylock_elision): Likewise.
+	* sysdeps/unix/sysv/linux/s390/elision-unlock.c:
+	(__lll_unlock_elision): Likewise.
+
+2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
+
 	* sysdeps/unix/sysv/linux/s390/htm.h(__libc_tbegin_retry): New macro.
 	* sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision):
 	Use __libc_tbegin_retry macro.
diff --git a/sysdeps/unix/sysv/linux/s390/elision-lock.c b/sysdeps/unix/sysv/linux/s390/elision-lock.c
index 3dd7fbc..4a7d546 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-lock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-lock.c
@@ -50,31 +50,30 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
      critical section uses lock elision) and outside of transactions.  Thus,
      we need to use atomic accesses to avoid data races.  However, the
      value of adapt_count is just a hint, so relaxed MO accesses are
-     sufficient.  */
-  if (atomic_load_relaxed (adapt_count) > 0)
-    {
-      /* Lost updates are possible, but harmless.  Due to races this might lead
-	 to *adapt_count becoming less than zero.  */
-      atomic_store_relaxed (adapt_count,
-			    atomic_load_relaxed (adapt_count) - 1);
-      goto use_lock;
-    }
-
-  if (aconf.try_tbegin > 0)
+     sufficient.
+     Do not begin a transaction if another cpu has locked the
+     futex with normal locking.  If adapt_count is zero, it remains and the
+     next pthread_mutex_lock call will try to start a transaction again.  */
+    if (atomic_load_relaxed (futex) == 0
+	&& atomic_load_relaxed (adapt_count) <= 0 && aconf.try_tbegin > 0)
     {
       int status = __libc_tbegin_retry ((void *) 0, aconf.try_tbegin - 1);
       if (__builtin_expect (status == _HTM_TBEGIN_STARTED,
 			    _HTM_TBEGIN_STARTED))
 	{
-	  if (__builtin_expect (*futex == 0, 1))
+	  /* Check the futex to make sure nobody has touched it in the
+	     mean time.  This forces the futex into the cache and makes
+	     sure the transaction aborts if some other cpu uses the
+	     lock (writes the futex).  */
+	  if (__builtin_expect (atomic_load_relaxed (futex) == 0, 1))
 	    /* Lock was free.  Return to user code in a transaction.  */
 	    return 0;
 
 	  /* Lock was busy.  Fall back to normal locking.  */
-	  if (__builtin_expect (__libc_tx_nesting_depth (), 1))
+	  if (__builtin_expect (__libc_tx_nesting_depth () <= 1, 1))
 	    {
 	      /* In a non-nested transaction there is no need to abort,
-		 which is expensive.  */
+		 which is expensive.  Simply end the started transaction.  */
 	      __libc_tend ();
 	      /* Don't try to use transactions for the next couple of times.
 		 See above for why relaxed MO is sufficient.  */
@@ -92,9 +91,9 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 		 is zero.
 		 The adapt_count of this inner mutex is not changed,
 		 because using the default lock with the inner mutex
-		 would abort the outer transaction.
-	      */
+		 would abort the outer transaction.  */
 	      __libc_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
+	      __builtin_unreachable ();
 	    }
 	}
       else if (status != _HTM_TBEGIN_TRANSIENT)
@@ -110,15 +109,15 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 	}
       else
 	{
-	  /* Same logic as above, but for for a number of temporary failures in
-	     a row.  */
+	  /* The transaction failed for some retries with
+	     _HTM_TBEGIN_TRANSIENT.  Use the normal locking now and for the
+	     next couple of calls.  */
 	  if (aconf.skip_lock_out_of_tbegin_retries > 0)
 	    atomic_store_relaxed (adapt_count,
 				  aconf.skip_lock_out_of_tbegin_retries);
 	}
     }
 
-  use_lock:
   /* Use normal locking as fallback path if transaction does not succeed.  */
   return LLL_LOCK ((*futex), private);
 }
diff --git a/sysdeps/unix/sysv/linux/s390/elision-trylock.c b/sysdeps/unix/sysv/linux/s390/elision-trylock.c
index e21fc26..dee66d4 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-trylock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-trylock.c
@@ -43,23 +43,36 @@ __lll_trylock_elision (int *futex, short *adapt_count)
 	 until their try_tbegin is zero.
       */
       __libc_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
+      __builtin_unreachable ();
     }
 
-  /* Only try a transaction if it's worth it.  See __lll_lock_elision for
-     why we need atomic accesses.  Relaxed MO is sufficient because this is
-     just a hint.  */
-  if (atomic_load_relaxed (adapt_count) <= 0)
+  /* adapt_count can be accessed concurrently; these accesses can be both
+     inside of transactions (if critical sections are nested and the outer
+     critical section uses lock elision) and outside of transactions.  Thus,
+     we need to use atomic accesses to avoid data races.  However, the
+     value of adapt_count is just a hint, so relaxed MO accesses are
+     sufficient.
+     Do not begin a transaction if another cpu has locked the
+     futex with normal locking.  If adapt_count is zero, it remains and the
+     next pthread_mutex_lock call will try to start a transaction again.  */
+    if (atomic_load_relaxed (futex) == 0
+	&& atomic_load_relaxed (adapt_count) <= 0 && aconf.try_tbegin > 0)
     {
-      int status;
-
-      if (__builtin_expect
-	  ((status = __libc_tbegin ((void *) 0)) == _HTM_TBEGIN_STARTED, 1))
+      int status = __libc_tbegin ((void *) 0);
+      if (__builtin_expect (status  == _HTM_TBEGIN_STARTED,
+			    _HTM_TBEGIN_STARTED))
 	{
-	  if (*futex == 0)
+	  /* Check the futex to make sure nobody has touched it in the
+	     mean time.  This forces the futex into the cache and makes
+	     sure the transaction aborts if some other cpu uses the
+	     lock (writes the futex).  */
+	  if (__builtin_expect (atomic_load_relaxed (futex) == 0, 1))
+	    /* Lock was free.  Return to user code in a transaction.  */
 	    return 0;
-	  /* Lock was busy.  Fall back to normal locking.  */
-	  /* Since we are in a non-nested transaction there is no need to abort,
-	     which is expensive.  */
+
+	  /* Lock was busy.  Fall back to normal locking.  Since we are in
+	     a non-nested transaction there is no need to abort, which is
+	     expensive.  Simply end the started transaction.  */
 	  __libc_tend ();
 	  /* Note: Changing the adapt_count here might abort a transaction on a
 	     different cpu, but that could happen anyway when the futex is
@@ -68,27 +81,18 @@ __lll_trylock_elision (int *futex, short *adapt_count)
 	  if (aconf.skip_lock_busy > 0)
 	    atomic_store_relaxed (adapt_count, aconf.skip_lock_busy);
 	}
-      else
+      else if (status != _HTM_TBEGIN_TRANSIENT)
 	{
-	  if (status != _HTM_TBEGIN_TRANSIENT)
-	    {
-	      /* A persistent abort (cc 1 or 3) indicates that a retry is
-		 probably futile.  Use the normal locking now and for the
-		 next couple of calls.
-		 Be careful to avoid writing to the lock.  */
-	      if (aconf.skip_trylock_internal_abort > 0)
-		*adapt_count = aconf.skip_trylock_internal_abort;
-	    }
+	  /* A persistent abort (cc 1 or 3) indicates that a retry is
+	     probably futile.  Use the normal locking now and for the
+	     next couple of calls.
+	     Be careful to avoid writing to the lock.  */
+	  if (aconf.skip_trylock_internal_abort > 0)
+	    *adapt_count = aconf.skip_trylock_internal_abort;
 	}
       /* Could do some retries here.  */
     }
-  else
-    {
-      /* Lost updates are possible, but harmless.  Due to races this might lead
-	 to *adapt_count becoming less than zero.  */
-      atomic_store_relaxed (adapt_count,
-			    atomic_load_relaxed (adapt_count) - 1);
-    }
 
+  /* Use normal locking as fallback path if transaction does not succeed.  */
   return lll_trylock (*futex);
 }
diff --git a/sysdeps/unix/sysv/linux/s390/elision-unlock.c b/sysdeps/unix/sysv/linux/s390/elision-unlock.c
index 0b1ade9..e68d970 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-unlock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-unlock.c
@@ -21,16 +21,37 @@
 #include <htm.h>
 
 int
-__lll_unlock_elision(int *futex, int private)
+__lll_unlock_elision(int *futex, short *adapt_count, int private)
 {
   /* If the lock is free, we elided the lock earlier.  This does not
      necessarily mean that we are in a transaction, because the user code may
-     have closed the transaction, but that is impossible to detect reliably.  */
-  if (*futex == 0)
+     have closed the transaction, but that is impossible to detect reliably.
+     Relaxed MO access to futex is sufficient as we only need a hint, if we
+     started a transaction or acquired the futex in e.g. elision-lock.c.  */
+  if (atomic_load_relaxed (futex) == 0)
     {
       __libc_tend ();
     }
   else
-    lll_unlock ((*futex), private);
+    {
+      /* Update the adapt_count while unlocking before completing the critical
+	 section.  adapt_count is accessed concurrently outside of a
+	 transaction or an aquired lock e.g. in elision-lock.c so we need to use
+	 atomic accesses.  However, the value of adapt_count is just a hint, so
+	 relaxed MO accesses are sufficient.
+	 If adapt_count would be decremented while locking, multiple
+	 CPUs trying to lock the locked mutex will decrement adapt_count to
+	 zero and another CPU will try to start a transaction, which will be
+	 immediately aborted as the mutex is locked.
+	 If adapt_count would be decremented while unlocking after completing
+	 the critical section, possible waiters will be waked up before
+	 decrementing the adapt_count.  Those waked up waiters could have
+	 destroyed and freed this mutex!  */
+      short adapt_count_val = atomic_load_relaxed (adapt_count);
+      if (adapt_count_val > 0)
+	atomic_store_relaxed (adapt_count, adapt_count_val - 1);
+
+      lll_unlock ((*futex), private);
+    }
   return 0;
 }
diff --git a/sysdeps/unix/sysv/linux/s390/lowlevellock.h b/sysdeps/unix/sysv/linux/s390/lowlevellock.h
index ada2e5b..09a933f 100644
--- a/sysdeps/unix/sysv/linux/s390/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/s390/lowlevellock.h
@@ -33,7 +33,7 @@ extern int __lll_timedlock_elision
 extern int __lll_lock_elision (int *futex, short *adapt_count, int private)
   attribute_hidden;
 
-extern int __lll_unlock_elision(int *futex, int private)
+extern int __lll_unlock_elision(int *futex, short *adapt_count, int private)
   attribute_hidden;
 
 extern int __lll_trylock_elision(int *futex, short *adapt_count)
@@ -42,7 +42,7 @@ extern int __lll_trylock_elision(int *futex, short *adapt_count)
 #  define lll_lock_elision(futex, adapt_count, private) \
   __lll_lock_elision (&(futex), &(adapt_count), private)
 #  define lll_unlock_elision(futex, adapt_count, private) \
-  __lll_unlock_elision (&(futex), private)
+  __lll_unlock_elision (&(futex), &(adapt_count), private)
 #  define lll_trylock_elision(futex, adapt_count) \
   __lll_trylock_elision(&(futex), &(adapt_count))
 # endif  /* ENABLE_LOCK_ELISION */

http://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=53c5c3d5ac238901c13f28a73ba05b0678094e80

commit 53c5c3d5ac238901c13f28a73ba05b0678094e80
Author: Stefan Liebler <stli@linux.vnet.ibm.com>
Date:   Tue Dec 20 15:12:48 2016 +0100

    S390: Use new __libc_tbegin_retry macro in elision-lock.c.
    
    This patch implements the __libc_tbegin_retry macro, which is
    equivalent to the gcc builtin __builtin_tbegin_retry except for the
    changes that were applied to __libc_tbegin in the previous patch.
    
    If tbegin aborts with _HTM_TBEGIN_TRANSIENT, the macro restores the
    fpc and fprs and automatically retries up to retry_cnt tbegins.
    Saving the state again is omitted, as it was already saved in the
    first round.  Before retrying a further transaction, the
    transaction-abort-assist instruction is used to assist the cpu.
    
    This macro is now used in function __lll_lock_elision.
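
    A hedged C-level sketch of the retry behaviour the macro provides: at
    most retry_cnt + 1 tbegin attempts, retrying only on transient aborts.
    tbegin_once and tx_assist are placeholders for the tbegin instruction
    and the transaction-abort-assist (ppa) instruction, not real glibc
    APIs; _HTM_TBEGIN_TRANSIENT is the condition-code value defined in
    htmintrin.h.

        /* Placeholders, declared only to keep the sketch self-contained.  */
        extern int tbegin_once (void);
        extern void tx_assist (int count);

        static int
        tbegin_retry_sketch (int retry_cnt)
        {
          /* First attempt; the real macro has already saved fpc and fprs
             at this point.  */
          int status = tbegin_once ();
          for (int tries = 0;
               status == _HTM_TBEGIN_TRANSIENT && tries < retry_cnt;
               tries++)
            {
              /* Hint the CPU before retrying (ppa in the real macro).  */
              tx_assist (tries + 1);
              /* Retry without saving the state again.  */
              status = tbegin_once ();
            }
          return status;   /* The caller handles all non-started codes.  */
        }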
    
    ChangeLog:
    
    	* sysdeps/unix/sysv/linux/s390/htm.h(__libc_tbegin_retry): New macro.
    	* sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision):
    	Use __libc_tbegin_retry macro.

diff --git a/ChangeLog b/ChangeLog
index e51403a..a3742b5 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,11 @@
 2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
 
+	* sysdeps/unix/sysv/linux/s390/htm.h(__libc_tbegin_retry): New macro.
+	* sysdeps/unix/sysv/linux/s390/elision-lock.c (__lll_lock_elision):
+	Use __libc_tbegin_retry macro.
+
+2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
+
 	* sysdeps/unix/sysv/linux/s390/Makefile (elision-CFLAGS):
 	Add -msoft-float.
 	* sysdeps/unix/sysv/linux/s390/htm.h: New File.
diff --git a/sysdeps/unix/sysv/linux/s390/elision-lock.c b/sysdeps/unix/sysv/linux/s390/elision-lock.c
index 48cc3db..3dd7fbc 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-lock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-lock.c
@@ -60,17 +60,16 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
       goto use_lock;
     }
 
-  int try_tbegin;
-  for (try_tbegin = aconf.try_tbegin;
-       try_tbegin > 0;
-       try_tbegin--)
+  if (aconf.try_tbegin > 0)
     {
-      int status;
-      if (__builtin_expect
-	  ((status = __libc_tbegin ((void *) 0)) == _HTM_TBEGIN_STARTED, 1))
+      int status = __libc_tbegin_retry ((void *) 0, aconf.try_tbegin - 1);
+      if (__builtin_expect (status == _HTM_TBEGIN_STARTED,
+			    _HTM_TBEGIN_STARTED))
 	{
-	  if (*futex == 0)
+	  if (__builtin_expect (*futex == 0, 1))
+	    /* Lock was free.  Return to user code in a transaction.  */
 	    return 0;
+
 	  /* Lock was busy.  Fall back to normal locking.  */
 	  if (__builtin_expect (__libc_tx_nesting_depth (), 1))
 	    {
@@ -81,7 +80,6 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 		 See above for why relaxed MO is sufficient.  */
 	      if (aconf.skip_lock_busy > 0)
 		atomic_store_relaxed (adapt_count, aconf.skip_lock_busy);
-	      goto use_lock;
 	    }
 	  else /* nesting depth is > 1 */
 	    {
@@ -99,28 +97,28 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 	      __libc_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
 	    }
 	}
+      else if (status != _HTM_TBEGIN_TRANSIENT)
+	{
+	  /* A persistent abort (cc 1 or 3) indicates that a retry is
+	     probably futile.  Use the normal locking now and for the
+	     next couple of calls.
+	     Be careful to avoid writing to the lock.  See above for why
+	     relaxed MO is sufficient.  */
+	  if (aconf.skip_lock_internal_abort > 0)
+	    atomic_store_relaxed (adapt_count,
+				  aconf.skip_lock_internal_abort);
+	}
       else
 	{
-	  if (status != _HTM_TBEGIN_TRANSIENT)
-	    {
-	      /* A persistent abort (cc 1 or 3) indicates that a retry is
-		 probably futile.  Use the normal locking now and for the
-		 next couple of calls.
-		 Be careful to avoid writing to the lock.  See above for why
-		 relaxed MO is sufficient.  */
-	      if (aconf.skip_lock_internal_abort > 0)
-		atomic_store_relaxed (adapt_count,
-				      aconf.skip_lock_internal_abort);
-	      goto use_lock;
-	    }
+	  /* Same logic as above, but for for a number of temporary failures in
+	     a row.  */
+	  if (aconf.skip_lock_out_of_tbegin_retries > 0)
+	    atomic_store_relaxed (adapt_count,
+				  aconf.skip_lock_out_of_tbegin_retries);
 	}
     }
 
-  /* Same logic as above, but for for a number of temporary failures in a
-     row.  See above for why relaxed MO is sufficient.  */
-  if (aconf.skip_lock_out_of_tbegin_retries > 0 && aconf.try_tbegin > 0)
-    atomic_store_relaxed (adapt_count, aconf.skip_lock_out_of_tbegin_retries);
-
   use_lock:
+  /* Use normal locking as fallback path if transaction does not succeed.  */
   return LLL_LOCK ((*futex), private);
 }
diff --git a/sysdeps/unix/sysv/linux/s390/htm.h b/sysdeps/unix/sysv/linux/s390/htm.h
index 6b4e8f4..aa9d01a 100644
--- a/sysdeps/unix/sysv/linux/s390/htm.h
+++ b/sysdeps/unix/sysv/linux/s390/htm.h
@@ -69,7 +69,36 @@
    started.  Thus the user of the tbegin macros in this header file has to
    compile the file / function with -msoft-float.  It prevents gcc from using
    fprs / vrs.  */
-#define __libc_tbegin(tdb)						\
+#define __libc_tbegin(tdb) __libc_tbegin_base(tdb,,,)
+
+#define __libc_tbegin_retry_output_regs , [R_TX_CNT] "+&d" (__tx_cnt)
+#define __libc_tbegin_retry_input_regs(retry_cnt) , [R_RETRY] "d" (retry_cnt)
+#define __libc_tbegin_retry_abort_path_insn				\
+  /* If tbegin returned _HTM_TBEGIN_TRANSIENT, retry immediately so	\
+     that max tbegin_cnt transactions are tried.  Otherwise return and	\
+     let the caller of this macro do the fallback path.  */		\
+  "   jnh 1f\n\t" /* cc 1/3: jump to fallback path.  */			\
+  /* tbegin returned _HTM_TBEGIN_TRANSIENT: retry with transaction.  */ \
+  "   crje %[R_TX_CNT], %[R_RETRY], 1f\n\t" /* Reached max retries?  */	\
+  "   ahi %[R_TX_CNT], 1\n\t"						\
+  "   ppa %[R_TX_CNT], 0, 1\n\t" /* Transaction-Abort Assist.  */	\
+  "   j 2b\n\t" /* Loop to tbegin.  */
+
+/* Same as __libc_tbegin except if tbegin aborts with _HTM_TBEGIN_TRANSIENT.
+   Then this macros restores the fpc, fprs and automatically retries up to
+   retry_cnt tbegins.  Further saving of the state is omitted as it is already
+   saved.  This macro calls tbegin at most as retry_cnt + 1 times.  */
+#define __libc_tbegin_retry(tdb, retry_cnt)				\
+  ({ int __ret;								\
+    int __tx_cnt = 0;							\
+    __ret = __libc_tbegin_base(tdb,					\
+			       __libc_tbegin_retry_abort_path_insn,	\
+			       __libc_tbegin_retry_output_regs,		\
+			       __libc_tbegin_retry_input_regs(retry_cnt)); \
+    __ret;								\
+  })
+
+#define __libc_tbegin_base(tdb, abort_path_insn, output_regs, input_regs) \
   ({ int __ret;								\
      int __fpc;								\
      char __fprs[TX_FPRS_BYTES];					\
@@ -95,7 +124,7 @@
 			      again and result in a core dump wich does	\
 			      now show at tbegin but the real executed	\
 			      instruction.  */				\
-			   "   tbegin 0, 0xFF0E\n\t"			\
+			   "2: tbegin 0, 0xFF0E\n\t"			\
 			   /* Branch away in abort case (this is the	\
 			      prefered sequence.  See PoP in chapter 5	\
 			      Transactional-Execution Facility		\
@@ -111,11 +140,14 @@
 			   "   srl %[R_RET], 28\n\t"			\
 			   "   sfpc %[R_FPC]\n\t"			\
 			   TX_RESTORE_FPRS				\
+			   abort_path_insn				\
 			   "1:\n\t"					\
 			   ".machine pop\n"				\
 			   : [R_RET] "=&d" (__ret),			\
 			     [R_FPC] "=&d" (__fpc)			\
+			     output_regs				\
 			   : [R_FPRS] "a" (__fprs)			\
+			     input_regs					\
 			   : "cc", "memory");				\
      __ret;								\
      })

http://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=8bfc4a2ab4bebdf86c151665aae8a266e2f18fb4

commit 8bfc4a2ab4bebdf86c151665aae8a266e2f18fb4
Author: Stefan Liebler <stli@linux.vnet.ibm.com>
Date:   Tue Dec 20 15:12:48 2016 +0100

    S390: Use own tbegin macro instead of __builtin_tbegin.
    
    This patch defines __libc_tbegin, __libc_tend, __libc_tabort and
    __libc_tx_nesting_depth in htm.h, which replace the direct usage of
    the equivalent gcc builtins.
    
    We have to use our own inline assembly instead of __builtin_tbegin,
    as tbegin has to filter program interruptions, which can't be done
    with the builtin.  Before this change, e.g. a segmentation fault
    within a transaction led to a coredump where the instruction pointer
    pointed behind the tbegin instruction instead of at the real failing
    one.  Now the transaction aborts and the code is re-executed by the
    fallback path without transactions.  The segmentation fault will then
    produce a coredump with the real failing instruction.
    
    The builtin does not save the fpc before starting the transaction.
    If e.g. the rounding mode is changed and the transaction aborts
    afterwards, the builtin will not restore the fpc.  This is now done
    by the __libc_tbegin macro.
    
    The call-saved fprs now have to be saved / restored in the
    __libc_tbegin macro.  Using the gcc builtin had forced the saving /
    restoring of fprs at the begin / end of e.g. the __lll_lock_elision
    function.  The new macro saves these fprs before the tbegin
    instruction and only restores them on a transaction abort.  Restoring
    is not needed after a successfully started transaction.
    
    The inline assembly used does not clobber the fprs / vrs!
    Clobbering the latter would force the compiler to save / restore the
    call-saved fprs, as those overlap with the vrs, but they only need to
    be restored if the transaction fails.  Thus the user of the tbegin
    macros has to compile the file / function with -msoft-float, which
    prevents gcc from using fprs / vrs.
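
    A usage sketch of the calling pattern these macros enable, assuming a
    file compiled with the elision-CFLAGS above (-mhtm -msoft-float);
    lock_normally stands in for the LLL_LOCK fallback and is a
    placeholder, not a real glibc function.

        #include <htm.h>

        extern int lock_normally (int *futex);   /* Placeholder fallback.  */

        static int
        elide_or_lock (int *futex)
        {
          if (__libc_tbegin ((void *) 0) == _HTM_TBEGIN_STARTED)
            {
              if (*futex == 0)
                /* Lock looks free: return to the caller inside the
                   transaction; the unlock path later calls __libc_tend.  */
                return 0;

              /* Lock is held.  Leave the transaction before falling back:
                 ending is cheap in the outermost transaction, a nested
                 transaction must abort the outer one.  */
              if (__libc_tx_nesting_depth () <= 1)
                __libc_tend ();
              else
                __libc_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
            }
          return lock_normally (futex);
        }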
    
    ChangeLog:
    
    	* sysdeps/unix/sysv/linux/s390/Makefile (elision-CFLAGS):
    	Add -msoft-float.
    	* sysdeps/unix/sysv/linux/s390/htm.h: New File.
    	* sysdeps/unix/sysv/linux/s390/elision-lock.c:
    	Use __libc_t* transaction macros instead of __builtin_t*.
    	* sysdeps/unix/sysv/linux/s390/elision-trylock.c: Likewise.
    	* sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.

diff --git a/ChangeLog b/ChangeLog
index cc21db7..e51403a 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,5 +1,15 @@
 2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
 
+	* sysdeps/unix/sysv/linux/s390/Makefile (elision-CFLAGS):
+	Add -msoft-float.
+	* sysdeps/unix/sysv/linux/s390/htm.h: New File.
+	* sysdeps/unix/sysv/linux/s390/elision-lock.c:
+	Use __libc_t* transaction macros instead of __builtin_t*.
+	* sysdeps/unix/sysv/linux/s390/elision-trylock.c: Likewise.
+	* sysdeps/unix/sysv/linux/s390/elision-unlock.c: Likewise.
+
+2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
+
 	* sysdeps/unix/sysv/linux/s390/elision-lock.c
 	(__lll_lock_elision): Use atomics to load / store adapt_count.
 	* sysdeps/unix/sysv/linux/s390/elision-trylock.c
diff --git a/sysdeps/unix/sysv/linux/s390/Makefile b/sysdeps/unix/sysv/linux/s390/Makefile
index f8ed013..3867c33 100644
--- a/sysdeps/unix/sysv/linux/s390/Makefile
+++ b/sysdeps/unix/sysv/linux/s390/Makefile
@@ -22,7 +22,7 @@ ifeq ($(enable-lock-elision),yes)
 libpthread-sysdep_routines += elision-lock elision-unlock elision-timed \
 			      elision-trylock
 
-elision-CFLAGS = -mhtm
+elision-CFLAGS = -mhtm -msoft-float
 CFLAGS-elision-lock.c = $(elision-CFLAGS)
 CFLAGS-elision-timed.c = $(elision-CFLAGS)
 CFLAGS-elision-trylock.c = $(elision-CFLAGS)
diff --git a/sysdeps/unix/sysv/linux/s390/elision-lock.c b/sysdeps/unix/sysv/linux/s390/elision-lock.c
index 1876d21..48cc3db 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-lock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-lock.c
@@ -19,7 +19,7 @@
 #include <pthread.h>
 #include <pthreadP.h>
 #include <lowlevellock.h>
-#include <htmintrin.h>
+#include <htm.h>
 #include <elision-conf.h>
 #include <stdint.h>
 
@@ -60,27 +60,23 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
       goto use_lock;
     }
 
-  __asm__ volatile (".machinemode \"zarch_nohighgprs\"\n\t"
-		    ".machine \"all\""
-		    : : : "memory");
-
   int try_tbegin;
   for (try_tbegin = aconf.try_tbegin;
        try_tbegin > 0;
        try_tbegin--)
     {
-      unsigned status;
+      int status;
       if (__builtin_expect
-	  ((status = __builtin_tbegin((void *)0)) == _HTM_TBEGIN_STARTED, 1))
+	  ((status = __libc_tbegin ((void *) 0)) == _HTM_TBEGIN_STARTED, 1))
 	{
 	  if (*futex == 0)
 	    return 0;
 	  /* Lock was busy.  Fall back to normal locking.  */
-	  if (__builtin_expect (__builtin_tx_nesting_depth (), 1))
+	  if (__builtin_expect (__libc_tx_nesting_depth (), 1))
 	    {
 	      /* In a non-nested transaction there is no need to abort,
 		 which is expensive.  */
-	      __builtin_tend ();
+	      __libc_tend ();
 	      /* Don't try to use transactions for the next couple of times.
 		 See above for why relaxed MO is sufficient.  */
 	      if (aconf.skip_lock_busy > 0)
@@ -100,7 +96,7 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 		 because using the default lock with the inner mutex
 		 would abort the outer transaction.
 	      */
-	      __builtin_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
+	      __libc_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
 	    }
 	}
       else
diff --git a/sysdeps/unix/sysv/linux/s390/elision-trylock.c b/sysdeps/unix/sysv/linux/s390/elision-trylock.c
index a3252b8..e21fc26 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-trylock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-trylock.c
@@ -19,7 +19,7 @@
 #include <pthread.h>
 #include <pthreadP.h>
 #include <lowlevellock.h>
-#include <htmintrin.h>
+#include <htm.h>
 #include <elision-conf.h>
 
 #define aconf __elision_aconf
@@ -30,15 +30,11 @@
 int
 __lll_trylock_elision (int *futex, short *adapt_count)
 {
-  __asm__ __volatile__ (".machinemode \"zarch_nohighgprs\"\n\t"
-			".machine \"all\""
-			: : : "memory");
-
   /* Implement POSIX semantics by forbiding nesting elided trylocks.
      Sorry.  After the abort the code is re-executed
      non transactional and if the lock was already locked
      return an error.  */
-  if (__builtin_tx_nesting_depth () > 0)
+  if (__libc_tx_nesting_depth () > 0)
     {
       /* Note that this abort may terminate an outermost transaction that
 	 was created outside glibc.
@@ -46,7 +42,7 @@ __lll_trylock_elision (int *futex, short *adapt_count)
 	 them to use the default lock instead of retrying transactions
 	 until their try_tbegin is zero.
       */
-      __builtin_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
+      __libc_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
     }
 
   /* Only try a transaction if it's worth it.  See __lll_lock_elision for
@@ -54,17 +50,17 @@ __lll_trylock_elision (int *futex, short *adapt_count)
      just a hint.  */
   if (atomic_load_relaxed (adapt_count) <= 0)
     {
-      unsigned status;
+      int status;
 
       if (__builtin_expect
-	  ((status = __builtin_tbegin ((void *)0)) == _HTM_TBEGIN_STARTED, 1))
+	  ((status = __libc_tbegin ((void *) 0)) == _HTM_TBEGIN_STARTED, 1))
 	{
 	  if (*futex == 0)
 	    return 0;
 	  /* Lock was busy.  Fall back to normal locking.  */
 	  /* Since we are in a non-nested transaction there is no need to abort,
 	     which is expensive.  */
-	  __builtin_tend ();
+	  __libc_tend ();
 	  /* Note: Changing the adapt_count here might abort a transaction on a
 	     different cpu, but that could happen anyway when the futex is
 	     acquired, so there's no need to check the nesting depth here.
diff --git a/sysdeps/unix/sysv/linux/s390/elision-unlock.c b/sysdeps/unix/sysv/linux/s390/elision-unlock.c
index 483abe1..0b1ade9 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-unlock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-unlock.c
@@ -18,6 +18,7 @@
 
 #include <pthreadP.h>
 #include <lowlevellock.h>
+#include <htm.h>
 
 int
 __lll_unlock_elision(int *futex, int private)
@@ -27,10 +28,7 @@ __lll_unlock_elision(int *futex, int private)
      have closed the transaction, but that is impossible to detect reliably.  */
   if (*futex == 0)
     {
-      __asm__ volatile (".machinemode \"zarch_nohighgprs\"\n\t"
-			".machine \"all\""
-			: : : "memory");
-      __builtin_tend();
+      __libc_tend ();
     }
   else
     lll_unlock ((*futex), private);
diff --git a/sysdeps/unix/sysv/linux/s390/htm.h b/sysdeps/unix/sysv/linux/s390/htm.h
new file mode 100644
index 0000000..6b4e8f4
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/s390/htm.h
@@ -0,0 +1,149 @@
+/* Shared HTM header.  Work around false transactional execution facility
+   intrinsics.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#ifndef _HTM_H
+#define _HTM_H 1
+
+#include <htmintrin.h>
+
+#ifdef __s390x__
+# define TX_FPRS_BYTES 64
+# define TX_SAVE_FPRS						\
+  "   std %%f8, 0(%[R_FPRS])\n\t"				\
+  "   std %%f9, 8(%[R_FPRS])\n\t"				\
+  "   std %%f10, 16(%[R_FPRS])\n\t"				\
+  "   std %%f11, 24(%[R_FPRS])\n\t"				\
+  "   std %%f12, 32(%[R_FPRS])\n\t"				\
+  "   std %%f13, 40(%[R_FPRS])\n\t"				\
+  "   std %%f14, 48(%[R_FPRS])\n\t"				\
+  "   std %%f15, 56(%[R_FPRS])\n\t"
+
+# define TX_RESTORE_FPRS					\
+  "   ld %%f8, 0(%[R_FPRS])\n\t"				\
+  "   ld %%f9, 8(%[R_FPRS])\n\t"				\
+  "   ld %%f10, 16(%[R_FPRS])\n\t"				\
+  "   ld %%f11, 24(%[R_FPRS])\n\t"				\
+  "   ld %%f12, 32(%[R_FPRS])\n\t"				\
+  "   ld %%f13, 40(%[R_FPRS])\n\t"				\
+  "   ld %%f14, 48(%[R_FPRS])\n\t"				\
+  "   ld %%f15, 56(%[R_FPRS])\n\t"
+
+#else
+
+# define TX_FPRS_BYTES 16
+# define TX_SAVE_FPRS						\
+  "   std %%f4, 0(%[R_FPRS])\n\t"				\
+  "   std %%f6, 8(%[R_FPRS])\n\t"
+
+# define TX_RESTORE_FPRS					\
+  "   ld %%f4, 0(%[R_FPRS])\n\t"				\
+  "   ld %%f6, 8(%[R_FPRS])\n\t"
+
+#endif /* ! __s390x__  */
+
+/* Use own inline assembly instead of __builtin_tbegin, as tbegin
+   has to filter program interruptions which can't be done with the builtin.
+   Now the fprs have to be saved / restored here, too.
+   The fpc is also not saved / restored with the builtin.
+   The used inline assembly does not clobber the volatile fprs / vrs!
+   Clobbering the latter ones would force the compiler to save / restore
+   the call saved fprs as those overlap with the vrs, but they only need to be
+   restored if the transaction fails but not if the transaction is successfully
+   started.  Thus the user of the tbegin macros in this header file has to
+   compile the file / function with -msoft-float.  It prevents gcc from using
+   fprs / vrs.  */
+#define __libc_tbegin(tdb)						\
+  ({ int __ret;								\
+     int __fpc;								\
+     char __fprs[TX_FPRS_BYTES];					\
+     __asm__ __volatile__ (".machine push\n\t"				\
+			   ".machinemode \"zarch_nohighgprs\"\n\t"	\
+			   ".machine \"all\"\n\t"			\
+			   /* Save state at the outermost transaction.	\
+			      As extracting nesting depth is expensive	\
+			      on at least zEC12, save fprs at inner	\
+			      transactions, too.			\
+			      The fpc and fprs are saved here as they	\
+			      are not saved by tbegin.  There exist no	\
+			      call-saved vrs, thus they are not saved	\
+			      here.  */					\
+			   "   efpc %[R_FPC]\n\t"			\
+			   TX_SAVE_FPRS					\
+			   /* Begin transaction: save all gprs, allow	\
+			      ar modification and fp operations.  Some	\
+			      program-interruptions (e.g. a null	\
+			      pointer access) are filtered and the	\
+			      trancsaction will abort.  In this case	\
+			      the normal lock path will execute it	\
+			      again and result in a core dump wich does	\
+			      now show at tbegin but the real executed	\
+			      instruction.  */				\
+			   "   tbegin 0, 0xFF0E\n\t"			\
+			   /* Branch away in abort case (this is the	\
+			      prefered sequence.  See PoP in chapter 5	\
+			      Transactional-Execution Facility		\
+			      Operation).  */				\
+			   "   jnz 0f\n\t"				\
+			   /* Transaction has successfully started.  */	\
+			   "   lhi %[R_RET], 0\n\t"			\
+			   "   j 1f\n\t"				\
+			   /* Transaction has aborted.  Now we are at	\
+			      the outermost transaction.  Restore fprs	\
+			      and fpc. */				\
+			   "0: ipm %[R_RET]\n\t"			\
+			   "   srl %[R_RET], 28\n\t"			\
+			   "   sfpc %[R_FPC]\n\t"			\
+			   TX_RESTORE_FPRS				\
+			   "1:\n\t"					\
+			   ".machine pop\n"				\
+			   : [R_RET] "=&d" (__ret),			\
+			     [R_FPC] "=&d" (__fpc)			\
+			   : [R_FPRS] "a" (__fprs)			\
+			   : "cc", "memory");				\
+     __ret;								\
+     })
+
+/* These builtins are correct.  Use them.  */
+#define __libc_tend()							\
+  ({ __asm__ __volatile__ (".machine push\n\t"				\
+			   ".machinemode \"zarch_nohighgprs\"\n\t"	\
+			   ".machine \"all\"\n\t");			\
+    int __ret = __builtin_tend ();					\
+    __asm__ __volatile__ (".machine pop");				\
+    __ret;								\
+  })
+
+#define __libc_tabort(abortcode)					\
+  __asm__ __volatile__ (".machine push\n\t"				\
+			".machinemode \"zarch_nohighgprs\"\n\t"		\
+			".machine \"all\"\n\t");			\
+  __builtin_tabort (abortcode);						\
+  __asm__ __volatile__ (".machine pop")
+
+#define __libc_tx_nesting_depth() \
+  ({ __asm__ __volatile__ (".machine push\n\t"				\
+			   ".machinemode \"zarch_nohighgprs\"\n\t"	\
+			   ".machine \"all\"\n\t");			\
+    int __ret = __builtin_tx_nesting_depth ();				\
+    __asm__ __volatile__ (".machine pop");				\
+    __ret;								\
+  })
+
+#endif

http://sourceware.org/git/gitweb.cgi?p=glibc.git;a=commitdiff;h=c813dae5d8e469262f96b1cda0191ea076f10809

commit c813dae5d8e469262f96b1cda0191ea076f10809
Author: Stefan Liebler <stli@linux.vnet.ibm.com>
Date:   Tue Dec 20 15:12:48 2016 +0100

    S390: Use C11-like atomics instead of plain memory accesses in lock elision code.
    
    This uses atomic operations to access lock elision metadata that is
    accessed concurrently (i.e., the adapt_count fields).  The data is
    smaller than a word but is accessed only with atomic loads and stores.
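
    A self-contained C11 illustration (using ISO atomics rather than
    glibc's atomic_load_relaxed / atomic_store_relaxed wrappers) of the
    access pattern this commit introduces for adapt_count: a relaxed load
    plus a relaxed store replaces the plain, racy (*adapt_count)--.  Lost
    updates between racing threads remain possible, but for a heuristic
    counter they are harmless.

        #include <stdatomic.h>

        static void
        adapt_count_dec (atomic_short *adapt_count)
        {
          short val = atomic_load_explicit (adapt_count, memory_order_relaxed);
          /* Not an atomic read-modify-write: concurrent decrements may be
             lost, which is acceptable for a hint.  */
          atomic_store_explicit (adapt_count, val - 1, memory_order_relaxed);
        }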
    
    See also x86 commit ca6e601a9d4a72b3699cca15bad12ac1716bf49a:
    "Use C11-like atomics instead of plain memory accesses in x86 lock elision."
    
    ChangeLog:
    
    	* sysdeps/unix/sysv/linux/s390/elision-lock.c
    	(__lll_lock_elision): Use atomics to load / store adapt_count.
    	* sysdeps/unix/sysv/linux/s390/elision-trylock.c
    	(__lll_trylock_elision): Likewise.

diff --git a/ChangeLog b/ChangeLog
index 8cdcae6..cc21db7 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,10 @@
+2016-12-20  Stefan Liebler  <stli@linux.vnet.ibm.com>
+
+	* sysdeps/unix/sysv/linux/s390/elision-lock.c
+	(__lll_lock_elision): Use atomics to load / store adapt_count.
+	* sysdeps/unix/sysv/linux/s390/elision-trylock.c
+	(__lll_trylock_elision): Likewise.
+
 2016-12-20  Florian Weimer  <fweimer@redhat.com>
 
 	Do not require memset elimination in explicit_bzero test.
diff --git a/sysdeps/unix/sysv/linux/s390/elision-lock.c b/sysdeps/unix/sysv/linux/s390/elision-lock.c
index ecb507e..1876d21 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-lock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-lock.c
@@ -45,11 +45,18 @@
 int
 __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 {
-  if (*adapt_count > 0)
+  /* adapt_count can be accessed concurrently; these accesses can be both
+     inside of transactions (if critical sections are nested and the outer
+     critical section uses lock elision) and outside of transactions.  Thus,
+     we need to use atomic accesses to avoid data races.  However, the
+     value of adapt_count is just a hint, so relaxed MO accesses are
+     sufficient.  */
+  if (atomic_load_relaxed (adapt_count) > 0)
     {
       /* Lost updates are possible, but harmless.  Due to races this might lead
 	 to *adapt_count becoming less than zero.  */
-      (*adapt_count)--;
+      atomic_store_relaxed (adapt_count,
+			    atomic_load_relaxed (adapt_count) - 1);
       goto use_lock;
     }
 
@@ -74,8 +81,10 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 	      /* In a non-nested transaction there is no need to abort,
 		 which is expensive.  */
 	      __builtin_tend ();
+	      /* Don't try to use transactions for the next couple of times.
+		 See above for why relaxed MO is sufficient.  */
 	      if (aconf.skip_lock_busy > 0)
-		*adapt_count = aconf.skip_lock_busy;
+		atomic_store_relaxed (adapt_count, aconf.skip_lock_busy);
 	      goto use_lock;
 	    }
 	  else /* nesting depth is > 1 */
@@ -101,18 +110,20 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
 	      /* A persistent abort (cc 1 or 3) indicates that a retry is
 		 probably futile.  Use the normal locking now and for the
 		 next couple of calls.
-		 Be careful to avoid writing to the lock.  */
+		 Be careful to avoid writing to the lock.  See above for why
+		 relaxed MO is sufficient.  */
 	      if (aconf.skip_lock_internal_abort > 0)
-		*adapt_count = aconf.skip_lock_internal_abort;
+		atomic_store_relaxed (adapt_count,
+				      aconf.skip_lock_internal_abort);
 	      goto use_lock;
 	    }
 	}
     }
 
   /* Same logic as above, but for for a number of temporary failures in a
-     row.  */
+     row.  See above for why relaxed MO is sufficient.  */
   if (aconf.skip_lock_out_of_tbegin_retries > 0 && aconf.try_tbegin > 0)
-    *adapt_count = aconf.skip_lock_out_of_tbegin_retries;
+    atomic_store_relaxed (adapt_count, aconf.skip_lock_out_of_tbegin_retries);
 
   use_lock:
   return LLL_LOCK ((*futex), private);
diff --git a/sysdeps/unix/sysv/linux/s390/elision-trylock.c b/sysdeps/unix/sysv/linux/s390/elision-trylock.c
index 3d5a994..a3252b8 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-trylock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-trylock.c
@@ -49,8 +49,10 @@ __lll_trylock_elision (int *futex, short *adapt_count)
       __builtin_tabort (_HTM_FIRST_USER_ABORT_CODE | 1);
     }
 
-  /* Only try a transaction if it's worth it.  */
-  if (*adapt_count <= 0)
+  /* Only try a transaction if it's worth it.  See __lll_lock_elision for
+     why we need atomic accesses.  Relaxed MO is sufficient because this is
+     just a hint.  */
+  if (atomic_load_relaxed (adapt_count) <= 0)
     {
       unsigned status;
 
@@ -65,9 +67,10 @@ __lll_trylock_elision (int *futex, short *adapt_count)
 	  __builtin_tend ();
 	  /* Note: Changing the adapt_count here might abort a transaction on a
 	     different cpu, but that could happen anyway when the futex is
-	     acquired, so there's no need to check the nesting depth here.  */
+	     acquired, so there's no need to check the nesting depth here.
+	     See above for why relaxed MO is sufficient.  */
 	  if (aconf.skip_lock_busy > 0)
-	    *adapt_count = aconf.skip_lock_busy;
+	    atomic_store_relaxed (adapt_count, aconf.skip_lock_busy);
 	}
       else
 	{
@@ -87,7 +90,8 @@ __lll_trylock_elision (int *futex, short *adapt_count)
     {
       /* Lost updates are possible, but harmless.  Due to races this might lead
 	 to *adapt_count becoming less than zero.  */
-      (*adapt_count)--;
+      atomic_store_relaxed (adapt_count,
+			    atomic_load_relaxed (adapt_count) - 1);
     }
 
   return lll_trylock (*futex);

-----------------------------------------------------------------------

Summary of changes:
 ChangeLog                                      |   35 +++++
 sysdeps/unix/sysv/linux/s390/Makefile          |    2 +-
 sysdeps/unix/sysv/linux/s390/elision-lock.c    |   94 +++++++------
 sysdeps/unix/sysv/linux/s390/elision-trylock.c |   76 +++++-----
 sysdeps/unix/sysv/linux/s390/elision-unlock.c  |   35 ++++-
 sysdeps/unix/sysv/linux/s390/htm.h             |  181 ++++++++++++++++++++++++
 sysdeps/unix/sysv/linux/s390/lowlevellock.h    |    4 +-
 7 files changed, 335 insertions(+), 92 deletions(-)
 create mode 100644 sysdeps/unix/sysv/linux/s390/htm.h


hooks/post-receive
-- 
GNU C Library master sources

