This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH 4/4] S390: Optimize lock-elision by decrementing adapt_count at unlock.


On 01/17/2017 07:52 PM, Torvald Riegel wrote:
On Tue, 2017-01-17 at 16:28 +0100, Stefan Liebler wrote:
On 01/11/2017 11:53 AM, Torvald Riegel wrote:
On Tue, 2016-12-06 at 14:51 +0100, Stefan Liebler wrote:
This patch decrements the adapt_count while unlocking the futex
instead of before acquiring the futex, as is done on Power, too.
Furthermore a transaction is only started if the futex is currently free.
This check is done after starting the transaction, too.
If the futex is not free and the transaction nesting depth is one,
we can simply end the started transaction instead of aborting it.
The implementation of this check was faulty as it always ended the
started transaction.  By using the fallback path, the outermost
transaction was aborted.  Now the outermost transaction is aborted
directly.

This patch also adds some commentary and aligns the code in
elision-trylock.c with the code in elision-lock.c as far as possible.

I don't think this is quite ready yet.  See below for details.

I'm not too concerned about this fact, given that it's just in
s390-specific code.  But generally, I'd prefer if arch-specific code
aims for the same quality and level of consensus about it as what is our
aim for generic code.

ChangeLog:

	* sysdeps/unix/sysv/linux/s390/lowlevellock.h
	(__lll_unlock_elision, lll_unlock_elision): Add adapt_count argument.
	* sysdeps/unix/sysv/linux/s390/elision-lock.c
	(__lll_lock_elision): Decrement adapt_count while unlocking
	instead of before locking.
	* sysdeps/unix/sysv/linux/s390/elision-trylock.c
	(__lll_trylock_elision): Likewise.
	* sysdeps/unix/sysv/linux/s390/elision-unlock.c
	(__lll_unlock_elision): Likewise.
---
 sysdeps/unix/sysv/linux/s390/elision-lock.c    | 37 ++++++++-------
 sysdeps/unix/sysv/linux/s390/elision-trylock.c | 62 ++++++++++++++------------
 sysdeps/unix/sysv/linux/s390/elision-unlock.c  | 29 ++++++++++--
 sysdeps/unix/sysv/linux/s390/lowlevellock.h    |  4 +-
 4 files changed, 78 insertions(+), 54 deletions(-)

diff --git a/sysdeps/unix/sysv/linux/s390/elision-lock.c b/sysdeps/unix/sysv/linux/s390/elision-lock.c
index 3dd7fbc..4a7d546 100644
--- a/sysdeps/unix/sysv/linux/s390/elision-lock.c
+++ b/sysdeps/unix/sysv/linux/s390/elision-lock.c
@@ -50,31 +50,30 @@ __lll_lock_elision (int *futex, short *adapt_count, EXTRAARG int private)
      critical section uses lock elision) and outside of transactions.  Thus,
      we need to use atomic accesses to avoid data races.  However, the
      value of adapt_count is just a hint, so relaxed MO accesses are
-     sufficient.  */
-  if (atomic_load_relaxed (adapt_count) > 0)
-    {
-      /* Lost updates are possible, but harmless.  Due to races this might lead
-	 to *adapt_count becoming less than zero.  */
-      atomic_store_relaxed (adapt_count,
-			    atomic_load_relaxed (adapt_count) - 1);
-      goto use_lock;
-    }
-
-  if (aconf.try_tbegin > 0)
+     sufficient.
+     Do not begin a transaction if another CPU has locked the
+     futex with normal locking.  If adapt_count is zero, it remains zero and
+     the next pthread_mutex_lock call will try to start a transaction again.  */

This seems to make an assumption about performance that should be
explained in the comment.  IIRC, x86 LE does not make this assumption,
so it's not generally true.  I suppose s390 aborts are really expensive,
and you don't expect that a lock is in the acquired state often enough
so that aborts are overall more costly than the overhead of the
additional load and branch?

Yes, aborting a transaction is expensive.
But you are right, there is an additional load and branch, and this
thread will wait in LLL_LOCK anyway for another thread to release the futex.
See the example below.

Note that I'm not actually arguing against having the check -- I just
want a clear explanation in the code including the assumptions about
performance that motivate this particular choice.

If we don't add this information, things will get messy.  And a future
maintainer will not be aware of these assumptions.

+	 atomic accesses.  However, the value of adapt_count is just a hint, so
+	 relaxed MO accesses are sufficient.
+	 If adapt_count were decremented while locking, multiple
+	 CPUs trying to lock a locked mutex would decrement adapt_count to
+	 zero, and another CPU would try to start a transaction, which would
+	 be immediately aborted as the mutex is locked.

I don't think this is necessarily the case.  It is true that if more
than one thread decrements, only one would immediately try to use
elision (because only one decrements from 1 (ignoring lost updates)).

However, if you decrement in the critical section, and lock acquisitions
wait until the lock is free *before* loading adapt_count and choosing
whether to use elision or not, then it shouldn't matter whether you
decrement closer to the lock acquisition or closer to the release.
Waiting for a free lock is done by futex-syscall within LLL_LOCK (see
__lll_lock_wait/__lll_lock_wait_private).
On wakeup the lock is immediately acquired if it is free.
Afterwards there is no loading of adapt_count and no decision whether to
use elision or not.
Following your suggestion would mean that we need a further
implementation like __lll_lock_wait/__lll_lock_wait_private in order to
wait for a free lock and then load adapt_count and choose whether to elide or not!

Well, what would be required is that after we block using futexes, we
reassess the situation (including potentially using elision), instead of
just proceeding to try to acquire the lock without elision.

If the lock is released and we have been woken up,
do we know whether other threads are also waiting on the futex?
The strategy of __lll_lock_wait is to keep futex == 2 and wake up possible waiters with __lll_unlock.


Please have a look at the following example assuming:
adapt_count = 1;
There are two scenarios below: Imagine that only "Thread 3a" or "Thread
3b" is used.

Decrement adapt_count while locking (without this patch):
-Thread 1 __lll_lock_elision:
decrements adapt_count to 0 and acquires the lock via LLL_LOCK.
-Thread 2 __lll_lock_elision:
starts a transaction and ends / aborts it immediately as lock is
acquired. adapt_count is set to 3. LLL_LOCK waits until lock is released
by Thread 1.
-Thread 1 __lll_unlock_elision:
releases lock.
-Thread 2 __lll_lock_elision:
wakes up and acquires the lock via the waiting LLL_LOCK.
-Thread 3a __lll_lock_elision:
decrements adapt_count to 2 and waits via LLL_LOCK until lock is
released by Thread 2.
-Thread 2 __lll_unlock_elision:
releases lock.
-Thread 3b __lll_lock_elision:
decrements adapt_count to 2 and acquires lock via LLL_LOCK.

Decrement adapt_count while unlocking (with this patch):
-Thread 1 __lll_lock_elision:
acquires the lock via LLL_LOCK. adapt_count remains 1.
-Thread 2 __lll_lock_elision:
LLL_LOCK is used as futex is acquired by Thread 1 or adapt_count > 0. It
waits until lock is released by Thread 1.
-Thread 1 __lll_unlock_elision:
decrements adapt_count to 0 and releases the lock.
-Thread 2 __lll_lock_elision:
wakes up and acquires lock via waiting LLL_LOCK.
-Thread 3a __lll_lock_elision:
LLL_LOCK is used as futex is acquired by Thread 2.
-Thread 2 __lll_unlock_elision:
releases lock. adapt_count remains 0.
-Thread 3b __lll_lock_elision:
starts a transaction.

I agree that if we do NOT wait for the lock to become free before
checking whether we can use elision and do NOT try to use elision after
we blocked using futexes, then decrementing closer to the release of a
mutex can decrease the window in which another thread can observe
adapt_count==0 but lock!=0.

This may have required more changes, but it would have been cleaner
overall, and I guess we would have ended up with less s390-specific
code.

However, it's too late for this now, given that we're past the freeze.
We should get back to this when master opens up for normal development.

If the futex is not tested before starting a transaction,
the additional load and branch are not needed; Thread 3a will start and
abort a transaction, set adapt_count to 3, and wait via LLL_LOCK.
In the case of Thread 3b, a transaction will be started.

The attached diff removes the futex==0 test.
Later I will make one patch with changelog for this and the other two
patches.

See above, I didn't request to remove it, but just to clearly document
why you included it.  If you think removing it is just as fine, that's
OK with me too.

Removing it is fine.

I think this needs a more thorough analysis (including better
documentation) and/or a microbenchmark.

Regarding the patch:

+	     Since we are in a non-nested transaction there is no need to abort,
+	     which is expensive.  Simply end the started transaction.  */

Is that actually true?  You don't seem to check whether you are indeed
not in a nested transaction.  The difference is that you do not need to
acquire the lock because this is trylock, so you can run a little
further as a transaction.  You will get aborted as soon the lock you
failed to lock is released though.
So, it's not quite clear to me what the expected win is; are you aiming
at programs that don't actually need to acquire the lock and whose
transactions finish quickly enough to not get aborted by a concurrent
release of the lock?

Every time we enter __lll_trylock_elision, we check whether we are in a transaction on this CPU and abort it if needed.  This is needed to detect multiple elided trylock calls by one thread.  Afterwards we start a transaction (if adapt_count permits it) and then check the futex.  If it is free, we return within the transaction.  If another thread acquired the lock within a transaction, we can't detect that here, but a conflict will abort the transactions.

If the futex is already acquired, the call shall return immediately with an error, so the transaction that was just started is ended.  You are right: if the other thread releases the lock, the transaction will be aborted.  But as ending a transaction is faster than aborting it, the transaction is ended.  The Power implementation also ends it.  On Intel it is aborted, but according to the comment there, an abort is used instead of an end for visibility reasons while profiling.


The rest of the diff looked OK.

If it's okay, I will make one patch with all the reviewed diffs and post it with a ChangeLog in this mail thread.
Then I would commit it before the release so that we have a consistent state?

