This is the mail archive of the libc-hacker@sourceware.cygnus.com mailing list for the glibc project.

Note that libc-hacker is a closed list. You may look at the archives of this list, but subscription and posting are not open.



Re: Spin locks


   From: Ulrich Drepper <drepper@redhat.com>
   Date: 04 May 2000 16:00:35 -0700

   Mark Kettenis <kettenis@wins.uva.nl> writes:

   > Here's a quick note to let people know that I'm working on
   > implementing the POSIX spin locks that are in the current IEEE
   > Std. 1003.1-200X draft.

   For Hurd this is, I assume.  I've committed the Linux implementation
   some time ago.

Oops, seems I was fooled by LinuxThreads having its own ChangeLog
entry :-(.  My intention was to provide a spin lock implementation for
all platforms.  But since you already implemented them for Linux, I'll
not try to push it :-)

Anyway, I've taken a look at the code in linuxthreads/spinlock.c, but
it looks to me as if your implementation is more like a fast and
simple mutex than a spin lock.  In fact, the only difference between a
spin lock and a PTHREAD_MUTEX_FAST_NP mutex is that you avoid the
check of the mutex type.

My understanding of what a spin lock does is:

1. Try to get the lock using some atomic operation.
2. If the lock couldn't be obtained, "spin": repeat step 1 up to, say,
   N times, i.e. keep actively polling for a while.
3. If the lock is still held by another thread, yield the processor.

The idea is that you use spin locks around operations that would only
take a few cycles.  That means that if you're running on a
multiprocessor machine and find that the lock is held by another
thread, chances are high that the lock will become free in the next few
cycles, and it is advantageous to actively wait to avoid step 3, which
typically involves one or more system calls.  Of course on a single
processor machine you would set N = 0, since spinning is probably
pointless there.

I don't know the LinuxThreads implementation too well, but I believe
step 3 is terribly expensive on Linux (involving several system
calls).  But there may be reasons why spinning is pointless on SMP Linux.

Anyway, you're welcome to use the Hurd's spin lock implementation if
you like :-).

   > Spin locks should be fast and simple.

   Well, make it an option.  Note that you are always exposing a part of
   the interface: the pthread_spinlock_t type.  If you change this you'll
   have problems anyhow.  If you keep the same structure you'll use the
   same algorithms as well (most probably).  There are some more
   advantages in using inlined versions (such as avoiding the lock prefix
   on x86 if you know the application runs on single-processor machines).

Hmm, I don't think I completely understand you here.

First, the specification makes it possible to implement spin locks
without exposing anything about the internals (in contrast to mutexes
and condition variables).  Simply make pthread_spinlock_t a pointer and
let pthread_spin_init allocate the data that's necessary.  That
wouldn't be a terribly smart thing to do, I think, which is probably
why you didn't consider it.

Second, I don't see why using inlines helps to avoid using a lock
prefix.  Not inlining would make it possible to compile an optimized
libc for uni-processors.  Inlining would force me to use a lock prefix
if I want to make sure the same binary will work on both single and
multiprocessor machines.  This issue isn't really relevant since I'm
using xchgl, which doesn't need a lock prefix since the locking is done
implicitly.  It's probably possible to use btsl like the Linux
kernel does, and drop the lock prefix if we somehow know we're on a
uniprocessor.  But I'm not sure if that would really improve
performance.

   Anyhow, there definitely should be an option to turn the inlining off.
   Or better, there should be a flag to turn it on.  The extra sanity
   checks are sometimes really useful.

I think a compile-time flag to choose an alternative spin lock
implementation that does additional sanity checks (and therefore
doesn't do any inlining) would be a good idea, but not a priority (the
LinuxThreads implementation doesn't do it either :-)).  I even think
this could be implemented on top of the same pthread_spinlock_t type in a
way that would more-or-less work even if not all code is compiled with
that particular flag.

Mark
