This is the mail archive of the pthreads-win32@sourceware.org mailing list for the pthreads-win32 project.



Re: starvation in pthread_once?


Hi Gottlob,

Vladimir Kliatchko reimplemented pthread_once to use his implementation of MCS (Mellor-Crummey/Scott) locks.

From ptw32_MCS_lock.c:-

* About MCS locks:
*
* MCS locks are queue-based locks, where the queue nodes are local to the
* thread. The 'lock' is nothing more than a global pointer that points to
* the last node in the queue, or is NULL if the queue is empty.
*
* Originally designed for use as spin locks requiring no kernel resources
* for synchronisation or blocking, the implementation below has adapted
* the MCS spin lock for use as a general mutex that will suspend threads
* when there is lock contention.
*
* Because the queue nodes are thread-local, most of the memory read/write
* operations required to add or remove nodes from the queue do not trigger
* cache-coherence updates.
*
* Like 'named' mutexes, MCS locks consume system resources transiently -
* they are able to acquire and free resources automatically - but MCS
* locks do not require any unique 'name' to identify the lock to all
* threads using it.
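
[For readers of the archive: a minimal spin-only sketch of the MCS technique
described above, using the Win32 Interlocked primitives. The names
(mcs_node_t, mcs_lock_t, mcs_acquire, mcs_release) are invented for this
illustration; the real ptw32_MCS_lock.c additionally suspends waiters on an
event rather than spinning, as the comment above explains.]

#include <windows.h>

typedef struct mcs_node
{
  struct mcs_node * volatile next;  /* successor in the queue, if any */
  volatile LONG locked;             /* 1 while this node must wait    */
} mcs_node_t;

typedef mcs_node_t * volatile mcs_lock_t;  /* points at the queue tail */

void mcs_acquire (mcs_lock_t * lock, mcs_node_t * self)
{
  mcs_node_t * pred;

  self->next = NULL;
  self->locked = 1;

  /* Atomically swap ourselves in as the new tail of the queue. */
  pred = (mcs_node_t *)
    InterlockedExchangePointer ((PVOID volatile *) lock, self);

  if (pred != NULL)
    {
      /* Queue was non-empty: link in behind our predecessor and spin on
       * our own thread-local node until the lock is handed to us, so the
       * spinning generates no cache-coherence traffic on shared lines. */
      pred->next = self;
      while (self->locked)
        YieldProcessor ();
    }
}

void mcs_release (mcs_lock_t * lock, mcs_node_t * self)
{
  if (self->next == NULL)
    {
      /* No visible successor: if we are still the tail, empty the queue. */
      if (InterlockedCompareExchangePointer ((PVOID volatile *) lock,
                                             NULL, self) == self)
        return;

      /* A successor has swapped itself in; wait for it to link up. */
      while (self->next == NULL)
        YieldProcessor ();
    }
  self->next->locked = 0;  /* hand the lock to the successor */
}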




Gottlob Frege wrote:
Blast from the past - whatever happened to changing call_once() to not
create the named mutex when it wasn't needed?


On Tue, Mar 22, 2005 at 1:26 PM, Gottlob Frege <gottlobfrege@gmail.com> wrote:
If it comes down to it, I might vote for daisy-chaining over
busy-looping (assuming the busy-looping is endless).  Remember, this
all started because the original implementation was polling/sleeping
on 'initted' - and if the busy-looping thread is high-priority, then
we are locked forever...
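
[For readers of the archive: a hedged reconstruction of the polling pattern
being criticised here, not the actual historical code. On a single-CPU
machine, Sleep(0) yields only to ready threads of equal priority, so a
high-priority waiter spinning like this can starve a lower-priority
initialising thread indefinitely.]

#include <windows.h>

/* Hypothetical polling wait on an 'initted' flag, for illustration only. */
static void wait_for_init (volatile LONG * initted)
{
  while (!*initted)
    Sleep (0);  /* never yields to lower-priority threads, hence the
                   "locked forever" scenario described above */
}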


On Tue, 22 Mar 2005 15:14:07 +1100, Ross Johnson
<rpj@callisto.canberra.edu.au> wrote:
On Mon, 2005-03-21 at 11:07 -0500, Gottlob Frege wrote:

So, it doesn't seem to be getting any easier! *Almost* to the point
where a big named mutex becomes tempting - there is a lot to be said
for simplicity. However, my/the goal is still to at least minimize
the overhead of the uncontended, simple init case...
I'm less and less tempted to use a named mutex. Perhaps there's a
standard technique, but AFAICS it's impossible to guarantee that the
name is unique across the system (and all Windows variants).

And I agree, minimum overhead for the uncontended case is the top
priority (after correct behaviour). I'm not concerned at all about speed
in the cancellation case.

And the event is still an auto-reset, although I no longer think it
really matters - I really haven't had the tenacity to think this stuff
through. If it doesn't matter, manual-reset would be better, I think
- I don't like having one thread relying on another thread to wake it
up, for cases where the thread is killed, or where there are strange
thread priorities, etc.
It all looks to me like it will work. I don't recall, in the version
that's in pthreads-win32 now, why I included eventUsers (++/--) in what
you have as the __lock() sections. Maybe to save additional Atomic calls
(bus locks). But now I realise [in that version - not yours] that waking
threads can block unnecessarily when leaving the wait section.

It probably doesn't matter if cancel_event is auto or manual. I think
there will be at most one thread waiting on it. And, for 'event', like
you I'm uncomfortable with daisy-chaining SetEvent() calls.

The only problem with the alternative of using a manual-reset event is
that some thread/s may busy-loop for a bit until an explicit reset
occurs. It seems untidy, but it's probably more robust than daisy-
chained SetEvents given the issues you've identified above.

So I'm tempted to leave both events as manual-reset events. I'm also
guessing that this busy-looping will be extremely rare - perhaps only
when a new thread sneaks in to become the initter and then suspends just
inside, while the first waiter is waking and heading back to the start
of the loop.

I'll run your design and let you know the results.

Thanks.
Ross
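
[For readers of the archive, tying the thread back to the fix described at
the top: with an MCS lock there is no named mutex and no polling. A hedged
sketch of pthread_once built on the hypothetical mcs_* primitives from the
earlier example; the real ptw32 code also handles cancellation and the
memory-ordering details omitted here.]

#include <windows.h>

typedef struct
{
  volatile LONG done;  /* non-zero once init_routine has completed */
  mcs_lock_t lock;     /* serialises competing initialisers        */
} once_t;

int once (once_t * o, void (*init_routine) (void))
{
  /* Fast path: a single unsynchronised read, no kernel object. A real
   * implementation needs a memory barrier here on weakly ordered CPUs. */
  if (!o->done)
    {
      mcs_node_t node;  /* queue node lives on this thread's stack */

      mcs_acquire (&o->lock, &node);
      if (!o->done)     /* re-check now that we hold the lock */
        {
          (*init_routine) ();
          o->done = 1;
        }
      mcs_release (&o->lock, &node);
    }
  return 0;
}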



