This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug nptl/14485] File corruption race condition in robust mutex unlocking


https://sourceware.org/bugzilla/show_bug.cgi?id=14485

--- Comment #12 from Carlos O'Donell <carlos at redhat dot com> ---
(In reply to mail from comment #11)
> Could somebody summarise for me, as somebody not familiar with the glibc
> internals, what the status of this bug is, and in which cases I am safe to
> use a robust mutex in a shared-memory setup? Thanks!

The problem is reusing the memory before dying threads have had a chance to
finish their cleanup of the robust mutex.

You must not unmap the memory until the dying threads have had a chance to do
their cleanup.
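To make the hazard concrete, here is a hypothetical sketch of the racy
pattern; the function names and setup are illustrative and not taken from
this bug's report:

/* ANTI-PATTERN sketch (hypothetical; do not copy).  A thread exits
   while owning a process-shared robust mutex, and another thread
   unmaps the region before the kernel has walked the dead thread's
   robust list.  If the memory is reused, the kernel's exit-time
   store of the owner-died bit can land in unrelated data.  */
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>

static void *die_holding_lock(void *arg)
{
    pthread_mutex_t *m = arg;          /* assumed robust and
                                          process-shared */
    pthread_mutex_lock(m);             /* becomes the owner */
    pthread_exit(NULL);                /* dies as owner; kernel
                                          cleanup is still pending */
}

void racy_teardown(pthread_mutex_t *m, size_t len)
{
    pthread_t t;
    pthread_create(&t, NULL, die_holding_lock, m);
    /* BUG: no join and no use count -- this munmap races with the
       kernel's exit-time write into the dead thread's mutex.  */
    munmap(m, len);
}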

Reiterating what Rich says in comment #8:
https://sourceware.org/bugzilla/show_bug.cgi?id=14485#c8

(a) Use a distinct mapping for each user/thread that uses the shared-memory
robust mutex and also wants to unmap that memory. That way, once everyone who
uses the robust mutex has removed their own mapping, the dying thread's
mapping remains while that thread is being shut down, and it is removed last,
after the thread's own cleanup is done. Only after that point will anything
reuse that memory. (See the first sketch below.)

(b) Use some kind of process-local synchronization (say, a semaphore) to wait
for all users of the object to be done before you unmap the memory. The dying
thread counts as a user, so you would not yet be able to unmap the memory,
and the thread would get a chance to run its own robust-mutex cleanup. You
might also use pthread_tryjoin_np to check whether the kernel has cleaned up
the dying thread yet, since success would indicate its cleanup was done and
you could recover its use count. (See the second sketch below.)
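A minimal sketch of option (a), assuming a POSIX shared-memory object named
"/robust-demo" and a hypothetical struct shared_state holding the robust
mutex (both names are illustrative):

/* Option (a): every thread maps the shared object itself and unmaps
   only its own mapping.  If the thread dies while owning the mutex,
   it never reaches munmap, so its mapping survives until the kernel
   has finished the robust-list cleanup.  */
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHM_NAME "/robust-demo"        /* hypothetical name */

struct shared_state {                  /* hypothetical layout */
    pthread_mutex_t lock;              /* initialized elsewhere with
                                          PTHREAD_MUTEX_ROBUST and
                                          PTHREAD_PROCESS_SHARED */
    int data;
};

static void *worker(void *unused)
{
    (void)unused;
    int fd = shm_open(SHM_NAME, O_RDWR, 0);
    if (fd < 0)
        return NULL;

    /* This thread's own private mapping of the shared object.  */
    struct shared_state *s = mmap(NULL, sizeof *s,
                                  PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    close(fd);
    if (s == MAP_FAILED)
        return NULL;

    int r = pthread_mutex_lock(&s->lock);
    if (r == EOWNERDEAD)               /* previous owner died */
        pthread_mutex_consistent(&s->lock);
    s->data++;
    pthread_mutex_unlock(&s->lock);

    /* Unmapping only our own mapping cannot pull the mutex out from
       under a dying thread that still holds its own mapping.  */
    munmap(s, sizeof *s);
    return NULL;
}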
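And a minimal sketch of option (b), using a process-local use count; the
counter and helper names are illustrative, and pthread_tryjoin_np is a GNU
extension:

/* Option (b): a process-local use count guards the unmap.  A dying
   thread never calls use_end(), so unmap_when_idle() blocks until
   pthread_tryjoin_np() confirms the kernel has finished that
   thread's cleanup and its count has been recovered.  */
#define _GNU_SOURCE                    /* for pthread_tryjoin_np */
#include <pthread.h>
#include <stddef.h>
#include <sys/mman.h>

static pthread_mutex_t users_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  last_user  = PTHREAD_COND_INITIALIZER;
static int users;                      /* live users of the mapping */

static void use_begin(void)
{
    pthread_mutex_lock(&users_lock);
    users++;
    pthread_mutex_unlock(&users_lock);
}

static void use_end(void)
{
    pthread_mutex_lock(&users_lock);
    if (--users == 0)
        pthread_cond_broadcast(&last_user);
    pthread_mutex_unlock(&users_lock);
}

/* Recover the count of a thread that may have died holding the
   robust mutex: pthread_tryjoin_np() succeeds only once the kernel
   has reaped the thread, i.e. after its robust-list cleanup.  */
static void reap_dead_user(pthread_t tid)
{
    if (pthread_tryjoin_np(tid, NULL) == 0)
        use_end();
}

/* Unmap only once every user, including any dying thread whose
   count was recovered above, is done with the region.  */
static void unmap_when_idle(void *region, size_t len)
{
    pthread_mutex_lock(&users_lock);
    while (users > 0)
        pthread_cond_wait(&last_user, &users_lock);
    pthread_mutex_unlock(&users_lock);
    munmap(region, len);
}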

