[PATCH v2] malloc: Fix list_lock/arena lock deadlock [BZ #19182]
- From: Florian Weimer <fweimer@redhat.com>
- To: GNU C Library <libc-alpha@sourceware.org>
- Cc: Torvald Riegel <triegel@redhat.com>
- Date: Sun, 6 Dec 2015 13:14:09 +0100
- Subject: [PATCH v2] malloc: Fix list_lock/arena lock deadlock [BZ #19182]
- References: <56389C10.6060001@redhat.com>
On 11/03/2015 12:35 PM, Florian Weimer wrote:
> This is just a minimal change. The fork handler lock acquisition has to
> go away anyway if we make fork async-signal-safe (bug 4737).
The attached version is slightly more elaborate. It tries to preserve
the original lock order, but has a backup path in case the preferred
lock ordering is not possible. It covers reused_arena as well, which
has a similar ordering violation.
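For reviewers, this is the shape of the fallback in isolation. It is a
minimal standalone sketch against plain POSIX threads rather than
glibc's internal mutex_* wrappers, with hypothetical lock names:
lock_a stands in for an arena lock, lock_b for list_lock.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* an arena lock */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* list_lock */

/* Called with lock_a held; acquires lock_b without deadlocking
   against a thread that takes lock_b before lock_a.  */
static void
lock_b_while_holding_a (void)
{
  /* Preferred path: keep the existing acquisition order if lock_b is
     free.  pthread_mutex_trylock returns 0 if the lock was
     acquired.  */
  bool reacquire_a = pthread_mutex_trylock (&lock_b) != 0;
  if (reacquire_a)
    {
      /* Backup path: drop lock_a so we never block on lock_b while
	 holding lock_a.  */
      pthread_mutex_unlock (&lock_a);
      pthread_mutex_lock (&lock_b);
    }

  /* ... critical section protected by lock_b ...  */

  pthread_mutex_unlock (&lock_b);
  if (reacquire_a)
    /* Restore the caller's invariant that lock_a is held.  State
       protected by lock_a may have changed while it was released.  */
    pthread_mutex_lock (&lock_a);
}

The cost is the window in which lock_a is dropped: anything it
protects may change before it is reacquired, so the pattern is only
valid where the caller can tolerate that.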
Torvald, we need this patch for backporting, so a review of the
concurrency aspect would be extremely welcome. :)
Florian
2015-12-06 Florian Weimer <fweimer@redhat.com>
[BZ #19182]
* malloc/arena.c (list_lock): Document lock ordering requirements.
(_int_new_arena): Release and re-acquire arena lock if list_lock
cannot be acquired immediately.
(reused_arena): Likewise.
diff --git a/malloc/arena.c b/malloc/arena.c
index 3dab7bb..834907c 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -69,9 +69,11 @@ extern int sanity_check_heap_info_alignment[(sizeof (heap_info)
static __thread mstate thread_arena attribute_tls_model_ie;
/* Arena free list. list_lock protects the free_list variable below,
- and the next_free and attached_threads members of the mstate
- objects. No other (malloc) locks must be taken while list_lock is
- active, otherwise deadlocks may occur. */
+ the next_free and attached_threads members of struct malloc_state
+ objects, and concurrent writes to the next member of these objects.
+ (Read access to the next member is synchronized via a barrier.)
+ When list_lock is acquired, no arena lock must be held previously, but it
+ is permitted to acquire arena locks after list_lock. */
static mutex_t list_lock = _LIBC_LOCK_INITIALIZER;
static size_t narenas = 1;
@@ -790,7 +792,16 @@ _int_new_arena (size_t size)
mutex_init (&a->mutex);
(void) mutex_lock (&a->mutex);
- (void) mutex_lock (&list_lock);
+ /* Put the arena in the global list. If we cannot acquire
+ list_lock, we have to temporarily release the arena lock, to
+ avoid deadlocks. NB: mutex_trylock returns 0 if the lock was
+ acquired. */
+ bool reacquire_arena_lock = mutex_trylock (&list_lock);
+ if (reacquire_arena_lock)
+ {
+ mutex_unlock (&a->mutex);
+ (void) mutex_lock (&list_lock);
+ }
detach_arena (replaced_arena);
@@ -801,6 +812,9 @@ _int_new_arena (size_t size)
(void) mutex_unlock (&list_lock);
+ if (reacquire_arena_lock)
+ (void) mutex_lock (&a->mutex);
+
return a;
}
@@ -884,11 +898,22 @@ reused_arena (mstate avoid_arena)
out:
{
+ /* Update the arena thread attachment counters. If we cannot
+ acquire list_lock, we have to temporarily release the arena
+ lock, to avoid deadlocks. NB: mutex_trylock returns 0 if the
+ lock was acquired. */
mstate replaced_arena = thread_arena;
- (void) mutex_lock (&list_lock);
+ bool reacquire_arena_lock = mutex_trylock (&list_lock);
+ if (reacquire_arena_lock)
+ {
+ (void) mutex_unlock (&result->mutex);
+ (void) mutex_lock (&list_lock);
+ }
detach_arena (replaced_arena);
++result->attached_threads;
(void) mutex_unlock (&list_lock);
+ if (reacquire_arena_lock)
+ (void) mutex_lock (&result->mutex);
}
LIBC_PROBE (memory_arena_reuse, 2, result, avoid_arena);