This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug libc/13071] New: MALLOC_ARENA_MAX environment does not work properly on some race conditions in malloc().


http://sourceware.org/bugzilla/show_bug.cgi?id=13071

           Summary: MALLOC_ARENA_MAX environment does not work properly on
                    some race conditions in malloc().
           Product: glibc
           Version: unspecified
            Status: NEW
          Severity: minor
          Priority: P2
         Component: libc
        AssignedTo: drepper.fsp@gmail.com
        ReportedBy: yanagimoto@np.css.fujitsu.com


Created attachment 5889
  --> http://sourceware.org/bugzilla/attachment.cgi?id=5889
malloc_strictly_observed_arena_max.patch

The MALLOC_ARENA_MAX environment variable does not work properly under
some race conditions in malloc(): the number of heaps can exceed the
configured maximum, and the process then consumes an inappropriate
amount of virtual memory. I believe this is a bug.

malloc() checks whether the narenas variable (the number of arenas) has
reached narenas_limit (derived from MALLOC_ARENA_MAX) at the start of
reused_arena(). However, narenas is incremented only _after_ a new arena
is created in _int_new_arena(). So if many threads in a multi-threaded
process rush into malloc() simultaneously, several of them can pass the
narenas vs. narenas_limit check before any increment happens, and each
then calls _int_new_arena(), exceeding the limit.

I made a test program, arena_max_test.c, which actually hits this
problem: the number of arenas greatly exceeds MALLOC_ARENA_MAX.


  $ MALLOC_ARENA_MAX=10 ./arena_max_test
  PID = 28733
  Arena 0:
  system bytes     =     135168
  in use bytes     =      37008
  Arena 1:
  system bytes     =     135168
  in use bytes     =       2512
  ...(snip)...
  Arena 80:
  system bytes     =     135168
  in use bytes     =       2368
  Arena 81:
  system bytes     =     135168
  in use bytes     =       2368
  Total (incl. mmap):
  system bytes     =   11083776
  in use bytes     =     235440
  max mmap regions =          0
  max mmap bytes   =          0

Eighty-two arenas were created even though MALLOC_ARENA_MAX was ten.
The process consumed too much virtual memory in this case.

  $ ps -eo "comm pid vsz rss" | grep "arena_max_test"
  arena_max_test  28733 6625776 20324
                        ~~~~~~~(6.3GB)

This problem can provoke the following situations:
 - If memory overcommit is disabled (vm.overcommit_memory=2), the
   excessive virtual memory consumption harms the entire system.
 - Virtual address space shortage on 32-bit architectures, and so on.

So I made a fix: a separate counter, new_arena_worker, distinct from
narenas, determines whether to call _int_new_arena(). A new lock,
worker_lock, protects this counter. The lock has no effect at all when
MALLOC_ARENA_MAX is not set; even when it is set, the lock is taken only
on the path that creates a new arena, so the overhead should be small.

I'd like to hear your opinion about this problem and this patch.

diff -Nur a/malloc/arena.c b/malloc/arena.c
--- a/malloc/arena.c    2011-06-30 18:22:36.000000000 +0900
+++ b/malloc/arena.c    2011-08-10 17:53:13.000000000 +0900
@@ -79,6 +79,7 @@
 static tsd_key_t arena_key;
 static mutex_t list_lock;
 #ifdef PER_THREAD
+static mutex_t worker_lock;
 static size_t narenas;
 static mstate free_list;
 #endif
@@ -354,6 +355,9 @@
     if(ar_ptr == &main_arena) break;
   }
   mutex_init(&list_lock);
+#ifdef PER_THREAD
+  mutex_init(&worker_lock);
+#endif
   atfork_recursive_cntr = 0;
 }

@@ -523,6 +527,9 @@
 #endif

   mutex_init(&list_lock);
+#ifdef PER_THREAD
+  mutex_init(&worker_lock);
+#endif 
   tsd_key_create(&arena_key, NULL);
   tsd_setspecific(arena_key, (Void_t *)&main_arena);
   thread_atfork(ptmalloc_lock_all, ptmalloc_unlock_all, ptmalloc_unlock_all2);
@@ -982,13 +989,9 @@
   return result;
 }

-
-static mstate
-reused_arena (void)
+static int
+get_narenas_limit (void)
 {
-  if (narenas <= mp_.arena_test)
-    return NULL;
-
   static int narenas_limit;
   if (narenas_limit == 0)
     {
@@ -1006,10 +1009,15 @@
         narenas_limit = NARENAS_FROM_NCORES (2);
     }
     }
+  return narenas_limit;
+}

-  if (narenas < narenas_limit)
-    return NULL;
+#define is_arena_max(x) \
+  ((x) > mp_.arena_test && (x) >= get_narenas_limit())

+static mstate
+reused_arena (void)
+{
   mstate result;
   static mstate next_to_use;
   if (next_to_use == NULL)
@@ -1048,10 +1056,37 @@
   mstate a;

 #ifdef PER_THREAD
-  if ((a = get_free_list ()) == NULL
-      && (a = reused_arena ()) == NULL)
-    /* Nothing immediately available, so generate a new arena.  */
-    a = _int_new_arena(size);
+repeat:
+  if ((a = get_free_list()) == NULL) {
+    if (is_arena_max(narenas))
+      a = reused_arena();
+    else {
+      /* Nothing immediately available, so try to generate a new arena. */
+
+      if (mp_.arena_max == 0)
+        a = _int_new_arena(size);
+      else {
+        /* If MALLOC_ARENA_MAX is set, the arena_max limitation is 
+           observed strictly by using the lock. */
+        static int new_arena_worker = 1;
+
+        (void)mutex_lock(&worker_lock);
+        if (is_arena_max(new_arena_worker)) {
+          (void)mutex_unlock(&worker_lock);
+          goto repeat;
+        }
+        new_arena_worker++;
+        (void)mutex_unlock(&worker_lock);
+
+        /* Generate a new arena. */
+        if ((a = _int_new_arena(size)) == NULL ) {
+          (void)mutex_lock(&worker_lock);
+          new_arena_worker--;
+          (void)mutex_unlock(&worker_lock);
+        }
+      }
+    }
+  }
 #else
   if(!a_tsd)
     a = a_tsd = &main_arena;

Regards,
Naoki Yanagimoto


