This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH 3/5] Add single-threaded path to _int_free
- From: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
- To: DJ Delorie <dj@redhat.com>
- Cc: libc-alpha@sourceware.org, nd <nd@arm.com>
- Date: Thu, 19 Oct 2017 18:49:28 +0000
- Subject: Re: [PATCH 3/5] Add single-threaded path to _int_free
- References: <DB6PR0801MB20534AC3908A760CBE1EA173834B0@DB6PR0801MB2053.eurprd08.prod.outlook.com> (message from Wilco Dijkstra on Thu, 12 Oct 2017 09:35:34 +0000), <xnefq86rf0.fsf@greed.delorie.com>
DJ Delorie wrote:
> > + if (SINGLE_THREAD_P)
> > {
>
> If you set have_lock to zero here, you can omit the last two chunks of
> this patch.
Here is the updated version with that change and the original check:
This patch adds single-threaded fast paths to _int_free: the fastbin path skips the atomic compare-and-exchange, and the explicit arena locking is bypassed for larger frees.
Passes the glibc tests. OK for commit?
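
For anyone reading along, here is a minimal standalone sketch of the fastbin fast-path idea. This is not glibc code: struct chunk, fastbin_top, fastbin_push and single_thread_p are made-up names, and C11 atomics stand in for glibc's internal catomic_compare_and_exchange_val_rel and SINGLE_THREAD_P.

#include <stdatomic.h>
#include <stdlib.h>

/* Simplified stand-ins for the glibc internals.  */
struct chunk { struct chunk *fd; };

static _Atomic(struct chunk *) fastbin_top;   /* plays the role of *fb  */
static int single_thread_p = 1;               /* models SINGLE_THREAD_P */

static void
fastbin_push (struct chunk *p)
{
  struct chunk *old = atomic_load_explicit (&fastbin_top,
                                            memory_order_relaxed);
  if (single_thread_p)
    {
      /* No other thread can touch the bin, so no CAS loop is needed;
         a relaxed store is enough.  Still detect an immediate double
         free, as the patch does.  */
      if (old == p)
        abort ();   /* double free or corruption (fasttop) */
      p->fd = old;
      atomic_store_explicit (&fastbin_top, p, memory_order_relaxed);
    }
  else
    {
      /* Retry the CAS until no other thread has raced with us; OLD is
         reloaded by each failed compare-exchange.  */
      do
        {
          if (old == p)
            abort ();
          p->fd = old;
        }
      while (!atomic_compare_exchange_weak_explicit
               (&fastbin_top, &old, p,
                memory_order_release, memory_order_relaxed));
    }
}

int
main (void)
{
  struct chunk a, b;
  fastbin_push (&a);
  fastbin_push (&b);
  /* fastbin_push (&b) would now abort: double free of the top chunk.  */
  return 0;
}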
ChangeLog:
2017-10-19 Wilco Dijkstra <wdijkstr@arm.com>
* malloc/malloc.c (_int_free): Add SINGLE_THREAD_P paths.
diff --git a/malloc/malloc.c b/malloc/malloc.c
index e220fba83b0f9dc515aef562bdfca6a3ad13d3ea..ca5cfff3a1b1882ae608219fdec973b7f13cbb21 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -4195,24 +4195,34 @@ _int_free (mstate av, mchunkptr p, int have_lock)
/* Atomically link P to its fastbin: P->FD = *FB; *FB = P; */
mchunkptr old = *fb, old2;
- unsigned int old_idx = ~0u;
- do
+
+ if (SINGLE_THREAD_P)
{
- /* Check that the top of the bin is not the record we are going to add
- (i.e., double free). */
+ /* Check that the top of the bin is not the record we are going to
+ add (i.e., double free). */
if (__builtin_expect (old == p, 0))
malloc_printerr ("double free or corruption (fasttop)");
- /* Check that size of fastbin chunk at the top is the same as
- size of the chunk that we are adding. We can dereference OLD
- only if we have the lock, otherwise it might have already been
- deallocated. See use of OLD_IDX below for the actual check. */
- if (have_lock && old != NULL)
- old_idx = fastbin_index(chunksize(old));
- p->fd = old2 = old;
+ p->fd = old;
+ *fb = p;
}
- while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2)) != old2);
-
- if (have_lock && old != NULL && __builtin_expect (old_idx != idx, 0))
+ else
+ do
+ {
+ /* Check that the top of the bin is not the record we are going to
+ add (i.e., double free). */
+ if (__builtin_expect (old == p, 0))
+ malloc_printerr ("double free or corruption (fasttop)");
+ p->fd = old2 = old;
+ }
+ while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
+ != old2);
+
+ /* Check that size of fastbin chunk at the top is the same as
+ size of the chunk that we are adding. We can dereference OLD
+ only if we have the lock, otherwise it might have already been
+ allocated again. */
+ if (have_lock && old != NULL
+ && __builtin_expect (fastbin_index (chunksize (old)) != idx, 0))
malloc_printerr ("invalid fastbin entry (free)");
}
@@ -4221,6 +4231,11 @@ _int_free (mstate av, mchunkptr p, int have_lock)
*/
else if (!chunk_is_mmapped(p)) {
+
+ /* If we're single-threaded, don't lock the arena. */
+ if (SINGLE_THREAD_P)
+ have_lock = true;
+
if (!have_lock)
__libc_lock_lock (av->mutex);
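
Note the trick in this last hunk: have_lock also guards the matching unlock at the end of _int_free, so forcing it to true in the single-threaded case skips both the lock and the unlock. Condensed, this is the shape of the control flow (a sketch, not the actual function body):

  if (SINGLE_THREAD_P)
    have_lock = true;                /* claim the lock is already "held"... */

  if (!have_lock)
    __libc_lock_lock (av->mutex);    /* ...so this lock is skipped...       */

  /* ... coalesce P and place it into the unsorted bin ... */

  if (!have_lock)
    __libc_lock_unlock (av->mutex);  /* ...and so is the final unlock.      */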