


Re: [PATCH v3] Make fprintf() function to multithread-safe


On 7/24/2012 3:36 AM, Peng Haitao wrote:
> 
> On 07/24/2012 08:57 AM, Carlos O'Donell wrote:
>> I agree with Roland here.
>>
>> We need to see some real investigation and performance analysis.
>>
>> Making fprintf multithread-safe is important, but we should not do
>> so using a heavy hammer approach involving global locks.
>>
>> Do you have any workloads that you can use for performance testing?
>>
> 
> I executed fprintf() 10000000 times. The timings are as follows:
> Before the patch, I ran the test program 3 times:
> 
> # gcc -o fprintf fprintf_test.c
> # perf stat -e instructions -- ./fprintf > /dev/null
> 
[snip]
> 
> Is the test method OK?

It is a good start.

(a) Threading?

You need two tests.

You already have the single-thread test case.
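
(For reference, I am assuming fprintf_test.c is roughly the following
shape; the format string and iteration count below are placeholders on
my part, so please post the real test alongside the numbers if it
differs materially:

#include <stdio.h>

int
main (void)
{
  int i;

  /* stdout is redirected to /dev/null during measurement, so the cost
     measured is the fprintf call itself rather than the terminal.  */
  for (i = 0; i < 10000000; i++)
    fprintf (stdout, "%d: hello\n", i);
  return 0;
}
)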

You are missing the multi-threaded test case to show the impact of the 
global lock contention. You should spawn more threads than you have
CPUs to ensure contention happens sooner, but not so many threads that
context switching overhead dominates the measurement.

They should remain distinct tests.
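
A sketch of what the missing multi-threaded driver could look like is
below; the thread count, iteration count, and output format are just
placeholders, and it needs -pthread to build:

#include <stdio.h>
#include <pthread.h>

/* More threads than CPUs so contention happens sooner.  */
#define NTHREADS 16
/* Per-thread iteration count; scale it up as discussed below.  */
#define ITERATIONS 1000000

static void *
worker (void *arg)
{
  long id = (long) arg;
  int i;

  /* All threads write to the same FILE *, so every fprintf call has
     to take the same lock(s), including any proposed global lock.  */
  for (i = 0; i < ITERATIONS; i++)
    fprintf (stdout, "%ld: %d\n", id, i);
  return NULL;
}

int
main (void)
{
  pthread_t threads[NTHREADS];
  long t;

  for (t = 0; t < NTHREADS; t++)
    pthread_create (&threads[t], NULL, worker, (void *) t);
  for (t = 0; t < NTHREADS; t++)
    pthread_join (threads[t], NULL);
  return 0;
}

Build and measure it the same way as the single-threaded test, with
stdout redirected to /dev/null.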

(b) Statistics.

Please run the single-threaded test case hundreds of times.

Three runs is not sufficient.

Hundreds of runs will be sufficient to give a tight confidence interval
regardless of the questions we ask later.
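
If it helps, perf can do the repetition and the basic statistics for
you, e.g. something like (the repeat count here is only an example):

# perf stat -r 100 -e instructions -- ./fprintf > /dev/null

With -r it reports the spread across runs as well as the mean.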

Please run the multi-threaded test case millions of times.

Why? We are trying to measure the impact of lock contention, and if
the locks are never contended then the test won't measure what we're
interested in. The number of iterations you need to run is inversely
proportional to the rate of contention. If a lock is contended only
once in a thousand acquisitions, then you need at least a thousand
iterations to see even one contended acquisition and measure its
impact.
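
If you want an empirical contention rate to plug into that
calculation, one rough approach (my sketch, not anything in the patch)
is to instrument the multi-threaded driver with ftrylockfile, which
fails exactly when another thread already holds the stream lock. It
only counts contention on the existing per-stream lock, not on any new
global lock, but it gives an order-of-magnitude rate:

#include <stdio.h>
#include <pthread.h>

#define NTHREADS 16
#define ITERATIONS 1000000

static long attempts, contended;

static void *
worker (void *arg)
{
  long id = (long) arg;
  int i;

  for (i = 0; i < ITERATIONS; i++)
    {
      __sync_fetch_and_add (&attempts, 1);
      /* ftrylockfile fails only when another thread holds the stream
         lock, i.e. this acquisition would have blocked.  */
      if (ftrylockfile (stdout) != 0)
        {
          __sync_fetch_and_add (&contended, 1);
          flockfile (stdout);
        }
      fprintf (stdout, "%ld: %d\n", id, i);
      funlockfile (stdout);
    }
  return NULL;
}

int
main (void)
{
  pthread_t threads[NTHREADS];
  long t;

  for (t = 0; t < NTHREADS; t++)
    pthread_create (&threads[t], NULL, worker, (void *) t);
  for (t = 0; t < NTHREADS; t++)
    pthread_join (threads[t], NULL);

  fprintf (stderr, "contended %ld of %ld acquisitions (%.2f%%)\n",
           contended, attempts, 100.0 * contended / attempts);
  return 0;
}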

(c) Document it on the wiki please.

All performance related patches need their acceptance criteria
documented on the wiki. It doesn't have to be perfect, but we want
to capture the history of performance so we can evaluate newer
patches and methods against the good work you did today.

Please start documenting here:
http://sourceware.org/glibc/wiki/benchmarking/results_2_17

Does all of this make sense?

Cheers,
Carlos.
-- 
Carlos O'Donell
Mentor Graphics / CodeSourcery
carlos_odonell@mentor.com
carlos@codesourcery.com
+1 (613) 963 1026

