This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Optimize strstr, strcasestr and memmem


On 22/05/2012, at 7:13 AM, Carlos O'Donell wrote:

> On Mon, May 21, 2012 at 2:18 PM, Maxim Kuvyrkov <maxim@codesourcery.com> wrote:
>>> This should not preclude other people from contributing machine
>>> specific benchmarks.
>> 
>> Certainly.  The design and thought process I outlined will address the most immediate need: creating at least /some/ performance baseline for GLIBC.
> 
> Sounds good.
> 
>>>> The benchmark design goals that I have so far:
> 
> Could you please start a wiki page for this work if you haven't
> already? That would help me point other people at the design and
> goals.

Done, http://sourceware.org/glibc/wiki/Testing/Benchmark is live.

> 
>>> Why "fixed time?" Why not "fixed number of iterations?" It is already
>>> difficult for maintainers to evaluate TIMEOUT_FACTOR or TIMEOUT in the
>>> tests, extending this to the benchmarks is like throwing salt on a
>>> wound. I would prefer to see a fixed number of test executions rather
>>> than a fixed amount of time.
>> 
>> Why would we want to evaluate TIMEOUT_FACTOR and TIMEOUT for benchmarks beyond setting them to a single value for all tests?  The mechanism for controlling benchmark runtime that I have in mind is to run the benchmark body in a "while (run_benchmark) {}" loop with an alarm set to BENCHMARK_TIME seconds (30-120 seconds).  When the alarm goes off, the signal handler sets "run_benchmark = false", which stops the benchmark but allows it to finish the current iteration.
>> 
>> For most benchmarks one can get reasonably precise results from a 30-120 second run.  Runs below 10 seconds will have too much startup/warmup error.  Runs above 120 seconds will just waste CPU time (assuming that the benchmark body executes within 1-5 seconds, so that we get well-averaged results).
>> 
>> I'm tired of hand-tweaking iteration numbers for all the different systems out there.
> 
> What do you do if you exceed the runtime but are in the middle of an iteration?

Crash and burn.  That should not normally happen, as "TIMEOUT to kill" will be set to 1.5-2 times the benchmark time (or, rather, the benchmark time will be set to a fraction of the TIMEOUT value expected for "normal" tests).  Provided that a single iteration of the benchmark loop is a small fraction of the overall time, benchmarks will end shortly after the benchmark alarm goes off and well before the timeout.

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics





