This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

Re: [PATCH][BZ #12515] Improve precision of clock function


On Thu, May 23, 2013 at 12:20:50PM -0700, Roland McGrath wrote:
> Why do you think this is desirable?  clock is an ancient interface and its
> callers expect the tick-granularity behavior it's always had.  Callers who
> want more precision can use clock_gettime directly.

The spec says[1]:

"In order to measure the time spent in a program, clock() should be
called at the start of the program and its return value subtracted
from the value returned by subsequent calls. The value returned by
clock() is defined for compatibility across systems that have clocks
with different resolutions. The resolution on any particular system
need not be to microsecond accuracy."

So while microsecond accuracy is not mandatory, nothing there says that
providing microsecond accuracy is wrong.  It has already been pointed
out that there are users out there who wonder why clock has such poor
precision.  What is the use case in which a more precise clock()
return value would count as breakage?  If there is a good reason to
consider this a breakage, then I could version the symbol so that
older apps retain the low precision.

Siddhesh

[1] http://pubs.opengroup.org/onlinepubs/009696699/functions/clock.html
