This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH][BZ #12515] Improve precision of clock function
- From: Siddhesh Poyarekar <siddhesh at redhat dot com>
- To: Roland McGrath <roland at hack dot frob dot com>
- Cc: libc-alpha at sourceware dot org
- Date: Fri, 24 May 2013 10:24:12 +0530
- Subject: Re: [PATCH][BZ #12515] Improve precision of clock function
- References: <20130521145611 dot GM8927 at spoyarek dot pnq dot redhat dot com> <20130523192050 dot EBE582C09E at topped-with-meat dot com>
On Thu, May 23, 2013 at 12:20:50PM -0700, Roland McGrath wrote:
> Why do you think this is desirable? clock is an ancient interface and its
> callers expect the tick-granularity behavior it's always had. Callers who
> want more precision can use clock_gettime directly.
The spec says[1]:
"In order to measure the time spent in a program, clock() should be
called at the start of the program and its return value subtracted
from the value returned by subsequent calls. The value returned by
clock() is defined for compatibility across systems that have clocks
with different resolutions. The resolution on any particular system
need not be to microsecond accuracy."
So while microsecond accuracy is not mandatory, that does not make
providing it wrong. It has already been pointed out that there are
users out there who wonder why clock has such terrible precision.
What is the use case in which someone would consider a more precise
clock() return value to be a breakage? If there is a good reason to
consider this breakage, then I could version the symbol so that older
apps retain the low precision.
Siddhesh
[1] http://pubs.opengroup.org/onlinepubs/009696699/functions/clock.html