This is the mail archive of the cygwin@cygwin.com mailing list for the Cygwin project.



Re: gettimeofday() does not return usec resolution


On Wed, Jan 23, 2002 at 12:09:53AM -0800, Tim Prince wrote:
>Ralf Habacker wrote:
>
>> Hi,
>>
>> for kde2 we are building a profiler lib for profiling complex c++
>> applications (currently found in the cvs area of kde-cygwin.sf.net,
>> http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/kde-cygwin/profiler/)
>> using the high resolution timer of native windows (about usec
>> resolution).
>>
>> This lib could be used for easy profiling of any c++ application,
>> and of libs like cygwin.dll and so on.
>>
>> While adding unix support (and cygwin) to this lib, I noticed that
>> the gettimeofday() function returns only a resolution of 10 ms (the
>> time-slice resolution), while most other unix OSes return a
>> resolution in the usec region.  I have appended a testcase for this.
>>
>> Has anyone addressed this problem already?  I have looked in the
>> cygwin list and found only this topic:
>> http://sources.redhat.com/ml/cygwin/2001-12/msg00201.html
>>
>> In http://www-106.ibm.com/developerworks/library/l-rt1/ there are
>> detailed instructions on how to use the high resolution counter.
>>
>> $ cat timeofday.c
>> #include <stdio.h>
>> #include <unistd.h>
>> #include <sys/time.h>
>>
>> int main() {
>>     struct timeval tp;
>>     long a, b;
>>
>>     /* take two timestamps about 1 ms apart and print the difference */
>>     gettimeofday(&tp, 0);
>>     a = ((unsigned)tp.tv_sec) * 1000000 + ((unsigned)tp.tv_usec);
>>     printf("timestamp (us): %ld\n", a);
>>     usleep(1000);
>>     gettimeofday(&tp, 0);
>>     b = ((unsigned)tp.tv_sec) * 1000000 + ((unsigned)tp.tv_usec);
>>     printf("timestamp (us): %ld (diff) %ld\n", b, b - a);
>>     return 0;
>> }
>>
>>
>> Ralf Habacker
>This is a continuing source of consternation, which may be considered OT 
>for cygwin.  I suspect that Linux for ia32 tends to use one of the 
>low-level cpu tick registers to obtain the microsecond field; I have not 
>examined current source.  I don't know that it is possible to guarantee 
>how well the zero of the microsecond field coincides with the second 
>ticks.  On many ia chips, it is possible to use the rdtsc instruction 
>directly, for timing intervals at sub-microsecond resolution.  A 
>calibration run is required, to measure the tick frequency against the 
>lower-resolution time-of-day clock.  Linux and Windows, of course, do 
>something along this line when booting up.  Any working Windows will 
>report usable results via the QueryPerformance APIs, usually with 
>better than 10-microsecond resolution, and it seems reasonable for 
>cygwin to base its functions directly on Windows APIs.  On many chips 
>the direct use of rdtsc can produce better than 1-microsecond 
>resolution, but then the application takes on the burden of dealing with 
>various odd hardware combinations, rather than expecting the hardware 
>vendor to make Windows work.
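
For reference, a minimal sketch of the QueryPerformance approach Tim
describes is below.  The helper name hr_usec() and the overflow-avoiding
split division are illustrative assumptions, not from the thread or from
cygwin itself; only QueryPerformanceFrequency/QueryPerformanceCounter
are actual Windows APIs.

#include <stdio.h>
#include <windows.h>

/* Sketch: microsecond timestamps built on the QueryPerformance APIs.
 * hr_usec() is a hypothetical helper, not cygwin code. */
static long long hr_usec(void)
{
    static LARGE_INTEGER freq;          /* counter ticks per second */
    LARGE_INTEGER now;

    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);

    /* split into whole-second and fractional parts so the 64-bit
       intermediate products cannot overflow */
    return (now.QuadPart / freq.QuadPart) * 1000000LL
         + (now.QuadPart % freq.QuadPart) * 1000000LL / freq.QuadPart;
}

int main(void)
{
    long long a = hr_usec();
    Sleep(1);                           /* roughly 1 ms, as in the testcase */
    long long b = hr_usec();
    printf("timestamp (us): %lld (diff) %lld\n", b, b - a);
    return 0;
}

Raw rdtsc can beat this, as Tim notes, but only after a calibration run
against the lower-resolution time-of-day clock, and it leaves the
application to cope with the odd hardware combinations he mentions.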

Was there a patch submitted for this for cygwin?

If I missed it, I apologize.  Otherwise, since this issue is of zero
importance to me and, apparently, to anyone else who is working on
cygwin, this will be a source of consternation for some time.

cgf


