This is the mail archive of the
cygwin@cygwin.com
mailing list for the Cygwin project.
Re: gettimeofday() does not returns usec resolution
> >> Has anyone addressed this problem already? I have looked in the
> >> cygwin list and found only this topic:
> >http://sources.redhat.com/ml/cygwin/2001-12/msg00201.html
In the attached file there is a "patch" for gettimeofday:
-------
#include <windows.h>
#include <sys/time.h>

/* Not part of the original attachment: an assumed implementation of the
   helper, returning the FILETIME value of the Unix epoch (1970-01-01) so
   FILETIME readings can be converted to a Unix timeval. */
static LARGE_INTEGER getFILETIMEoffset(void)
{
    SYSTEMTIME s = {0};
    FILETIME f;
    LARGE_INTEGER t;

    s.wYear = 1970;
    s.wMonth = 1;
    s.wDay = 1;
    SystemTimeToFileTime(&s, &f);
    t.QuadPart = f.dwHighDateTime;
    t.QuadPart <<= 32;
    t.QuadPart |= f.dwLowDateTime;
    return t;
}

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
    LARGE_INTEGER t;
    FILETIME f;
    double microseconds;
    static LARGE_INTEGER offset;
    static double frequencyToMicroseconds;
    static int initialized = 0;
    static BOOL usePerformanceCounter = 0;

    if (!initialized) {
        LARGE_INTEGER performanceFrequency;
        initialized = 1;
        usePerformanceCounter = QueryPerformanceFrequency(&performanceFrequency);
        if (usePerformanceCounter) {
            QueryPerformanceCounter(&offset);
            frequencyToMicroseconds =
                (double)performanceFrequency.QuadPart / 1000000.;
        } else {
            offset = getFILETIMEoffset();
            frequencyToMicroseconds = 10.;  /* FILETIME counts 100 ns units */
        }
    }
    if (usePerformanceCounter)
        QueryPerformanceCounter(&t);
    else {
        GetSystemTimeAsFileTime(&f);
        t.QuadPart = f.dwHighDateTime;
        t.QuadPart <<= 32;
        t.QuadPart |= f.dwLowDateTime;
    }

    t.QuadPart -= offset.QuadPart;
    microseconds = (double)t.QuadPart / frequencyToMicroseconds;
    t.QuadPart = microseconds;
    tv->tv_sec = t.QuadPart / 1000000;
    tv->tv_usec = t.QuadPart % 1000000;
    return (0);
}
-------
The following Microsoft page (
http://www.vbapi.com/ref/q/queryperformancecounter.html ) says it is
supported on all versions (starting from NT 3.51 and Windows 95) and that,
at worst, it amounts to GetTickCount()*1000.
> >> In http://www-106.ibm.com/developerworks/library/l-rt1/ there are
> >> detailed instructions on how to use the high resolution counter.
> >>
> >> $ cat timeofday.c
> >> #include <sys/time.h>
> >>
> >> int main() {
> >>     struct timeval tp;
> >>     long a, b;
> >>
> >>     gettimeofday(&tp, 0);
> >>     a = ((unsigned)tp.tv_sec)*1000000 + ((unsigned)tp.tv_usec);
> >>     printf("timestamp (us): %d\n", a);
> >>     usleep(1000);
> >>     gettimeofday(&tp, 0);
> >>     b = ((unsigned)tp.tv_sec)*1000000 + ((unsigned)tp.tv_usec);
> >>     printf("timestamp (us): %d (diff) %d\n", b, b-a);
> >> }
I adapted the code to measure exactly the minimum time slice, rather than
"how well a 1 ms delay is observed" (the answer there was either 0 or 15000
us, and in practice always 15000 us; moreover, the printf is kept out of the
timed section here):
-------
#include <stdio.h>
#include <sys/time.h>

#define TV2LONG(t) (((unsigned long)(t).tv_sec) * 1000000 + \
                    ((unsigned long)(t).tv_usec))

int main() {
    struct timeval tp;
    unsigned long a, b;
    unsigned int n;

    for (n = 1; n <= 20; n++) {
        gettimeofday(&tp, 0);
        a = TV2LONG(tp);
        /* Spin until gettimeofday() reports a later value: the
           difference is the smallest observable time step. */
        do {
            gettimeofday(&tp, 0);
        } while ((b = TV2LONG(tp)) <= a);
        printf("try %2u: %lu - %lu = %lu\n", n, a, b, b - a);
    }
    return 0;
}
-------
> Was there a patch submitted for this for cygwin?
Well, sort of, as said in the first paragraph of this reply =)
> If I missed it, I apologize. Otherwise, since this issue is of zero
> importance to me
I am a little of a maniac indeed, and I'd like to have that patch applied
(even if I don't think I need nano performance anytime soon ^_^) if that's
not a problem. I could also volunteer to make the patch but... I think it's
just a matter of cut-n-paste from Tim Prince's message (or from this one).
--
Lapo 'Raist' Luchini
lapo@lapo.it (PGP & X.509 keys available)
http://www.lapo.it (ICQ UIN: 529796)
--
Unsubscribe info: http://cygwin.com/ml/#unsubscribe-simple
Bug reporting: http://cygwin.com/bugs.html
Documentation: http://cygwin.com/docs.html
FAQ: http://cygwin.com/faq/