This is the mail archive of the ecos-patches@sources.redhat.com mailing list for the eCos project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: FP Kernel test added


Nick Garnett wrote:
Jonathan Larmour <jifl@eCosCentric.com> writes:


Nick Garnett wrote:

//==========================================================================
// Test calculation.
//
// Generates an array of random FP values and then repeatedly applies
// a calculation to them and checks that the same result is reached
// each time. The calculation, in the macro CALC, is intended to make
// maximum use of the FPU registers. However, the i386 compiler
// doesn't let this expression get very complex before it starts
// spilling values out to memory.
Surely it's only a good test once we're sure we've spilled some
:-). Perhaps we should have a few more tests to cover non-x86 FP too
(a check that we haven't screwed up and missed off some FP registers,
for example).

Possibly. For that I suspect we would have to go down to using
assembler to explicitly fill the FPU with known values and test them.
It would be difficult to manufacture expressions that cajoled the
compiler into doing that.
What I mean is that it doesn't matter if you spill, as long as the FP register set is full. At least GCC only saves registers when it needs to (using fst) rather than doing an fsave of the entire register set. And I'm sure that in general the register set will be kept as full as possible.

Hmm... I'm not sure that any calculation on a NaN is necessarily
deterministic. E.g. the exponent could be set to 2047 in the result
register while the sign or mantissa is left untouched from the
previous operation in that register, which may make the exact result
value non-deterministic.

I seem to recall having problems like this in the libm testing, and
had to use the isnan() function instead for an explicit test for NaNs.
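(As an aside, a minimal sketch of why the explicit isnan() test is the
reliable check -- nothing here is from fptest.c, it's just illustrating
the standard NaN comparison behaviour:

```c
#include <math.h>

/* A NaN compares unequal to everything, including itself, so an
   equality comparison against an expected value can never match a
   NaN; isnan() is the explicit, portable test. */
static double make_nan(void)
{
    volatile double zero = 0.0;   /* volatile defeats constant folding */
    return zero / zero;           /* 0.0/0.0 produces a quiet NaN */
}

static int nan_by_self_compare(double x)
{
    return x != x;                /* true only for a NaN */
}
```

so isnan(make_nan()) and nan_by_self_compare(make_nan()) both hold,
while nan_by_self_compare() is false for any ordinary value.)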

Well hopefully I will not be generating any NaNs. The values are all
<1, so we won't be getting any overflows. The worst that might happen
is that the result gets progressively rounded to zero.
That was the theory anyway -- it's a long time since I paid any
attention to this sort of stuff.
Doh! A bit more care on my part would have made me pay attention to the cast. I was over-hastily thinking (double)0x7fffffff was going to be the double representation of those four bytes, rather than 2147483647.0 or thereabouts. Yes, I can't see NaNs being generated in that case.
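(For reference, a minimal sketch of the two different operations I
conflated -- a value cast versus reinterpreting the bytes. This assumes
IEEE 754 and a 32-bit float/unsigned int; it isn't from fptest.c:

```c
#include <math.h>
#include <string.h>

/* The cast converts the integer's VALUE: (double)0x7fffffff is
   exactly 2147483647.0, since that fits in a double's mantissa. */
static double cast_value(void)
{
    return (double)0x7fffffff;
}

/* Reinterpreting the same BYTES is a different operation entirely.
   0x7fffffff as a float bit pattern has an all-ones exponent and a
   nonzero mantissa, i.e. it IS a NaN. */
static float reinterpret_bits(void)
{
    unsigned int bits = 0x7fffffffu;
    float f;
    memcpy(&f, &bits, sizeof f);   /* copy the bit pattern, no conversion */
    return f;
}
```

so it's only the bit-pattern reinterpretation that would have produced
a NaN, not the cast actually used.)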

I did notice this though :-)...

/home/jlarmour/sourceware/ecos/ecos/packages/kernel/current/tests/fptest.c: In function `fptest1':
/home/jlarmour/sourceware/ecos/ecos/packages/kernel/current/tests/fptest.c:176: warning: passing arg 1 of `do_test' from incompatible pointer type
/home/jlarmour/sourceware/ecos/ecos/packages/kernel/current/tests/fptest.c: In function `fptest2':
/home/jlarmour/sourceware/ecos/ecos/packages/kernel/current/tests/fptest.c:194: warning: passing arg 1 of `do_test' from incompatible pointer type
/home/jlarmour/sourceware/ecos/ecos/packages/kernel/current/tests/fptest.c: In function `fptest3':
/home/jlarmour/sourceware/ecos/ecos/packages/kernel/current/tests/fptest.c:214: warning: passing arg 1 of `do_test' from incompatible pointer type

A final thing is that even on my speedy PC with hardware FP it takes a long time. What about simulators with software floating point? Or, in fact, most targets with software floating point? I think the default multiplier should probably be 50 for x86 as a special case (since even an embedded x86 device is likely to be a lot slower than an 800MHz desktop), 10 for almost everything else, and 1 for simulators. OK?
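(Something along these lines, say -- the configuration symbols tested
here are illustrative guesses, not the actual CDL/HAL names:

```c
/* Hypothetical sketch of the proposed per-target defaults for the
   test's loop multiplier; the symbols are stand-ins, not the real
   eCos configuration names. */
#if defined(CYGPKG_HAL_I386)
# define FPTEST_MULTIPLIER 50   /* x86 special case: hardware FPU, but
                                   embedded x86 is slower than a desktop */
#elif defined(RUNNING_ON_SIMULATOR)
# define FPTEST_MULTIPLIER 1    /* simulators: keep run time sane */
#else
# define FPTEST_MULTIPLIER 10   /* most targets, possibly soft-float */
#endif
```

)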

If so I'll fix the warnings at the same time.

Jifl
--
eCosCentric http://www.eCosCentric.com/ <info@eCosCentric.com>
--[ "You can complain because roses have thorns, or you ]--
--[ can rejoice because thorns have roses." -Lincoln ]-- Opinions==mine

