This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



libm-test.inc: Computing ulps near FP_ZERO.


Team,

For brevity I'm skipping a lot of background information
that a casual reader might need to work through the details
of this post. If you have such questions please raise them
in another thread.

While looking at an LSB test that is failing for glibc,
I had occasion to think about ulp(x): what exactly ULP
means, and whether glibc's math/libm-test.inc even
defines it correctly in check_float_internal.

I came across a *fascinating* paper by Jean-Michel
Muller specifically on the topic of ulp(x). You can 
read it at INRIA's (thankfully open) archive:
http://hal.inria.fr/inria-00070503/en/

The inaccuracies are not the part of the problem that
intrigued me, but rather *how* we measure inaccuracies
in the test suite.

The upstream LSB bug is here:
https://lsbbugs.linuxfoundation.org/show_bug.cgi?id=2912

The related glibc bug is here:
http://sourceware.org/bugzilla/show_bug.cgi?id=14473

The imaginary result of this particular cpow invocation
is near zero, or rather the accurate result should be -0
(negative zero).

Here is where we have problems. In Java with 64-bit
double, StrictMath.ulp(0x0.0p0) yields 0x0.0000000000001p-1022,
i.e. Double.MIN_VALUE, the smallest subnormal value.
That doesn't seem quite right either, but it's better
than what glibc says.

For glibc we use 2^(-1 * number of mantissa bits) as the
spacing to the next floating-point value, which makes
absolutely no sense to me.

Look at math/libm-test.inc line 578:
~~~
         /* ilogb (0) isn't allowed. */
         ulp = diff / FUNC(ldexp) (1.0, - MANT_DIG);
~~~

Yes, ilogb(0) isn't allowed, but that's not an excuse for
using a *huge* value for ulp around zero.

By the strict definition of ulp(x), Java's ulp(0) does
yield the distance to the next representable number
after zero, which is the tiniest subnormal value,
2^(-1074). However, using the tiniest value would likely
be an unfair measure of ULPs, resulting in huge ULP
values when comparing an expected -0 against a
not-quite-zero value, say -0x1.1a62633145c07p-53 or
-1.224647e-16 (the imaginary result of cpow(-1 - 0i, -1 - 0i)
as required in the LSB test case).

Using the tiniest value as ulp(0) would give a result of
2.478713e+307 ULPs. That makes sense: -1.224e-16 is
nowhere near zero, and around zero there are lots of
small subnormals.

Using the smallest normal value, 2^(-1022), as ulp(0)
instead, our cpow result is 5.503848e+291 ULPs away
from zero, a tad better than 2.47e+307.

Either way it doesn't seem like glibc's measure for 
FP_ZERO comparisons is useful or correct.

It would seem to me that the correct answer for glibc would
be to install this patch and update all ULPs for all targets:

diff --git a/math/libm-test.inc b/math/libm-test.inc
index 78d2107..f3492c6 100644
--- a/math/libm-test.inc
+++ b/math/libm-test.inc
@@ -248,6 +248,8 @@ static FLOAT max_error, real_max_error, imag_max_error;
 
 #define MANT_DIG CHOOSE ((LDBL_MANT_DIG-1), (DBL_MANT_DIG-1), (FLT_MANT_DIG-1),  \
                         (LDBL_MANT_DIG-1), (DBL_MANT_DIG-1), (FLT_MANT_DIG-1))
+#define MAX_EXP CHOOSE ((LDBL_MAX_EXP-1), (DBL_MAX_EXP-1), (FLT_MAX_EXP-1),    \
+                       (LDBL_MAX_EXP-1), (DBL_MAX_EXP-1), (FLT_MAX_EXP-1))
 
 static void
 init_max_error (void)
@@ -576,8 +578,15 @@ check_float_internal (const char *test_name, FLOAT computed, FLOAT expected,
       switch (fpclassify (expected))
        {
        case FP_ZERO:
-         /* ilogb (0) isn't allowed. */
-         ulp = diff / FUNC(ldexp) (1.0, - MANT_DIG);
+         /* If we compute the distance to the next FP after zero we get the
+            value of the smallest subnormal number. That is a tiny value,
+            which would mean that any answer not near zero would be a huge
+            number of ULPs away. Instead we take the next FP to be the
+            nearest normal value, i.e. not subnormal. Previously we used
+            2^(-MANT_DIG), which is too large a spacing to be useful. Note
+            that we can't use ilogb (0), since that isn't valid. The
+            nearest normal value is 2^(1 - MAX_EXP).  */
+         ulp = diff / FUNC(ldexp) (1.0, 1 - MAX_EXP);
          break;
        case FP_NORMAL:
          ulp = diff / FUNC(ldexp) (1.0, FUNC(ilogb) (expected) - MANT_DIG);
---

Applying the above patch yields the following failures on x86-64 and i686:

test-float
* cos (pi/2) == 0
* sincos (pi/2, &sin_res, &cos_res) puts 0 in cos_res
* Imaginary part of: cpow (e + 0 i, 0 + 2 * M_PIl i) == 1.0 + 0.0 i
Test suite completed:
  6659 test cases plus 6055 tests for exception flags executed.
  6 errors occurred.

test-double
* cos (pi/2) == 0
* sincos (pi/2, &sin_res, &cos_res) puts 0 in cos_res
* Imaginary part of: cpow (e + 0 i, 0 + 2 * M_PIl i) == 1.0 + 0.0 i
Test suite completed:
  7928 test cases plus 7306 tests for exception flags executed.
  6 errors occurred.

test-ldouble
* cos (pi/2) == 0
* sincos (pi/2, &sin_res, &cos_res) puts 0 in cos_res
* Imaginary part of: cpow (e + 0 i, 0 + 2 * M_PIl i) == 1.0 + 0.0 i
Test suite completed:
  9687 test cases plus 8828 tests for exception flags executed.
  6 errors occurred.

Same for the inline versions.

Just to give you an example:

testing double (inline functions)
Failure: Test: cos (pi/2) == 0
Result:
 is:          6.12323399573676603587e-17   0x1.1a62633145c070000000p-54
 should be:   0.00000000000000000000e+00   0x0.00000000000000000000p+0
 difference:  6.12323399573676603587e-17   0x1.1a62633145c070000000p-54
 ulp       :  275192392932288291382961639461033179209903902386127895622
33145207880600203660646249459781855705384154036134152995679928680665010
03389023264638406893709348826506213454159525660880911233357872236654861
47898565233252638577912718895242447582942748268701023994309760119653369
7628178410064519888896.0000
 max.ulp   :  1.0000
Maximal error of `cos'
 is      : 275192392932288291382961639461033179209903902386127895622331
45207880600203660646249459781855705384154036134152995679928680665010033
89023264638406893709348826506213454159525660880911233357872236654861478
98565233252638577912718895242447582942748268701023994309760119653369762
8178410064519888896 ulp
 accepted: 2 ulp

As expected 6.123e-17 is a *terrible* answer for zero. It should be
a failure until we get down to < 16 ulp.

Unfortunately, with the stricter definition of ulp around
FP_ZERO, that would mean an answer less than or equal to
2^(-1022+4), or 2^(-1018), i.e. <= 3.560118e-307, which
is very small.

It seems that glibc's math/libm-test.inc is not correctly computing
ulp around FP_ZERO, and that this results in a testsuite pass when
it should not.

The first red flag was that the test case that fails for the LSB
was passing in glibc because of our wide ULPs around FP_ZERO.

The second red flag was that we didn't match Java's StrictMath.ulp()
function as a comparative implementation.

Did I miss something or misunderstand some concept?

Cheers,
Carlos.

