The libm version of pow() produces inaccurate results when the base is very close to 1.0 and the exponent has a very large magnitude. Once I get this bug reported, I'll attach example source that demonstrates the problem. The constants used in this example come from the paranoia test program. I'm running this on Fedora Core 3 with the glibc-2.3.4-2.fc3 rpm.

The output of the test program is:

pow(0.9999999999999999, -18014398509482000.0)
  ideal          ~ 7.3890560989307099 (diff = 5.9508e-14)
  paranoia       = 7.3890560989306504
  64-bit log/exp = 7.3890560989306637 (diff = 1.33227e-14)
  64-bit pow     = 7.3890560954893578 (diff = -3.44129e-09)
  53-bit log/exp = 7.3890560989306637 (diff = 1.33227e-14)
  53-bit pow     = 54.5981484040811011 (diff = 47.2091)

The "ideal" value was computed using the arbitrary-precision calculator calc (from ftp://ftp.uu.net/pub/calc). The value expected by paranoia is reported next, and it's pretty close (deviation of about 6e-14). The values produced by log, *, exp are similarly close. The values produced by pow are the problematic ones. For the 64-bit pow case, the deviation from the paranoia value is around 3e-9, at least 10,000 times too much deviation.

I've also included code that sets the x87 rounding mode to 53-bit precision. The log, *, exp code still produces a pretty good value, but the pow call's error gets really out of hand, producing a result that doesn't even have the right order of magnitude. I don't know whether libm is supposed to support the x87 53-bit precision rounding mode, but it would be nice if the values were a bit closer than this. :) Regardless, the error in the default 64-bit rounding mode is what I'm primarily reporting.

I see that the ieee754_pow routine uses an iterative approach to compute the value, because the exponent is an integer. Note that the exponent is an integer only because its magnitude is so large that all of the mantissa bits lie in the integral part of the value.
I suspect that this approach just accumulates too much multiplication error. However, I also suspect it was coded that way for a good reason, presumably because there are cases where it produces less error than the log, *, exp approach. So perhaps there should just be some additional condition that precludes that code path for these values. Maybe a cutoff if the base is very close to 1.0. I'm not sure.

I ran across one other discussion of a bug very similar to this one by some Python maintainers at the following URLs:

https://sourceforge.net/tracker/?func=detail&atid=105470&aid=705231&group_id=5470
http://mail.python.org/pipermail/python-bugs-list/2003-May/017987.html

They mentioned that they intended to report the issue, but perhaps they never did.

Created attachment 395 [details] loss_of_power.c

Bugs with pow() are mentioned as fixed in fdlibm 5.3 <ftp://ftp.netlib.org/fdlibm/changes>, so maybe they will be easy to fix here too, since glibc was based on fdlibm.

The current glibc routines for pow are not based on fdlibm anymore.

Provide a patch.

Suspended until somebody provides a patch.

*** Bug 2579 has been marked as a duplicate of this bug. ***

Still present.

I should add: part of the ABI is that the libm functions are expected to be called with the FPU in 64-bit mode, so the results with it in 53-bit mode are not a bug; it's only the 64-bit mode case where we ought to do better.

Created attachment 6245 [details]
C code with a better algorithm

Hi all,

When the exponent is an integer, a fast exponentiation is used, and when it is negative the base is inverted first (cf. e_pow.S). So in this case the error induced by the inversion is accumulated through the repeated squarings. In the present case base = (2^53-1)*2^(-53), the exponent magnitude is 2^54, and 1/base = (2^52+1)*2^(-52); a classical approximation shows that (1/base)^(2^54) ~ exp(4) while 1/(base^(2^54)) ~ exp(2). So an easy way to avoid accumulating the error induced by the inversion is to invert the result rather than the base. The attached code (in C) shows that clearly. I'm not able to fix the assembly code in e_pow.S, so I'll let experienced people do that.

Thanks,
Calixte
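The idea can be sketched as follows (a hypothetical helper, not the attachment itself nor glibc code): raise x to |n| by binary exponentiation and invert the *result* at the end when n is negative, instead of inverting the base up front.

```c
#include <math.h>

/* Sketch: integer power by binary exponentiation, with the inversion
 * for negative exponents done after the loop, so the rounding error of
 * the one division is not compounded by the repeated squarings. */
static double pow_int(double x, long long n)
{
    unsigned long long k = (n < 0) ? -(unsigned long long)n
                                   : (unsigned long long)n;
    double acc = 1.0;   /* accumulated result */
    double sq  = x;     /* x^(2^i) at iteration i */

    while (k != 0) {
        if (k & 1)
            acc *= sq;
        sq *= sq;
        k >>= 1;
    }
    return (n < 0) ? 1.0 / acc : acc;   /* invert last, not first */
}
```

This only addresses the inversion; the multiplication error of the squarings themselves still accumulates, as discussed below in the thread.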

(In reply to comment #9)
> Created attachment 6245 [details]
> C code with a better algorithm

Note concerning your code: it assumes that char is signed. You should use int for small integers, not char.

> When the exponent is an integer, a fast exponentiation is used [...]

This is not perfect (you still won't get a very accurate result), but certainly better than the current code (I haven't looked at it), which is apparently affected by a cancellation. Note that if you want to compute pow(x,y) where y is not necessarily an integer, you can write y = yi + yf, where yi is an integer and |yf| < 1 (e.g. |yf| <= 0.5 or 0 <= yf < 1). Then mathematically, pow(x,y) = pow(pow(x,yi),yf), and you can use that to get a more accurate result than the current algorithm (I haven't checked in detail...).

For more accurate algorithms concerning the integer powers:

@Article{KLLLM2009a,
  AUTHOR  = {P. Kornerup and Ch. Lauter and V. Lefèvre and N. Louvet and J.-M. Muller},
  TITLE   = {Computing Correctly Rounded Integer Powers in Floating-Point Arithmetic},
  JOURNAL = {{ACM} Transactions on Mathematical Software},
  VOLUME  = {37},
  NUMBER  = {1},
  MONTH   = jan,
  YEAR    = {2010},
  URL     = {http://doi.acm.org/10.1145/1644001.1644005},
  DOI     = {10.1145/1644001.1644005}
}

and for correct rounding of the general pow() function, assuming you already have an accurate algorithm:

@Article{LauLef2009a,
  AUTHOR    = {Ch. Lauter and V. Lefèvre},
  TITLE     = {An efficient rounding boundary test for \texttt{pow(x,y)} in double precision},
  JOURNAL   = {{IEEE} Transactions on Computers},
  PUBLISHER = {{IEEE} Computer Society Press, Los Alamitos, CA},
  VOLUME    = {58},
  NUMBER    = {2},
  PAGES     = {197--207},
  MONTH     = feb,
  YEAR      = {2009},
  KEYWORDS  = {floating-point arithmetic, correct rounding, power function},
  URL       = {http://www.vinc17.net/research/papers/ieeetc2009-powr.pdf},
  DOI       = {10.1109/TC.2008.202}
}

Clearly the pow algorithm should be reimplemented with a better one. What I proposed is just an easy workaround that can be implemented easily and quickly (and it would improve the current results). About the char in the C code: it is just used for its sign (+/- 1), not for its value.

(In reply to comment #11)
> What I proposed is just an easy workaround which can be easily and quickly
> implemented (and it would improve the actual results).

I'm not sure that improving the code for integer exponents only would be a good idea. A consequence would be that the f(x) = pow(a,x) implementation would be completely non-monotonic (with large jumps near the integer inputs), with possible bad side effects on some code.

> About the char in the C code: it is just used for its sign (+/- 1) not for its
> value.

But the sign is part of the value. On platforms where char is unsigned, a char value cannot be negative.

(In reply to comment #10)
> Note that if you want to compute pow(x,y) where y is not necessarily an
> integer, you can write y = yi + yf, where yi is an integer and |yf| < 1
> (e.g. |yf| <= 0.5 or 0 <= yf < 1). Then mathematically,
> pow(x,y) = pow(pow(x,yi),yf), and you can use that to get a more accurate
> result than the current algorithm (I haven't checked in detail...).

This is incorrect. I should have re-read my message. There are two possible simple range reductions that wouldn't require much rewriting, in order to reduce the problem to integer exponents (if one is able to fix that easily).

1. y = yi + yf, where yi is an integer and |yf| < 1. Then mathematically, pow(x,y) = pow(x,yi) * pow(x,yf), where pow(x,yi) can be computed with an iterated exponentiation. Note that yi and yf are computed exactly.

2. y = y0 * 2^e, where 0.5 <= |y0| < 1 for instance (which can be obtained with frexp()). Then mathematically, pow(x,y) = pow(pow(x,y0),2^e), where the ^(2^e) can be computed with repeated squarings.

Now, I fear that if the iterated exponentiation is not done in multiple precision, this won't solve the problem for very large values of n. Indeed, pow(x,n) evaluated with such an algorithm (whatever iteration is used) is approximated with a factor of about[*] (1+epsilon)^(n-1) > 1 + (n-1)*epsilon, where 1+epsilon is the error bound on the multiplication in Higham's notation. And this would be slow.

[*] Actually this is an upper bound, but on average you would probably get the same kind of problems with large values of n.

And Calixte, I've looked at your comment #9 and your code more closely, and I agree that in the case of a negative exponent, the inversion should be done after the exponentiation if such an algorithm is used.

I've done some tests, and it seems that pow(x,y) for large y is inaccurate only when y is an integer. So, if I understand correctly, an iterative algorithm is used for integers y, and a log-exp algorithm (which seems accurate enough on the few values I've tested) is used for other values. So, a possible easy fix would be to always use the log-exp algorithm, possibly except for small integers y. Note: I've updated the bug summary following this observation (large exponent → large integer exponent).

Fixed by:

commit c483f6b4a4277bc209820efc1ae35d976af57b4e
Author: Joseph Myers <joseph@codesourcery.com>
Date:   Mon Apr 9 09:42:05 2012 +0000

    Fix x86 pow inaccuracy for large integer exponents (bug 706).

(A similar issue for x86/x86_64 powl is tracked in bug 13881.)

*** Bug 260998 has been marked as a duplicate of this bug. ***