This is the mail archive of the
libc-alpha@sourceware.org
mailing list for the glibc project.
Re: [PATCH] Reduce the maximum precision for exp and log
- From: "Joseph S. Myers" <joseph at codesourcery dot com>
- To: Siddhesh Poyarekar <siddhesh dot poyarekar at gmail dot com>
- Cc: Siddhesh Poyarekar <siddhesh at redhat dot com>, <libc-alpha at sourceware dot org>
- Date: Mon, 11 Mar 2013 15:04:13 +0000
- Subject: Re: [PATCH] Reduce the maximum precision for exp and log
- References: <20130228160427.GE2358@spoyarek.pnq.redhat.com> <Pine.LNX.4.64.1302281701490.5096@digraph.polyomino.org.uk> <CAAHN_R16buVZqAJhGdS9p32ttayXDe2=20NCqce+6GWL2CNF=w@mail.gmail.com> <CAAHN_R1585nmGWr3qKh9bwd1aE=5CXEgjTN6iGUYCDDgLnq_oA@mail.gmail.com>
On Mon, 11 Mar 2013, Siddhesh Poyarekar wrote:
> I've not been able to convince myself that the findings of the paper
> apply to glibc libm. As I understand it, the results of the paper
> conclude that rounding an approximation of f(x) to N bits is
> equivalent to rounding the exact value at infinite precision. It says
> nothing about the precision of intermediate computations involved in
> computing f(x) however, since that could depend on the method used to
> arrive at the result and could be more than N. Does this make sense
> or is there a known correlation between intermediate computation
> precision and the final precision that I don't know about?
The general result of such exhaustive searches for worst rounding cases is
information of the form:
* For arguments that can be represented in double, the exact result is at
least 2^-N ulps away from any value halfway between two representable
values (or, exact-result/nearest-halfway-value is at least 2^-M away from
1, which can be an easier form of result to reason with). This is what's
relevant for round-to-nearest.
* If directed rounding is considered, something similar applies, where you
look at distances from representable values rather than halfway values.
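As a toy illustration of the first form of result (a sketch with made-up numbers, not glibc code), here is the quantity such a search bounds, computed exactly for a hypothetical 4-bit binary format:

```python
# Toy illustration (not glibc code): in a made-up binary format with
# p = 4 significand bits, measure how far an exact value in [1, 2) lies
# from the nearest value halfway between two adjacent representable
# values.  A worst-case search reports a lower bound on this distance,
# in ulps, over all representable arguments.
from fractions import Fraction

P = 4                            # significand bits of the toy format
ULP = Fraction(1, 2 ** (P - 1))  # ulp spacing in [1, 2)

def halfway_distance_ulps(x):
    """Distance, in ulps, from exact x (in [1, 2)) to the nearest
    halfway value of the toy format."""
    lo = (x // ULP) * ULP        # largest representable value <= x
    halfway = lo + ULP / 2       # halfway value nearest to x
    return abs(x - halfway) / ULP

# An exactly representable value sits 1/2 ulp from the nearest halfway
# value; an "exact result" like 4/3 sits somewhere strictly between.
print(halfway_distance_ulps(Fraction(4, 3)))
```

A worst-case search over a real format is the statement that this distance never drops below some 2^-N for any representable argument.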
To apply this to a particular implementation that computes an approximate
value to some number of bits, you need to show that (approximate-result /
exact-result) is close enough to 1 that approximate-result and
exact-result are on the same side of the same halfway value, given the
bound on how close exact-result / halfway-value can be to 1.
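That argument can be sketched in the same hypothetical 4-bit format (the bound delta is an assumption for illustration, not a number from any real search): if every exact result is known to be at least delta ulps from any halfway value, an approximation whose error is below delta ulps must round to the same value.

```python
# Sketch with assumed numbers (not glibc's): given a worst-case bound
# `delta` (in ulps) on the distance from any exact result to any halfway
# value, an approximation with error strictly below delta ulps lands on
# the same side of the same halfway value, so it rounds identically
# under round-to-nearest.
from fractions import Fraction

P = 4
ULP = Fraction(1, 2 ** (P - 1))   # ulp spacing in [1, 2) for p = 4

def round_nearest(x):
    # round-to-nearest-even in the toy format (x assumed in [1, 2));
    # Python's round() on a Fraction rounds halfway cases to even
    return round(x / ULP) * ULP

delta = Fraction(1, 64)           # assumed worst-case bound, in ulps
exact = Fraction(4, 3)            # stand-in for an infinitely precise result
approx = exact + delta * ULP / 2  # any error strictly below delta ulps
assert round_nearest(approx) == round_nearest(exact)
```

The same reasoning with distances to representable values, rather than halfway values, covers the directed rounding modes.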
The approximate result might, say, sum some number of terms of a series
that converges to the exact value. In that case, for each step in the
computation you may wish to bound how far the approximate result of that
step can be from the exact result of that step, as well as how far the
exact result of summing that number of terms can be from the exact value
of the function given by the infinite sum.
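A minimal sketch of the second part of that split, using exp's Taylor series (exact rational arithmetic removes the per-step rounding error, so only the truncation term remains; a real implementation would have to bound both):

```python
# Sketch: sum the first n terms of exp(x) = sum x^k/k! exactly with
# Fraction, and bound the omitted tail by a geometric series.  Using
# exact arithmetic isolates the truncation error; a real implementation
# would additionally bound the rounding error of each step.
from fractions import Fraction
import math

def exp_series(x, n):
    """Exact rational sum of the first n terms, plus the first omitted
    term x^n/n! (used below to bound the truncation error)."""
    s, term = Fraction(0), Fraction(1)
    for k in range(n):
        s += term
        term = term * x / (k + 1)
    return s, term

x, n = Fraction(1, 2), 10
partial, omitted = exp_series(x, n)
# For 0 < x < n + 1 successive terms shrink by at least a factor of
# x/(n+1), so the exact tail is bounded by a geometric series:
tail_bound = omitted / (1 - x / (n + 1))
assert abs(math.exp(0.5) - float(partial)) <= float(tail_bound)
```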
See, for example, the "Error calculus" or "Relative error analysis"
sections, and the sections following them, in MPFR's algorithms.tex.
--
Joseph S. Myers
joseph@codesourcery.com