This is the mail archive of the libc-alpha mailing list for the glibc project.
Re: [RFC] Use simpler math functions when user selects -ffast-math
- From: Ondřej Bílka <neleai at seznam dot cz>
- To: Joseph Myers <joseph at codesourcery dot com>
- Cc: andrew dot n dot senkevich at gmail dot com, libc-alpha at sourceware dot org
- Date: Wed, 12 Aug 2015 19:25:31 +0200
- Subject: Re: [RFC] Use simpler math functions when user selects -ffast-math
- Authentication-results: sourceware.org; auth=none
- References: <20150811223007 dot GA21338 at domone> <alpine dot DEB dot 2 dot 10 dot 1508121636330 dot 20932 at digraph dot polyomino dot org dot uk>
On Wed, Aug 12, 2015 at 04:45:55PM +0000, Joseph Myers wrote:
> On Wed, 12 Aug 2015, Ondřej Bílka wrote:
> > Hi Andrew,
> > When I checked isfinite performance I noticed that for most math
> > functions wrapper is unnecessary when -ffinite-math-only is set from
> > -ffast-math as they evaluate to constant.
> It's already the case that -ffinite-math-only causes __*_finite versions
> of many libm functions to be used, so bypassing the wrappers. Or do you
> mean something other than the math/w_*.c wrappers?
Ah, I didn't notice those when I read the code. I arrived at this while
thinking about how to eliminate the wrappers. One idea is to inline the
wrapper and teach the gcc dataflow pass to compute upper and lower bounds
of floating-point values; currently it cannot simplify
if (0.0 < x && x < 1.0 && __finite (x))
That would also allow adding a macro that tests with __builtin_constant_p
whether the function arguments imply a finite result, and calls the
corresponding function_finite variant if so.
> > Adding new symbols would allow to fix bug that due accuracy math
> > functions need to be very slow on some inputs. As with -ffast-math you
> > don't have to worry much about accuracy we could finally fix these bugs.
> As far as I know, we have consensus on the documented accuracy goals for
> libm functions, which do *not* require pow, exp, log, sin, cos, tan, asin,
> acos, atan or atan2 to be correctly rounded. Thus, various dbl-64
> performance issues (bugs 5781, 13932, 17211, at least) could probably be
> addressed by removing the very slow multiple-precision cases and allowing
> non-correctly-rounded results for some inputs.
> *But* simply removing the multiple-precision cases isn't appropriate
> without justification that it is safe. That is, each removal needs to
> include an error analysis of the existing non-multiple-precision code that
> shows why, even if you remove the multiple-precision case, the errors
> still won't be too big for any inputs. The error analysis might be fairly
> simple, but it does need to be there.
So first we should document the accuracy of these functions, which could
also allow simplifying the vector variants when they use the same algorithm.