
Re: specfunc error analysis - generated and propagated errors.


Hi,

On Wed, Sep 08, 2004 at 01:21:21PM -0600, Gerard Jungman was heard to remark:
> On Wed, 2004-09-08 at 11:17, Jonathan G. Underwood wrote:
> > I wonder if a cleaner design might be to move to a backward error 
> > analysis philosophy, whereby the functions are evaluated as they are 
> > currently, but without the forward error analysis, and the test suite is 
> > used to characterise/establish the accuracy of the implementation by 
> > comparing known values of the functions to computed values.
> 
> I like this idea. This would work well for the functions of
> a single variable. On the other hand, those are often the

Backward error analysis might be just the right thing for the 
"gold-plated" algorithms that Gerard referred to.  However,
it makes me nervous in other cases, e.g. the incomplete gamma
mentioned above, where the errors can swing from tiny to huge
for small changes in the input parameters.

Besides, I'm guessing that *if* a good backward analysis can 
be done for an algorithm, then the algorithm can probably be fixed 
to provide the 'gold-plated' answers.   :)

Besides, "comparing known values of the functions to computed values" 
is a sure-fire recipie for disaster, since users often explore paramter
values that the authors didn't test for ... I still remember being burned
by the NAG libraries, on bessel functions of high order, for which the 
the output of the NAG algos resembled swiss cheese... 

> > returned by the _e functions. It would make for a much cleaner design of 
> > the specfun code, and also remove the runtime computational overhead of 
> > the (most times unused) forward error analysis.

I don't get it ... there is a 'simple' technical fix for this, 
using some C pre-processor macro magic.  You can have
your cake and eat it too: compile functions with and without
error estimates from the same source code base, and then use
macro magic to define two different subroutine names, one for
each version.


The first part is 'easy': define macros that will conditionally 
compile the error estimating code.  For example:

#ifdef WITH_ERR_EST
#define DO_EST(x) x
#else 
#define DO_EST(x) 
#endif

double gsl_some_func (double a, double b)
{
   double c = a*b;
   DO_EST (double err_est;)
   DO_EST (err_est = 1.0e-16*(1/a + 1/b);)
   return c;
}
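
(Just to spell it out: with WITH_ERR_EST defined, DO_EST(x) keeps its
argument, so the two DO_EST lines above compile as

   double err_est;
   err_est = 1.0e-16*(1/a + 1/b);

and without it they expand to nothing, so only the real work is left.
The function and the error formula here are made-up placeholders,
of course.)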


The harder part is to tweak the subroutine names:

#ifdef WITH_ERR_EST
#define SUBR_DECL(sub_name)  int sub_name##_e (double a, double b, gsl_sf_result * result)
#else 
#define SUBR_DECL(sub_name)  double sub_name (double a, double b)
#endif

and then:

SUBR_DECL(gsl_multiply)
{
   double c = a*b;
   DO_EST (double err_est;)
   DO_EST (err_est = 1.0e-16*(1/a + 1/b);)

   RETVAL;
}

where RETVAL is conditionally defined to return either the value or the
result code ... 
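
For example (just a sketch, assuming each function leaves its answer
in a local 'c', and its estimate in 'err_est' in the error-estimating
build, with gsl_errno.h and gsl_sf_result.h in scope):

#ifdef WITH_ERR_EST
/* fill in the gsl_sf_result and return a status code */
#define RETVAL  do { result->val = c;  result->err = err_est; \
                     return GSL_SUCCESS; } while (0)
#else 
/* plain version: just return the value */
#define RETVAL  return c
#endif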

You can then build both versions and put both in the same library,
without even modifying the makefiles: for example, if the above
example were in the file "gsl_mult.c", then you could have a 
"gsl_really_mult.c" which looked like:

#define WITH_ERR_EST
#include "gsl_mult.c"
#undef WITH_ERR_EST
#include "gsl_mult.c"

and bingo, you've got one .o with both routines in it ... 
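
One caveat: the second #include would try to re-define DO_EST,
SUBR_DECL and RETVAL with different bodies, and the preprocessor will
complain about that.  So either gsl_mult.c should #undef its own helper
macros at the end, or gsl_really_mult.c should clear them between the
two includes, something like:

#define WITH_ERR_EST
#include "gsl_mult.c"

#undef WITH_ERR_EST
#undef DO_EST
#undef SUBR_DECL
#undef RETVAL
#include "gsl_mult.c"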


--linas

