This is the mail archive of the gsl-discuss@sources.redhat.com mailing list for the GSL project.



Re: discrete random distributions: test


On Fri, 24 Dec 2004, Jerome BENOIT wrote:

> Thanks for the reply.
> 
> Brian Gough wrote:
> > Jerome BENOIT writes:
> >  > I understood the sampling part and the comparing part.
> >  > What confuses me is the compatibility criteria.
> >  > In particular, why is the seemingly not-dimensionless sigma
> >  > variable (a difference over a square root) compared
> >  > to a dimensionless value (a constant)?
> > 
> > The number of counts in a bin is a dimensionless quantity.
> > 
> 
> I guess that there is a misunderstanding on my side:
> is there somewhere in the (classical) literature
> something which can clarify my understanding of the criteria ?

Most generators of random distributions (uniform or otherwise) are
tested by comparing their output with the (or an) expected result.  As
in: "Suppose I generate some large number of samples from this
distribution, and they are truly random.  Then I >>know<< the actual
distribution of values that I >>should<< get.  If I compare the
distribution of values that I >>did<< get to the one I should have
gotten, I can calculate the probability of getting the one I got.  If
that probability is very, very low, there is a pretty good chance that
my generator is broken."
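
For instance (a minimal sketch, not GSL's own test code -- the bin
counts below are made up just for illustration), comparing observed bin
counts to the expected counts with a chi-squared statistic might look
like this in C, using GSL's gsl_cdf_chisq_Q for the tail probability:

  #include <stdio.h>
  #include <gsl/gsl_cdf.h>

  int main (void)
  {
    /* made-up example: 1000 samples binned into 10 bins from a
       supposedly uniform generator, so we expect 100 per bin */
    double observed[10] = { 96, 104, 91, 110, 99, 87, 105, 102, 108, 98 };
    double expected = 100.0;
    double chisq = 0.0, p;
    int i;

    for (i = 0; i < 10; i++)
      {
        double d = observed[i] - expected;
        chisq += d * d / expected;
      }

    /* probability of a chisq this large or larger, 10 - 1 = 9 dof */
    p = gsl_cdf_chisq_Q (chisq, 9.0);

    printf ("chisq = %g, p = %g\n", chisq, p);
    return 0;
  }

A very small p here would suggest that the generator's output does not
match the distribution it is supposed to produce.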

For example, suppose I am generating heads or tails via a coin flip, or
random bits with a generator.  If I generate a large number of them (N)
and M of them turn out to be heads or 1's, I can compute very exactly
from the binomial distribution what the probability is that I got the
particular N/M pair that I did get.  If that probability is small
(perhaps I generated 1000 samples and 900 turned out to be heads) then
we would doubt the generator, or, if it were a coin, we would doubt
that it was unbiased.  We might begin to suspect that if we
flipped it a second 1000 times, we would be significantly more likely to
get heads than tails.
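
In code that might look like the following (a sketch, using GSL's
binomial CDF from gsl_cdf.h; the 900-heads-out-of-1000 numbers are just
the example above):

  #include <stdio.h>
  #include <gsl/gsl_cdf.h>

  int main (void)
  {
    unsigned int N = 1000;  /* number of coin flips          */
    unsigned int M = 900;   /* number of heads actually seen */
    double p = 0.5;         /* probability of heads if fair  */

    /* P(X >= M) for a fair coin: upper tail of the binomial.
       gsl_cdf_binomial_Q(k,...) gives P(X > k), hence M - 1. */
    double tail = gsl_cdf_binomial_Q (M - 1, p, N);

    printf ("P(at least %u heads in %u flips) = %g\n", M, N, tail);
    return 0;
  }

For 900 heads out of 1000 flips that tail probability is astronomically
small, which is exactly why we would reject the coin (or the generator).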

The value computed is called the "p-value" -- the probability of getting
the distribution you got presuming a truly random distribution
generator.  Of course, p-values are THEMSELVES distributed randomly,
presumably uniformly, between 0 and 1.  Sometimes generators might fail
by getting too CLOSE to the "expected result" -- with a coin that always
flipped exactly 500 heads out of 1000 flips, you'd start to look to see
whether it was generating sequences like HTHTHT that aren't random at all.

So you can do a bit better by performing lots of trials and generating a
distribution of p-values, and comparing that distribution to a uniform
one to obtain ITS p-value.  Usually one uses a Kolmogorov-Smirnov test
to do this (and/or to compare a nonuniform distribution generator to the
expected nonuniform distribution in the first place).  Alternatively,
one can plot a histogram of p-values and compare it to uniform, although
that isn't quantitatively as sound.
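
A rough hand-rolled sketch of that second level (GSL itself doesn't
provide a KS test, so the statistic and the usual asymptotic tail
formula are coded directly; the ten p-values below are made up, and the
asymptotic formula is only crude for samples this small):

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  static int cmp_double (const void *a, const void *b)
  {
    double x = *(const double *) a, y = *(const double *) b;
    return (x > y) - (x < y);
  }

  /* KS statistic D of a sample against the uniform CDF on [0,1] */
  static double ks_uniform (double *p, int n)
  {
    double d = 0.0;
    int i;
    qsort (p, n, sizeof (double), cmp_double);
    for (i = 0; i < n; i++)
      {
        double below = fabs (p[i] - (double) i / n);
        double above = fabs ((double) (i + 1) / n - p[i]);
        if (below > d) d = below;
        if (above > d) d = above;
      }
    return d;
  }

  /* asymptotic Kolmogorov tail probability for lambda = sqrt(n)*D */
  static double ks_pvalue (double d, int n)
  {
    double x = sqrt ((double) n) * d;
    double sum = 0.0;
    int k;
    for (k = 1; k <= 100; k++)
      sum += 2.0 * ((k % 2) ? 1.0 : -1.0) * exp (-2.0 * k * k * x * x);
    return (sum > 1.0) ? 1.0 : (sum < 0.0) ? 0.0 : sum;
  }

  int main (void)
  {
    /* made-up p-values from ten repetitions of some test */
    double p[10] = { 0.12, 0.55, 0.83, 0.31, 0.07,
                     0.96, 0.44, 0.68, 0.21, 0.77 };
    double d = ks_uniform (p, 10);
    printf ("D = %g, KS p ~ %g\n", d, ks_pvalue (d, 10));
    return 0;
  }

A tiny KS p-value here means the per-run p-values are not uniform,
i.e. the generator (or the test itself) is suspect.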

This kind of testing (and more) is described in Knuth's The Art of Computer
Programming, volume II (Seminumerical Algorithms) and is also described
in some detail in the white paper associated with the NIST STS suite for
testing RNG's.  A less detailed description is given in the documents
associated with George Marsaglia's Diehard suite of random number
generator tests.  There are links to both of these sites near the top of
the main project page for an RNG tester I've been writing here:

  http://www.phy.duke.edu/~rgb/General/dieharder.php

If you grab one of the source tarballs from this site, in the docs
directory are both the STS white paper and diehard.txt from diehard (as
well as several other white papers of interest from e.g. FNAL and CERN).

So in a nutshell, most tests are ultimately based on the central limit
theorem.  From theory one gets a mean (expected value) and standard
deviation for some sampled quantity at some sample size.  One generates
a sample (large enough that the CLT has some validity, minimally more
than 30, sometimes much larger).  One compares the difference between
the (computed) value you get and the value you expected to get to the
standard deviation, and uses the error function (for example) or the chisq
distribution to determine the probability of getting what you got.  If
(very) small, maybe bad.  If not, you can either accept it as good
(really, as "not obviously bad") or work harder to resolve a problem.
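
Concretely, that last comparison -- the dimensionless "how many sigma
out are we, and how improbable is that" question you asked about --
might be sketched like this (plain C with libm, reusing the 900-heads
example from above):

  #include <stdio.h>
  #include <math.h>

  int main (void)
  {
    double N = 1000.0;                    /* sample size                    */
    double M = 900.0;                     /* observed number of heads       */
    double mu = 0.5 * N;                  /* expected count for a fair coin */
    double sigma = sqrt (N * 0.5 * 0.5);  /* binomial standard deviation    */

    /* distance from the expectation in units of sigma: a count divided
       by a count, hence dimensionless, and comparable to a constant    */
    double t = fabs (M - mu) / sigma;

    /* two-sided p-value from the normal (CLT) approximation */
    double p = erfc (t / sqrt (2.0));

    printf ("%g sigma out, p = %g\n", t, p);
    return 0;
  }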

Hope this helps...and Merry Christmas!

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@phy.duke.edu


