This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


Test suite docs


This is my first experience running the test suite, and it has been
quite frustrating.  All I wanted was to run the tests before and
after a change I'm about to suggest on gdb-patches.  Unfortunately, I
ended up wasting my scarce free time figuring out several gory
details.

While I'm no newcomer to Free Software, and I expect to spend some
time figuring things out on my own when it comes to using a new piece
of software, the test suite makes that exceptionally hard, IMHO.
Some of the reasons are out of our control: the tests use several
software packages (DejaGnu, which uses Expect, which uses Tcl), so
answers are potentially scattered across several unrelated packages,
and the fact that none of them has a GNU-standard Info manual (or at
least I couldn't find one on fencepost.gnu.org) doesn't help.  But
that's just one more reason to have good user-level documentation in
GDB to help overcome these difficulties.

Below I try to summarize the problems I needed to solve.  Could
someone experienced in running the test suite please consider writing
up some user-friendly instructions for first-time users?  I volunteer
to do whatever it takes to add them to the appropriate parts of the
GDB documentation, but I need someone ``in the know'' about the test
suite internals to provide the content.

I did find the few words gdb/README has to say about running the
tests, and the brief section in gdbint.texinfo; however, they still
fall short of what I needed to know -- see below.
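
For reference, here's roughly the extent of what I managed to piece
together from those two sources; I'm not at all sure this is the
recommended invocation, so please correct me if it isn't:

    # From the gdb/ directory of a configured build tree:
    make check
    # Run just one test script (or a space-separated list of them):
    make check RUNTESTFLAGS="gdb.base/break.exp"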

TIA

Here are the questions I couldn't find answers to:

  . Where do I find the canonical results for my platform?

    People talk about XFAILs and ``unexpected failures'', but there
    seems to be no place to consult the expected results for all the
    tests and see whether what you get is okay or not.  The test
    suite prints a summary of the results, but how do I find out what
    those ``unexpected successes'' and ``expected failures'' are?
    What do XPASS, XFAIL, UNTESTED, and the other indications
    displayed while the suite runs mean?  (The crude tally of gdb.sum
    I resorted to is sketched below.)
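
    That tally, for what it's worth: assuming each line of gdb.sum
    really does start with a status keyword, something like the
    following counts them, but it hardly replaces real documentation
    of what each status means:

      # Count each kind of result in the summary file; run from the
      # gdb/ build directory after "make check".
      for s in PASS FAIL XPASS XFAIL UNTESTED UNRESOLVED UNSUPPORTED; do
        printf '%s\t%d\n' $s $(grep -c "^$s:" testsuite/gdb.sum)
      done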

  . How do I compare two runs?  If diff'ing testsuite/gdb.sum is the
    right way, that doesn't seem to be documented anywhere, and
    gdb.sum doesn't seem to be preserved across runs, so one must
    copy it by hand to avoid overwriting it.  Am I missing something?
    (My workaround is sketched below.)
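
    For lack of better advice, what I ended up doing was simply to
    squirrel the summary file away between runs; this assumes
    testsuite/gdb.sum is indeed the right file to compare:

      # In the gdb/ build directory:
      make check
      cp testsuite/gdb.sum gdb.sum.before      # save the baseline
      # ... apply the change, rebuild gdb ...
      make check
      diff -u gdb.sum.before testsuite/gdb.sum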

  . How does one disable a specific test?  Suppose some test takes an
    exceptionally long time -- how do I run the suite without it?
    
    gdbint.texinfo tells how to _run_ a specific test or a short list
    of tests, but that method is not practical for _disabling_ a
    small number of tests and running all the rest.  gdbint.texinfo
    also says something about not ``adding expected failures
    lightly'', but keeps silent about how one actually makes a test
    an expected failure.  In general, the language in that section of
    gdbint assumes the reader is already an experienced writer of
    DejaGnu tests, which is not a good assumption for a manual.  (My
    own stab at skipping a test is sketched below.)
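
    That stab, for what it's worth: the closest thing I could find is
    runtest's --ignore option, which RUNTESTFLAGS should pass
    through.  I'm not sure this is the intended way, and the test
    name below is just a placeholder, so take it with a grain of
    salt:

      # Run the whole suite except one slow test script (assuming
      # --ignore does what runtest's --help output suggests):
      make check RUNTESTFLAGS="--ignore some-slow-test.exp"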

  . Where do I look for definitions and docs of specific subroutines
    that the *.exp files use?

    Suppose I've found out about the setup_xfail subroutine, and want
    to look into it and see whether it can be used to disable a test:
    where do I look for its definition or its documentation?  There's
    the test's *.exp file, there's the testsuite/lib/ subdirectory
    (which, btw, is only mentioned in passing in gdbint.texinfo), and
    then there are DejaGnu, Expect, and Tcl.  Could we please have a
    list of files or directories to look in, and a list of
    documentation files to browse?  (The grepping I resorted to is
    sketched below.)
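
    That grepping, for the record: in the absence of such a list, I
    ended up searching for procedure definitions in the trees I knew
    about; /usr/share/dejagnu is just where DejaGnu happens to be
    installed on my machine:

      # From the gdb/ source directory: look for the definition ...
      grep -rn "proc setup_xfail" testsuite/lib /usr/share/dejagnu
      # ... and for callers in the GDB tests themselves:
      grep -rln "setup_xfail" testsuite/gdb.*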

