This is the mail archive of the mauve-patches@sourceware.org mailing list for the Mauve project.



New Harness, version 1


Okay, the new Harness is checked in; please use it and comment on
features you would like added.

Running tests is now much easier: use the harness script in the
top folder.  ./harness -help will print a help message, and the README
file gives more detailed instructions, but usage is intuitive.  Simply
specify the tests or folders you want to run.

For example, "./harness javax.swing" will run all the tests in
gnu.testlet.javax.swing.

It is important to specify the VM you wish to use to run the tests.  If
you always (or almost always) run the Mauve tests on the same VM, you
should export the environment variable MAUVEVM.  For example, I have
MAUVEVM=jamvm, so the above command would run all the javax.swing tests
on JamVM.  Alternatively, you can use the -vm option (say, if you want to
run against Sun's VM to make sure the tests pass, or just for comparison):

"./harness -vm PATH-TO-EXECUTABLE javax.swing"

This will override the MAUVEVM variable.  If neither of these options is
used, the tests will be run on whatever the system "java" command
resolves to.
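
A typical session might therefore look like this (a sketch; the VM
names and paths are only illustrations, adjust them for your setup):

  export MAUVEVM=jamvm                      # use JamVM by default
  ./harness javax.swing                     # runs on JamVM
  ./harness -vm /usr/bin/java javax.swing   # one-off run on another VM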

Comments are greatly appreciated.

2006-04-05  Anthony Balkissoon  <abalkiss@redhat.com>

	* Harness.java: New file.
	* README: New file (old one moved to README.OldHarness).
	* README.OldHarness: New file, copied from the old README.
	* RunnerProcess.java: New file.
	* harness: New file.

--Tony
Index: Harness.java
===================================================================
RCS file: Harness.java
diff -N Harness.java
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ Harness.java	5 Apr 2006 20:07:20 -0000
@@ -0,0 +1,683 @@
+// Copyright (c) 2006  Red Hat, Inc.
+// Written by Anthony Balkissoon <abalkiss@redhat.com>
+
+// This file is part of Mauve.
+
+// Mauve is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 2, or (at your option)
+// any later version.
+
+// Mauve is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License
+// along with Mauve; see the file COPYING.  If not, write to
+// the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, 
+// Boston, MA 02110-1301 USA.
+
+/*
+ * See the README file for information on how to use this
+ * file and what it is designed to do.
+ */
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.InputStreamReader;
+import java.io.PrintWriter;
+import java.util.Vector;
+
+/**
+ * The Mauve Harness.  This class parses command line input and standard
+ * input for tests to run and runs them in a separate process.  It detects
+ * when that separate process is hung and restarts the process.
+ * @author Anthony Balkissoon abalkiss at redhat dot com
+ *
+ */
+public class Harness
+{    
+  // This is the name of the program we will run to actually run the tests.
+  private static final String RUNNERPROCESSNAME = "RunnerProcess";
+
+  // How long a test may run before it is considered hung
+  private long timeout = 60000; // 60 seconds, can be changed via -timeout flag
+
+  // Whether or not we should recurse into directories when a folder is
+  // specified to be tested
+  private boolean recursion = true;
+
+  // Whether we should run in noisy mode
+  private boolean verbose = false;
+  
+  // Whether we should display one-line summaries for passing tests
+  private boolean showPasses = false;
+
+  // The total number of tests run
+  private int total_tests = 0;
+
+  // The total number of failing tests (not harness.check() calls)
+  private int total_test_fails = 0;
+
+  // All the tests that were specified on the command line rather than
+  // through standard input or an input file
+  private Vector commandLineTests = null;
+
+  // All the tests that were explicitly excluded via the -exclude option
+  private Vector excludeTests = new Vector();
+
+  // A way to speak to the runner process
+  private PrintWriter out = null;
+
+  // A way to listen to the runner process
+  private BufferedReader in = null;
+
+  // The process that will run the tests for us
+  private Process runnerProcess = null;
+
+  // A watcher to determine if runnerProcess is hung
+  private TimeoutWatcher watcher = null;
+
+  // A flag indicating whether or not runnerProcess is hung
+  private boolean testIsHung = false;
+
+  // A lock used for synchronizing access to testIsHung
+  private Object lock = new Object();
+
+  // The arguments used when this Harness was invoked; we use these to
+  // create an appropriate RunnerProcess.
+  String[] harnessArgs = null;
+
+  // The path to the executable for the VM on which the tests will be run
+  String vmCommand = null;
+  
+
+  public static void main(String[] args)
+  {
+    // Create a new Harness, set it up with the args, and run
+    // the appropriate tests.
+    Harness harness = new Harness();
+    harness.setupHarness(args);
+  }
+
+  private void setupHarness(String[] args)
+  {
+    harnessArgs = args;
+    String file = null;
+    for (int i = 0; i < args.length; i++)
+      {        
+        if (args[i].equals("-norecursion"))
+          recursion = false;
+        else if (args[i].equals("-verbose"))
+          verbose = true;
+        else if (args[i].equals("-showpasses"))
+          showPasses = true;
+        else if (args[i].equals("-help") || args[i].equals("--help")
+                 || args[i].equals("-h"))
+          printHelpMessage();
+        else if (args[i].equalsIgnoreCase("-file"))
+          {
+            // User wants to use an input file to specify which tests to run.
+            if (++i >= args.length)
+              throw new RuntimeException("No file path after '-file'.  Exit");
+            file = args[i];
+          }
+        else if (args[i].equals("-exclude"))
+          {
+            // User wants to exclude some tests from the run.
+            if (++i >= args.length)
+              throw new RuntimeException ("No test or directory " +
+                    "given after '-exclude'.  Exit");
+            if (args[i].endsWith(".java"))
+              args[i] = args[i].substring(0, args[i].length() - 5);
+            excludeTests.add(startingFormat(args[i]));
+          }
+        else if (args[i].equals("-timeout"))
+          {
+            // User wants to change the timeout value.
+            if (++i >= args.length)
+              throw new RuntimeException ("No timeout value given " +
+                    "after '-timeout'.  Exit");
+            timeout = Long.parseLong(args[i]);
+          }
+        else if (args[i].charAt(0) == '-')
+          {
+            // One of the options that is ignored here because it is handled
+            // by RunnerProcess instead (e.g. -debug or -exceptions).
+          }
+        else if (args[i] != null)
+          {
+            // This is a command-line (not standard input) test or directory.
+            if (commandLineTests == null)
+              commandLineTests = new Vector();
+            commandLineTests.add(startingFormat(args[i]));
+          }          
+      }
+
+    // Determine the VM on which we will run the tests.
+    vmCommand = System.getProperties().getProperty("java.vm.exec");
+    if (vmCommand == null)
+      vmCommand = "java";
+    
+    // Start the runner process and run all the tests.
+    initProcess(vmCommand, args);
+    runAllTests(file, commandLineTests);
+
+    if (total_tests > 1)
+      System.out.println("\nTEST RESULTS:\n" + total_test_fails + " of "
+                         + total_tests + " tests failed.");
+    else if (total_tests == 0)
+      {
+        // If no tests were run, try to help the user out by suggesting what
+        // the problem might have been.
+        if (recursion == false)
+          {
+            System.out.println ("No tests were run.\nDid you use -norecursion " +
+                    "and specify a folder that had no tests in it?\n" +
+                    "For example, 'jamvm -norecursion javax.swing' will not " +
+                    "run any tests\nbecause no tests are located directly in " +
+                    "the javax.swing folder.\n\nTry removing the -norecursion " +
+                    "option.  Use the -help option for more\ninformation or " +
+                    "read the README file.");
+          }
+        else if (excludeTests != null && excludeTests.size() > 0)
+          {
+            System.out.println ("No tests were run.\nDid you use -exclude " +
+                                "and exclude all tests (or all specified tests)? \n" +
+                                "Use the -help option for more information or " +
+                                "read the README file.");
+          }
+        else
+          {
+            System.out.println ("No tests were run.\nDid you specify a test that " +
+                                "doesn't exist or a folder that contains no tests? \n" +
+                                "Use the -help option for more information or " +
+                                "read the README file.");
+          }          
+      }
+    else if (total_test_fails == 0 && !showPasses)
+      System.out.println ("TEST RESULT: pass");
+    finalize();
+    System.exit(total_test_fails > 0 ? 1 : 0);
+  }  
+  
+  /**
+   * This method takes a String and puts it into a consistent format
+   * so we can deal with all test names in the same way.  It ensures
+   * that tests start with "gnu.testlet" and that slashes ('/', which
+   * are file separators) are replaced with dots (for use in class names).
+   * It also strips the .java or .class extensions if they are present, 
+   * and removes single trailing dots.
+   * @param val the test or directory name to normalize
+   * @return the name in the consistent format described above
+   */
+  private static String startingFormat(String val)
+  {
+    if (val != null)
+      {
+        val = val.replace(File.separatorChar, '.');
+        if (! val.startsWith("gnu.testlet."))
+          val = "gnu.testlet." + val;
+        if (val.endsWith("."))
+          val = val.substring(0, val.length() - 1);
+        if (val.endsWith(".class"))
+          val = val.substring(0, val.length() - 6);
+      }
+    return val;
+  }
+  
+  /**
+   * This method prints a help screen to the console and then exits.
+   */
+  static void printHelpMessage()
+  {
+    System.out.println(
+            "This is the Mauve Harness.  Usage:\n\n" +
+            " ./harness <options> <testcase | folder>\n" +
+            "  If no testcase or folder is specified, all the tests will be run. \n" +
+            "  It is strongly recommended that you use the -vm option or set the \n" +
+            "  environment variable MAUVEVM." +
+            "\n\nExample: './harness -vm jamvm -showpasses javax.swing'\n" +
+            "  will use jamvm (located in your path) to run all the tests in the\n" +
+            "  gnu.testlet.javax.swing folder and will display PASSES\n" +
+            "  as well as FAILS.\n\nOptions:\n" +
+            "  -vm [vmpath]:            specify the vm on which to run the tests\n" +
+            "                           It is strongly recommended that you use this option\n" +
+            "                           or set the MAUVEVM environment variable.  See the \n" +
+            "                           README file for more details.\n" +
+            "  -exclude [test|folder]:  specifies a test or a folder to exclude\n" +
+            "                           from the run\n" +
+            "  -norecursion:            if a folder is specified to be run, don't run\n" +
+            "                           the tests in its subfolders\n" +
+            "  -showpasses:             display passing tests as well as failing ones\n" +
+            "  -exceptions:             print stack traces for uncaught exceptions\n" +
+            "  -timeout [millis]:       specifies a timeout value for the tests\n" +
+            "                           (default is 60000 milliseconds)\n" +
+            "  -verbose:                run in noisy mode, displaying extra information\n" +
+            "  -file [filename]:        specifies a file that contains the names of\n" +
+            "                           tests to be run (one per line)\n" +
+            "  -debug:                  displays some extra information for failing tests that\n" +
+            "                           use the harness.check(Object, Object) method\n" +
+            "  -xmlout [filename]:      specifies a file to use for xml output\n" +
+            "  -help:                   display this help message\n");
+                       
+    System.exit(0);
+  }
+  
+  protected void finalize()
+  {
+    //Clean up 
+    try
+      {
+        in.close();
+        out.close();
+        runnerProcess.destroy();
+      } 
+    catch (IOException e) 
+      {
+        System.err.println("Could not close the interprocess pipes.");
+        System.exit(-1);
+      }
+  }
+  
+  /**
+   * This method sets up our runner process - the process that actually
+   * runs the tests.  This needs to be done once initially and also
+   * every time a test hangs.
+   * @param runtime the VM to use (e.g. "java", "jamvm")
+   * @param args the harness arguments to pass through to the RunnerProcess
+   */
+  private void initProcess(String runtime, String[] args)
+  {
+    String command = runtime + " " + RUNNERPROCESSNAME;
+    
+    for (int i = 0; i < args.length; i++)      
+      command += " " + args[i];
+      
+    try
+      {
+        runnerProcess = Runtime.getRuntime().exec(command);
+        out = new PrintWriter(runnerProcess.getOutputStream(), true);
+        in = new BufferedReader(new InputStreamReader(runnerProcess.getInputStream()));
+        watcher = new TimeoutWatcher(timeout);
+      }
+    catch (IOException e)
+      {
+        System.err.println("Problems invoking RunnerProcess: " + e);
+        finalize();
+        System.exit(1);
+      }
+  }
+
+  /**
+   * This method runs all the tests, both from the command line and from
+   * standard input.  This preserves the legacy method of running tests by
+   * echoing the classname and piping it to the Harness, while also allowing
+   * the more natural "jamvm Harness <TESTNAME>".
+   * @param file the file input of testnames to run
+   * @param commandLineTests the Vector of tests that were specified on the
+   * command line
+   */
+  private void runAllTests(String file, Vector commandLineTests)
+  {   
+    // Run the commandLine tests.  These were assembled into 
+    // <code>commandLineTests</code> in the setupHarness method.
+    if (commandLineTests != null)
+      {
+        for (int i = 0; i < commandLineTests.size(); i++)
+          {
+            String cname = null;
+            cname = (String) commandLineTests.elementAt(i);
+            if (cname == null)
+              break;
+            processTest(cname);
+          }
+      }
+    
+    
+    // Now run the standard input tests.  First we determine if the input is
+    // coming from a file (if the -file option was used) or from stdin.
+    BufferedReader r = null;
+    if (file != null)
+      // The -file option was used, so set up our BufferedReader to use the
+      // input file.
+      try
+        {
+          r = new BufferedReader(new FileReader(file));
+        }
+      catch (FileNotFoundException x)
+        {
+          throw new RuntimeException("Cannot find \"" + file + "\".  Exit");
+        }
+    else
+      {
+        // The -file option was not used, so use stdin instead.
+        r = new BufferedReader(new InputStreamReader(System.in));
+        try
+          {
+            if (! r.ready())
+              {
+                // If no tests were specified to be run, we will run all the 
+                // tests (except those explicitly excluded).
+                if (commandLineTests == null || commandLineTests.size() == 0)
+                  processTest("gnu.testlet.all");
+                return;
+              }
+          }
+        catch (IOException ioe)
+          {
+          }
+      }
+
+    // Now process all the tests specified in the file or from stdin.
+    while (true)
+      {
+        String cname = null;
+        try
+          {
+            cname = r.readLine();
+            if (cname == null)
+              break;
+          }
+        catch (IOException x)
+          {
+            // Nothing.
+          }
+        processTest(startingFormat(cname));
+      }
+  }
+  
+  /**
+   * This method runs a single test in the runner process and increments the
+   * total tests run and total failures, if the test fails.  Prints
+   * PASS and adds to the report, if the appropriate options are enabled.
+   * @param testName the name of the test
+   */
+  private void runTest(String testName)
+  {
+    String outputFromTest;
+    int temp = -1;
+
+    // Start the timeout watcher
+    if (watcher.isAlive())
+      watcher.reset();
+    else
+      watcher.start();
+    
+    // Tell the RunnerProcess to run test with name testName
+    out.println(testName);
+    
+    while (true)
+      {
+        // This while loop polls for output from the test process and 
+        // passes it to System.out unless it is the signal that the 
+        // test finished properly.  Also checks to see if the watcher
+        // thread has declared the test hung and if so ends the process.
+        if (testIsHung)
+          {
+            synchronized (lock)
+            {
+              testIsHung = false;
+            }
+            finalize();
+            initProcess(vmCommand, harnessArgs);
+            break;
+          }
+        try
+        {
+          if (in.ready())
+            {
+              outputFromTest = in.readLine();              
+              if (outputFromTest.startsWith("RunnerProcess:"))
+                {
+                  // This means the test finished properly, now have to see if
+                  // it passed or failed.
+                  if (outputFromTest.endsWith("pass"))
+                    temp = 0;
+                  else if (outputFromTest.endsWith("fail"))
+                    temp = 1;
+                  else if (outputFromTest.endsWith("not-a-test"))
+                    {
+                      // Temporarily decrease the total number of tests,
+                      // because it will be incremented later even 
+                      // though the test was not a real test.
+                      total_tests--;
+                      temp = 0;
+                    }
+                  break;
+                }                
+              else
+                // This means it was just output from the test, like a 
+                // System.out.println within the test itself, we should
+                // pass these on to stdout.
+                System.out.println(outputFromTest);
+            }
+        }
+        catch (IOException e)
+        {
+        }
+      }
+    if (temp == -1)
+      {
+        // This means the watcher thread had to stop the process 
+        // from running.  So this is a fail.
+        if (verbose)
+          System.out.println("  FAIL: timed out. \nTEST FAILED: timeout "
+                             + stripPrefix(testName));
+        else
+          System.out.println("FAIL: " + stripPrefix(testName)
+                             + "\n  Test timed out.  Use the -timeout [millis] " +
+                             "option to change the timeout value.");
+        
+        total_test_fails++;
+      }
+    else
+      total_test_fails += temp;
+    total_tests++;
+    
+    // If the test passed and the user wants to know about passes, tell them.
+    if (showPasses && temp == 0 && !verbose)
+      System.out.println ("PASS: "+stripPrefix(testName));
+  }
+  
+  /**
+   * This method is used to potentially run a single test.  If runAnyway is
+   * false we've reached here as a result of processing a directory and we
+   * should only run tests if they end in ".java" to avoid running tests
+   * multiple times.
+   *  
+   * @param cname the name of the test to run
+   * @param runAnyway true if we should run the test even if it doesn't end
+   * with ".java"
+   * @return -1 if the test was explicitly excluded via the -exclude option,
+   * 0 if cname represents a single test, 1 if cname does not represent a 
+   * single test
+   */  
+  private int processSingleTest(String cname, boolean runAnyway)
+  {
+    if (cname.endsWith(".java"))
+      {
+        runAnyway = true;
+        // FIXME: we need to invoke the compiler here in case the .class
+        // is not present or is out of date.
+        
+        cname = cname.substring(0, cname.length() - 5);
+        String temp = cname.replace('.', File.separatorChar) + ".class";
+        File f = new File(temp);
+        if (!f.exists())
+          return -1;
+      }
+    // Avoid running tests multiple times by quitting if this method was 
+    // called from processDirectory and the file wasn't a .java file.
+    if (!runAnyway)
+      return -1;
+
+    // If the test should be excluded return -1, this is a signal
+    // to processTest that it should quit.
+    if (excludeTests.contains(cname))
+      return -1;
+
+    // Check if cname represents a single test, and if so run it.
+    try
+      {
+        Class.forName(cname);
+      }
+    catch (Throwable t)
+      {
+        // This means it wasn't a single test.
+        return 1;
+      }
+    // If we've reached here, we've found a legitimate test, so run it and then
+    // return 0 to say that everything was fine.
+    runTest(cname);
+    return 0;
+  }
+  
+  /**
+   * This method processes all the tests in a directory.  To avoid running
+   * tests twice, we pass <code>false</code> as the 2nd argument to 
+   * processSingleTest.  This method calls itself if it finds a directory.
+   * @param cname the name of the directory
+   */
+  private void processDirectory(String cname)
+  {
+    cname = cname.replace('.', File.separatorChar);
+    File dir = new File(cname);
+    if (! dir.exists())
+      return;
+    
+    String[] filenames = dir.list();
+    if (filenames == null)
+      return;
+    
+    // Look through all the files and folders in dir, call this method
+    // on any folders and call processSingleTest on any files.  Since we pass
+    // false as our 2nd argument to processSingleTest, we will not run tests
+    // twice (i.e. once for the ".java" file and once for the ".class" file).
+    for (int k = 0; k < filenames.length; k++)
+      {
+        String temp = dir.getPath() + File.separatorChar + filenames[k];
+        File f = new File(temp);
+        // If it's a directory, call this method on it.
+        if (f.isDirectory() && recursion
+            && ! excludeTests.contains(startingFormat(temp)))
+          processDirectory(temp);
+        else
+          processSingleTest(temp.replace(File.separatorChar, '.'), false);
+      }
+  }
+  /**
+   * This method handles the input, whether it is a single test or a folder
+   * and calls runTest on the appropriate .class files.  Will also compile
+   * tests that haven't been compiled or that have been changed since last
+   * being compiled.
+   * @param cname the input file name - may be a directory
+   */
+  private void processTest(String cname)
+  {
+    if (cname.equals("CVS") || cname.endsWith(File.separatorChar + "CVS")
+        || cname.indexOf("$") != - 1 || excludeTests.contains(cname))
+      return;
+
+    if (cname.equals("gnu.testlet.all"))
+      cname = "gnu.testlet";
+
+    // If processSingleTest returns -1 then this test was explicitly 
+    // excluded with the -exclude option, and if it returns 0 then 
+    // the test was successfully run and we should stop here.  Only
+    // if it returns 1 should we try to process cname as a directory.
+    if (processSingleTest(cname, true) == 1)
+      processDirectory(cname);    
+  }
+  
+  /**
+   * Removes the "gnu.testlet." from the start of a String.
+   * @param val the String
+   * @return the String with "gnu.testlet." removed
+   */
+  private static String stripPrefix(String val)
+  {
+    if (val.startsWith("gnu.testlet."))
+      val = val.substring(12);
+    return val;
+  }
+
+  /**
+   * This class is used for our timer to cancel tests that have hung.
+   * @author Anthony Balkissoon abalkiss at redhat dot com
+   *
+   */
+  class TimeoutWatcher implements Runnable
+  {
+    private long millisToWait;
+    private Thread watcherThread;
+    boolean loop = true;
+    
+    /**
+     * Creates a new TimeoutWatcher that will wait for <code>millis</code>
+     * milliseconds once started.
+     * @param millis the number of milliseconds to wait before declaring the 
+     * test as hung
+     */
+    public TimeoutWatcher(long millis)
+    {
+      millisToWait = millis;
+      watcherThread = new Thread(this);
+    }
+    
+    /**
+     * Start the watcher thread, ie start the countdown.     
+     */
+    public void start()
+    {
+      watcherThread.start();      
+    }
+    
+    /**
+     * Return true if the watcher thread is currently counting down.
+     * @return true if the watcher thread is alive
+     */
+    public boolean isAlive()
+    {
+      return watcherThread.isAlive();
+    }
+    
+    /**
+     * Reset the counter and wait another <code>millisToWait</code>
+     * milliseconds before declaring the test as hung.     
+     */
+    public synchronized void reset()
+    {
+      loop = true;
+      notify();
+    }
+    
+    public synchronized void run()
+    {
+      Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
+      while (loop)
+        {
+          // We set loop to false here, it will get reset to true if 
+          // reset() is called from the main Harness thread.
+          loop = false;
+          try
+          {
+            wait(millisToWait);
+          }
+          catch (InterruptedException ie)
+          {}
+        }
+      // The test is hung, set testIsHung to true so the process will be 
+      // destroyed and restarted.
+      synchronized (lock)
+        {
+          testIsHung = true;
+        }
+    }
+  }
+}
Index: README
===================================================================
RCS file: /cvs/mauve/mauve/README,v
retrieving revision 1.20
diff -u -r1.20 README
--- README	4 Jan 2005 20:53:30 -0000	1.20
+++ README	5 Apr 2006 20:07:20 -0000
@@ -7,6 +7,7 @@
 given runtime.
 
 
+*******BUILDING THE TESTSUITE*******
 To build, first run configure.  You can control the configuration with
 some environment variables:
 
@@ -25,210 +26,125 @@
                                 (Use this option if your local firewall
                                  blocks outgoing connections on port 25.)
 
-Note that you will need GNU make to use this testsuite.  If your
-installation provides GNU make under a different name, such as gmake,
-replace `make' with `gmake' in the following.
 
-Use `make check' to run the tests.  You can set the make variable
-`KEYS' to select a subset of the tests.  KEYS is a list of keys that
-must be matched by the files to be tested.  Some values:
 
-     * Any key starting with `java.' is taken to be the name of a
-       class hierarchy.
-       E.g., the key `java.lang' matches only test classes in java.lang.*.
+*******RUNNING THE TESTS*******
+The Mauve tests are run using the harness script in the top directory.  To run
+all the tests using your system default "java" VM, type "./harness".
+
+This is rarely what developers want to do, so below we provide details on how
+to do the following:
+
+  1.  Specify the VM on which to run the tests
+  2.  Select a subset of the tests to run
+  3.  Use an input file to specify the tests to run
+  4.  Change the timeout interval
+  5.  Change the information displayed when tests are run
+  
+
+1.  Specifying the VM on which to run the tests
+
+  This can be done in 2 ways: setting the environment variable MAUVEVM, or 
+  using the -vm [vmpath] option when running the harness script.  The latter
+  overrides the former, so you can set MAUVEVM to be the VM that you usually
+  use, and can use the -vm option when you occasionally want to use another VM.
+
+  If, for example, I wanted to run all the JTable tests using JamVM, and then 
+  run them all on Sun's VM for comparison, I would type:
+
+  ./harness javax.swing.JTable -vm jamvm
+
+  and then
+
+  ./harness javax.swing.JTable -vm java
+
+  if "java" was a system command to run Sun's VM.  If not, you should specify
+  the path to Sun's "java" executable (ex: /usr/lib/java-1.5.0/bin/java).
+
+
+2.  Selecting a subset of the tests to run
+
+  This is a common task for developers: you may be working to fix a bug in a
+  certain area and want to run all the related tests, but not the entire
+  testsuite.  Simply specify the folder containing the tests, and all the tests
+  in that folder (and its subfolders, although this can be turned off) will be 
+  run.
+
+  Example: run all the java.net tests (remember, this uses the system default
+  "java" unless you have the environment variable MAUVEVM set):
+
+  1.  ./harness java.net
+  2.  ./harness gnu.testlet.java.net
+  3.  ./harness gnu/testlet/java/net
+  4.  ./harness gnu/testlet/java/net/
+  *   It makes no difference whether you use "." or "/", whether you include
+      the "gnu.testlet" prefix, or whether you add a trailing "/".
+      
+  You may want to exclude certain tests from a run; this is done using the
+  -exclude option.  Extending our previous example, let's run all the java.net
+  tests except for the java.net.Socket tests.
+      
+  1.  ./harness java.net -exclude java.net.Socket
+  2.  ./harness -exclude java.net.Socket java.net
+
+  The test or folder you want to exclude must follow the -exclude option, but 
+  other than that, the order doesn't matter.  In example #2 above java.net is
+  still taken to be tests you want to run, not tests you want to exclude.  So
+  if you want to exclude more than one folder, you need to use the -exclude
+  flag multiple times.
+  
+  If a folder has several subfolders and you want to exclude them all, you
+  can use the -norecursion option instead of explicitly excluding them 
+  all.  So to run the AbstractDocument tests but not the BranchElement or
+  LeafElement tests, type:
+  
+  ./harness javax.swing.text.AbstractDocument -norecursion
+  
+  Again, the order of the arguments/options after ./harness doesn't matter.
+  
+  
+3.  Using an input file to specify tests to be run
+
+  Simply use the -file [filename] option.  The input file should list, one per
+  line, the tests or folders you want to test.
+
+  Example: ./harness -file myInputFile
+
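+  As a sketch, the contents of myInputFile might look like this (the names
+  are only examples; anything accepted on the command line works here too):
+
+  java.net.Socket
+  javax.swing.JTable
+  java.util
+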
+  The input file specifies only tests to be run, not tests to be excluded; to
+  exclude tests you must do so explicitly, as in Section 2 above.
+
+  Example: ./harness -file myInputFile -exclude java.net.Socket
+  
+
+4.  Changing the timeout interval
+
+  The Harness detects tests that have hung and terminates them.  It does so 
+  by allowing each test to run for 60 seconds and, if it hasn't completed
+  by then, declaring it hung.  If a test simply takes a long time you may
+  want to increase this interval.  If on the other hand, no passing tests take
+  longer than a few seconds and hanging tests are slowing down your test runs,
+  you may want to decrease the interval.  To set the timeout interval use
+  the -timeout [interval] option.  The interval is specified in milliseconds.
+
+  Example: ./harness gnu.java.security -timeout 30000
+  
+  will set the timeout to be 30 seconds instead of 60.
+  
+  
+5.  Changing the information displayed during test runs
+
+  By default the Harness prints only messages for those tests that fail, and 
+  only prints minimal information about uncaught exceptions. The following
+  options affect what is printed:
+  
+  -verbose: prints information about each harness.check() call within
+  the tests, whether it passes or fails.
+  
+  -exceptions: prints full stack traces for uncaught exceptions
+  
+  -showpasses: prints one-line summaries for passing tests
+  
+  -debug:      prints toString() information when a 
+               harness.check(Object, Object) call fails.
+  
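+  For example (a hypothetical run combining several display options, which
+  can be freely mixed on one command line):
+
+  ./harness javax.swing.JTable -showpasses -exceptions
+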
 
-     * Any key starting with `!java.' is used to exclude a class
-       hierarchy.
-       E.g., `!java.lang.reflect' can be used to omit all reflection
-       tests, while still including the rest of java.lang.
-
-     JDK1.0  Run JDK1.0 tests only
-     JDK1.1  Run JDK1.1 tests only
-     JDK1.2  Run JDK1.2 tests only
-
-If an otherwise unrecognized tag QUUX is seen, and the file
-`mauve-QUUX' exists in the mauve source directory, then the contents
-of this file are treated as a list of tags.  For instance, here is the
-current version of `mauve-libjava', which is used by the `libjava'
-library implementation:
-
-	# Config file that tells mauve about the `libjava' tag.
-	JDK1.0
-	JDK1.1
-	!java.beans
-
-Note that anything after a `#' on a line is treated as a comment and
-is stripped.
-
-File inclusion and exclusion commands are processed in order.  So, for
-instance, specifying `!java.lang java.lang.System' will omit all
-java.lang tests except for System.  Every file starts off in the
-`included' state; an explicit exclusion is needed to reject it.  (So,
-e.g., specifying just `java.lang' is insufficient to select only
-java.lang.  You must specify `!java. java.lang'.  Note the `.'!)
-
-If no tags are given, "JDK1.0 JDK1.1" are assumed.
-
-You can use `make recheck' to re-run `make check' with the key list
-you last used.
-
-An alternative way to compiling and running all tests is the batch_run
-script.  This makes it easy to run all test in one batch without worrying
-wheter all tests compile and/or running them crashes or hangs the runtime.
-
-batch_run and the runner helper script aren't integrated with the configure
-setup yet so you will have to edit them by hand to explicitly set the
-COMPILER variable in batch_run and the RUNTIME variable in runner.
-Optionally you can also change the KEYS setting in batch_run if you don't
-want to run all tests.  You can also set the variable NATIVE=true in
-batch_run when you want to use gcj (without -C) in native mode.
-
-When a test cannot be compiled with the given COMPILER batch_run will
-output FAIL: <testname> COMPILE FAILED and go on with the next test.
-If the runner detects a runtime crash or timeout (runner variable
-WAIT=60 seconds) it will output FAIL: <testname> CRASH or TIMEOUT.
-
-If you want to run a single test by hand, you can feed its name
-directly to the test harness.  Make sure to remember the `gnu.testlet'
-prefix for the test cases.  E.g.:
-
-  echo gnu.testlet.java.lang.Character.classify | \
-    java gnu.testlet.SimpleTestHarness -verbose -debug PATH-TO-SRCDIR
-
-The optional `-verbose' command-line argument makes the test suite a
-little noisier about what it is doing.  In particular it will print
-information about caught exceptions in this case.
-
-Some tests may provide even more information about failures to aid
-with debugging a particular run-time system.  These messages are enabled
-by specifying the optional command-line argument `-debug'
-
-If you are only interested in the results of the tests (not only which
-ones FAIL but also which ones PASS). Then you can use the '-resultsonly'
-command-line argument. This will also suppress the printing of a summary
-at the end.
-
-The '-exceptions' command-line argument to causes the display of full
-stack traces when a test fails due to an uncaught exception.
-
-You may use the environment variable TESTFLAGS to provide these
-flags to the invocation of the SimpleTestHarness in the `check' target.
-For instance,
-
-  make check "TESTFLAGS=-verbose -debug"
-
-will run the testsuite in verbose and debug mode.
-
-================================================================
-
-Tags in a test are specified a little differently from tags on the
-command line.
-
-Each tag on the command line is first mapped to a list of actual tags.
-E.g., "JDK1.2" implies all the tags "JDK1.0", "JDK1.1", and "JDK1.2".
-
-If any tag from the expanded list is matched by the test case, then
-the test case is chosen.
-
-However, if one of the tags specified on the command line appears in
-the test with a `!' prefix, then the test is rejected.
-
-Tags must all appear on a single line beginning "// Tags: ".
-
-Many files test functionality that has existed since JDK1.0.  The
-corresponding line in the source:
-
-    // Tags: JDK1.0
-
-Here is how you would tag something that first appeared in JDK1.2:
-
-    // Tags: JDK1.2
-
-Here is how you would tag something that was eliminated in JDK1.2:
-
-    // Tags: JDK1.0 !JDK1.2
-
-The idea behind this scheme is that it is undesirable to update all
-the files whenever we add a tag.  So instead most tags are defined in
-terms of primitive tags, and then we note the exceptions.
-
-When adding a new tag, change the `choose' program to map the
-specified tag onto the implied tags.  There is some code near the top
-that handles this transformation.
-
-Files should only hold tags describing their prerequisites.  In
-particular, limitations of a given library implementation should not
-be mentioned in file tags (because when the library changes, this
-would necessitate global edits).  Instead, put such limitations in a
-`mauve-QUUX' tag expansion file.
-
-================================================================
-
-Some test cases may require extra utility classes to run.  When the
-choose script selects a test case for running, the framework
-identifies the supporting classes through another magic comment in the
-test source.
-
-Support classes must all appear on a single line beginning with the
-string "// Uses: ".  The framework assumes that all utility classes
-used are found in that same package as the test case.
-
-================================================================
-
-Graphical tests that display windows and accept input are marked with
-the GUI tag.  GUI tests are not included in the default list of tags.
-If the GUI tag does appear in KEYS, batch_run will spawn an Xvfb
-process, set DISPLAY to that X server and run the
-graphical/interactive tests there.  By default, metacity is run in
-Xvfb; use the WM environment variable to run a different window
-manager.  If you'd rather run the tests directly on your desktop, set
-SHOW_GUI_TESTS=1.
-
-================================================================
-
-The test harness can also ignore known failures.  Simply create a
-file 'xfails' in the directory where the tests are being run which
-contains 'FAIL:' entries from previous test runs.  The order of the
-lines in the file is immaterial.  Also, the -verbose flag must be
-used.  Totals for both XFAILs and XPASSes will be output at the
-end of the run.
-
-In this way, implementations can track known failures and subsequent
-test runs can thus highlight regressions.
-
-================================================================
-
-There are still a few things to do in the test framework.
-
-
-It would be nice if we could have tests that can specify their
-expected output.  The expected output could be encoded directly in the
-test, e.g.:
-
-      /*{
-      expected output here
-      }*/
-
-The test harness would be reponsible for extracting this from the test
-source and then setting things up so that the checking is done
-correctly.  SimpleTestHarness could do this by setting System.out to
-point to some string buffer for the duration of the test.
-
-
-Change things so that the .o files can be built directly from the
-.java files without any intermediate .class files.  (Use the same
-configuration options that libjava uses.)
-
-
-Some tests probably should be run in their own environment.  This
-could be implemented using a new "group" magic comment, along with
-changes to the `choose' program to generate the list of classes in
-chunks.  Each such chunk would be fed into SimpleTestHarness in a
-separate invocation.
-
-
-It would be interesting to be able to compare test results for
-unspecified things against Sun's implementation.  This could be done
-by adding a new method to the test harness.  The `expected' argument
-would come from Sun's implementation.  Unlike `check', a failure here
-would simply be informative.
Index: README.OldHarness
===================================================================
RCS file: README.OldHarness
diff -N README.OldHarness
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ README.OldHarness	5 Apr 2006 20:07:20 -0000
@@ -0,0 +1,234 @@
+This is Mauve, a free test suite for the Java Class Libraries.
+
+Mauve is intended to test several different varieties of the
+libraries.  For instance, it will contain tests that are specific to a
+particular JDK version.  Tags in the test files help the test
+framework decide which tests should or should not be run against a
+given runtime.
+
+
+To build, first run configure.  You can control the configuration with
+some environment variables:
+
+     JAVA   Name of Java interpreter to use
+     JAVAC  Name of Java (to class) compiler to use
+     GCJ    Name of Java (to object) compiler to use
+
+GCJ is only used when the `--with-gcj' option is given to configure.
+
+The configure script also supports the following `--with' options:
+
+     --with-tmpdir=DIR          Put temporary files in DIR
+                                defaults to `/tmp'
+     --with-mailhost=HOSTNAME   Use mail server at HOSTNAME for socket tests 
+                                defaults to `mx10.gnu.org'
+                                (Use this option if your local firewall
+                                 blocks outgoing connections on port 25.)
+
+Note that you will need GNU make to use this testsuite.  If your
+installation provides GNU make under a different name, such as gmake,
+replace `make' with `gmake' in the following.
+
+Use `make check' to run the tests.  You can set the make variable
+`KEYS' to select a subset of the tests.  KEYS is a list of keys that
+must be matched by the files to be tested.  Some values:
+
+     * Any key starting with `java.' is taken to be the name of a
+       class hierarchy.
+       E.g., the key `java.lang' matches only test classes in java.lang.*.
+
+     * Any key starting with `!java.' is used to exclude a class
+       hierarchy.
+       E.g., `!java.lang.reflect' can be used to omit all reflection
+       tests, while still including the rest of java.lang.
+
+     JDK1.0  Run JDK1.0 tests only
+     JDK1.1  Run JDK1.1 tests only
+     JDK1.2  Run JDK1.2 tests only
+
+If an otherwise unrecognized tag QUUX is seen, and the file
+`mauve-QUUX' exists in the mauve source directory, then the contents
+of this file are treated as a list of tags.  For instance, here is the
+current version of `mauve-libjava', which is used by the `libjava'
+library implementation:
+
+	# Config file that tells mauve about the `libjava' tag.
+	JDK1.0
+	JDK1.1
+	!java.beans
+
+Note that anything after a `#' on a line is treated as a comment and
+is stripped.
+
+File inclusion and exclusion commands are processed in order.  So, for
+instance, specifying `!java.lang java.lang.System' will omit all
+java.lang tests except for System.  Every file starts off in the
+`included' state; an explicit exclusion is needed to reject it.  (So,
+e.g., specifying just `java.lang' is insufficient to select only
+java.lang.  You must specify `!java. java.lang'.  Note the `.'!)
+
+If no tags are given, "JDK1.0 JDK1.1" are assumed.
+
+You can use `make recheck' to re-run `make check' with the key list
+you last used.
+
+An alternative way of compiling and running all tests is the batch_run
+script.  This makes it easy to run all tests in one batch without worrying
+whether all tests compile or whether running them crashes or hangs the runtime.
+
+batch_run and the runner helper script aren't integrated with the configure
+setup yet so you will have to edit them by hand to explicitly set the
+COMPILER variable in batch_run and the RUNTIME variable in runner.
+Optionally you can also change the KEYS setting in batch_run if you don't
+want to run all tests.  You can also set the variable NATIVE=true in
+batch_run when you want to use gcj (without -C) in native mode.
+
+When a test cannot be compiled with the given COMPILER batch_run will
+output FAIL: <testname> COMPILE FAILED and go on with the next test.
+If the runner detects a runtime crash or timeout (runner variable
+WAIT=60 seconds) it will output FAIL: <testname> CRASH or TIMEOUT.
+
+If you want to run a single test by hand, you can feed its name
+directly to the test harness.  Make sure to remember the `gnu.testlet'
+prefix for the test cases.  E.g.:
+
+  echo gnu.testlet.java.lang.Character.classify | \
+    java gnu.testlet.SimpleTestHarness -verbose -debug PATH-TO-SRCDIR
+
+The optional `-verbose' command-line argument makes the test suite a
+little noisier about what it is doing.  In particular it will print
+information about caught exceptions in this case.
+
+Some tests may provide even more information about failures to aid
+with debugging a particular run-time system.  These messages are enabled
+by specifying the optional command-line argument `-debug'.
+
+If you are only interested in the results of the tests (not only which
+ones FAIL but also which ones PASS), you can use the '-resultsonly'
+command-line argument.  This will also suppress the printing of a summary
+at the end.
+
+The '-exceptions' command-line argument causes the display of full
+stack traces when a test fails due to an uncaught exception.
+
+You may use the environment variable TESTFLAGS to provide these
+flags to the invocation of the SimpleTestHarness in the `check' target.
+For instance,
+
+  make check "TESTFLAGS=-verbose -debug"
+
+will run the testsuite in verbose and debug mode.
+
+================================================================
+
+Tags in a test are specified a little differently from tags on the
+command line.
+
+Each tag on the command line is first mapped to a list of actual tags.
+E.g., "JDK1.2" implies all the tags "JDK1.0", "JDK1.1", and "JDK1.2".
+
+If any tag from the expanded list is matched by the test case, then
+the test case is chosen.
+
+However, if one of the tags specified on the command line appears in
+the test with a `!' prefix, then the test is rejected.
+
+Tags must all appear on a single line beginning "// Tags: ".
+
+Many files test functionality that has existed since JDK1.0.  The
+corresponding line in the source:
+
+    // Tags: JDK1.0
+
+Here is how you would tag something that first appeared in JDK1.2:
+
+    // Tags: JDK1.2
+
+Here is how you would tag something that was eliminated in JDK1.2:
+
+    // Tags: JDK1.0 !JDK1.2
+
+The idea behind this scheme is that it is undesirable to update all
+the files whenever we add a tag.  So instead most tags are defined in
+terms of primitive tags, and then we note the exceptions.
+
+When adding a new tag, change the `choose' program to map the
+specified tag onto the implied tags.  There is some code near the top
+that handles this transformation.
+
+Files should only hold tags describing their prerequisites.  In
+particular, limitations of a given library implementation should not
+be mentioned in file tags (because when the library changes, this
+would necessitate global edits).  Instead, put such limitations in a
+`mauve-QUUX' tag expansion file.
+
+================================================================
+
+Some test cases may require extra utility classes to run.  When the
+choose script selects a test case for running, the framework
+identifies the supporting classes through another magic comment in the
+test source.
+
+Support classes must all appear on a single line beginning with the
+string "// Uses: ".  The framework assumes that all utility classes
+used are found in the same package as the test case.
+
+================================================================
+
+Graphical tests that display windows and accept input are marked with
+the GUI tag.  GUI tests are not included in the default list of tags.
+If the GUI tag does appear in KEYS, batch_run will spawn an Xvfb
+process, set DISPLAY to that X server and run the
+graphical/interactive tests there.  By default, metacity is run in
+Xvfb; use the WM environment variable to run a different window
+manager.  If you'd rather run the tests directly on your desktop, set
+SHOW_GUI_TESTS=1.
+
+================================================================
+
+The test harness can also ignore known failures.  Simply create a
+file 'xfails' in the directory where the tests are being run which
+contains 'FAIL:' entries from previous test runs.  The order of the
+lines in the file is immaterial.  Also, the -verbose flag must be
+used.  Totals for both XFAILs and XPASSes will be output at the
+end of the run.
+
+In this way, implementations can track known failures and subsequent
+test runs can thus highlight regressions.
+
+================================================================
+
+There are still a few things to do in the test framework.
+
+
+It would be nice if we could have tests that can specify their
+expected output.  The expected output could be encoded directly in the
+test, e.g.:
+
+      /*{
+      expected output here
+      }*/
+
+The test harness would be responsible for extracting this from the test
+source and then setting things up so that the checking is done
+correctly.  SimpleTestHarness could do this by setting System.out to
+point to some string buffer for the duration of the test.
+
+
+Change things so that the .o files can be built directly from the
+.java files without any intermediate .class files.  (Use the same
+configuration options that libjava uses.)
+
+
+Some tests probably should be run in their own environment.  This
+could be implemented using a new "group" magic comment, along with
+changes to the `choose' program to generate the list of classes in
+chunks.  Each such chunk would be fed into SimpleTestHarness in a
+separate invocation.
+
+
+It would be interesting to be able to compare test results for
+unspecified things against Sun's implementation.  This could be done
+by adding a new method to the test harness.  The `expected' argument
+would come from Sun's implementation.  Unlike `check', a failure here
+would simply be informative.
Index: RunnerProcess.java
===================================================================
RCS file: RunnerProcess.java
diff -N RunnerProcess.java
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ RunnerProcess.java	5 Apr 2006 20:07:20 -0000
@@ -0,0 +1,741 @@
+// Copyright (c) 2006  Red Hat, Inc.
+// Written by Anthony Balkissoon <abalkiss@redhat.com>
+// Adapted from gnu.testlet.SimpleTestHarness written by Tom Tromey.
+// Copyright (c) 2005  Mark J. Wielaard  <mark@klomp.org>
+
+// This file is part of Mauve.
+
+// Mauve is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 2, or (at your option)
+// any later version.
+
+// Mauve is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License
+// along with Mauve; see the file COPYING.  If not, write to
+// the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, 
+// Boston, MA 02110-1301 USA.
+
+// This file is used by Harness.java to run the tests in a separate process
+// so that the process can be killed by the Harness if it is hung.
+
+import gnu.testlet.ResourceNotFoundException;
+import gnu.testlet.TestHarness;
+import gnu.testlet.TestReport;
+import gnu.testlet.TestResult;
+import gnu.testlet.TestSecurityManager;
+import gnu.testlet.Testlet;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileReader;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Reader;
+import java.util.Vector;
+
+public class RunnerProcess
+    extends TestHarness
+{
+  // A description of files that are not tests
+  public static final String NOT_A_TEST_DESCRIPTION = "not-a-test";
+  
+  // Total number of harness.check calls since the last checkpoint
+  private int count = 0;
+
+  // Total number of harness.check fails plus harness.fail calls
+  private int failures = 0;
+
+  // The expected fails
+  private static Vector expected_xfails = new Vector();
+
+  // The number of expected failures that did fail
+  private int xfailures = 0;
+
+  // The number of expected failures that passed (unexpectedly)
+  private int xpasses = 0;
+
+  // The total number of harness.check calls plus harness.fail calls
+  private int total = 0;
+
+  // True if we should run in verbose (noisy) mode
+  private boolean verbose = false;
+
+  // True if failing calls to harness.check(Object, Object) should print the
+  // toString methods of each Object
+  private boolean debug = false;
+
+  // True if stack traces should be printed for uncaught exceptions
+  private boolean exceptions = false;
+
+  // A description of the test
+  private String description;
+
+  // The name of the last checkpoint
+  private String last_check;
+
+  // The TestReport if a report is necessary
+  private TestReport report = null;
+
+  // The result of the current test
+  private TestResult currentResult = null;
+
+  
+  protected RunnerProcess(boolean verbose, boolean debug,
+                              boolean exceptions, TestReport report)
+  {
+    this.verbose = verbose;
+    this.debug = debug;
+    this.exceptions = exceptions;
+    this.report = report;
+
+    try
+      {
+        BufferedReader xfile = new BufferedReader(new FileReader("xfails"));
+        String str;
+        while ((str = xfile.readLine()) != null)
+          {
+            expected_xfails.addElement(str);
+          }
+      }
+    catch (FileNotFoundException ex)
+      {
+        // Nothing.
+      }
+    catch (IOException ex)
+      {
+        // Nothing.
+      }
+  }
+
+  public static void main(String[] args)
+  {    
+    boolean verbose = false;
+    boolean debug = false;
+    boolean exceptions = false;
+    String xmlfile = null;
+    TestReport report = null;
+    
+    // The test that Harness wants us to run.
+    String testname;
+    
+    // This reader is used to get testnames from Harness
+    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
+    
+    // Parse the arguments so we can create an appropriate RunnerProcess
+    // to run the tests.
+    for (int i = 0; i < args.length; i++)
+      {        
+        if (args[i].equals("-verbose"))
+          // User wants to run in verbose mode.
+          verbose = true;
+        else if (args[i].equals("-debug"))
+          // User wants extra debug info.
+          debug = true;
+        else if (args[i].equals("-exceptions"))
+          // User wants stack traces for uncaught exceptions.
+          exceptions = true;
+        else if (args[i].equals("-xmlout"))
+          {
+            // User wants a report.
+            if (++i >= args.length)
+              throw new RuntimeException("No file path after '-xmlout'.");
+            xmlfile = args[i];
+          }
+      }
+    // If the user wants an xml report, create a new TestReport.
+    if (xmlfile != null)
+      {
+        report = new TestReport(System.getProperties());
+      }
+
+    while (true)
+      {
+        // Ask Harness for a test to run, run it, report back to Harness, and
+        // then repeat the cycle.
+        try
+        {
+          testname = in.readLine();
+          RunnerProcess harness = 
+            new RunnerProcess(verbose, debug, exceptions, report);
+          runAndReport(harness, testname, report, xmlfile);
+        }
+        catch (IOException ioe)
+        {          
+          System.out.println("Problems communicating between " +
+                "Harness and RunnerProcess");
+        }
+      }
+  }
+  
+  /**
+   * This method runs a single test.  If an exception is caught, some
+   * information is printed out so the test can be debugged.
+   * @param name the name of the test to run
+   */
+  protected void runtest(String name)
+  {
+    // Try to ensure we start off with a reasonably clean slate.
+    System.gc();
+    System.runFinalization();
+
+    currentResult = new TestResult(name);
+
+    checkPoint(null);
+
+    Testlet t = null;
+    try
+      {
+        Class k = Class.forName(name);
+
+        Object o = k.newInstance();
+        if (! (o instanceof Testlet))
+          {
+            description = NOT_A_TEST_DESCRIPTION;
+            return;
+          }
+
+        t = (Testlet) o;
+      }
+    catch (Throwable ex)
+      {
+        // Maybe the file was marked not-a-test, check that before we report
+        // it as an error
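+        // (Such files carry the literal tag "not-a-test" somewhere in the
+        // source, conventionally as a comment on the first line.)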
+        try
+        {
+          File f = new File(name.replace('.', File.separatorChar) + ".java");
+          BufferedReader r = new BufferedReader(new FileReader(f));
+          // Since some people mistakenly put the not-a-test tag somewhere
+          // other than the first line, we have to scan the whole file.
+          String line = r.readLine();
+          while (line != null)
+            {
+              if (line.contains("not-a-test"))
+                {
+                  description = NOT_A_TEST_DESCRIPTION;
+                  return;
+                }
+              line = r.readLine();
+            }
+        }
+        catch (FileNotFoundException fnfe)
+        {
+          // No source file available; fall through and report the error.
+        }
+        catch (IOException ioe)
+        {
+          // Couldn't read the source file; fall through and report it.
+        }
+        
+        String d = "FAIL: " + stripPrefix(name)
+                   + ": uncaught exception when loading";
+        currentResult.addException(ex, "failed loading class " + name);
+        if (verbose || exceptions)
+          d += ": " + ex.toString();
+
+        if (exceptions)
+          ex.printStackTrace(System.out);
+        debug(ex);
+        if (ex instanceof InstantiationException
+            || ex instanceof IllegalAccessException)
+          debug("Hint: is the code we just loaded a public non-abstract "
+                + "class with a public nullary constructor???");
+        ++failures;
+        ++total;
+      }
+
+    if (t != null)
+      {
+        description = name;
+        try
+          {
+            if (verbose)
+              System.out.println("TEST: " + stripPrefix(name));
+            t.test(this);
+            removeSecurityManager();
+          }
+        catch (Throwable ex)
+          {
+            if (failures == 0 && !verbose)
+              System.out.println ("FAIL: " + stripPrefix(name) + ":");
+            removeSecurityManager();
+            String s = (last_check == null ? "" : " at " + last_check + " ["
+                                                  + (count + 1) + "]");
+            String d = exceptionDetails(ex, name, exceptions);
+            currentResult.addException(ex, "uncaught exception" + s);
+            System.out.println(d);
+            if (exceptions)
+              ex.printStackTrace(System.out);
+            debug(ex);
+            ++failures;
+            ++total;
+          }
+      }
+    if (report != null)
+      report.addTestResult(currentResult);
+  }
+
+  
+  /**
+   * Runs a single test in the given RunnerProcess, writes the xml report
+   * if one was requested, and tells Harness whether the test passed,
+   * failed, or was not really a test.
+   * @param harness the RunnerProcess to use for this test
+   * @param testName the name of the test
+   * @param report the TestReport to generate
+   * @param xmlfile the name of the file for xml output
+   */
+  static void runAndReport(RunnerProcess harness, String testName,
+                      TestReport report, String xmlfile)
+  {
+    // If this call to runtest hangs, Harness will terminate this process.
+    harness.runtest(testName);
+    // If the test wasn't a real test, return and tell Harness so.  Note
+    // that description may still be null if the class failed to load, so
+    // compare with the constant on the left.
+    if (NOT_A_TEST_DESCRIPTION.equals(harness.description))
+      {
+        System.out.println("RunnerProcess:not-a-test");
+        return;
+      }
+    
+    int temp = harness.done();
+    
+    // Print the report if necessary.
+    if (report != null)
+      {
+        File f = new File(xmlfile);
+        try
+          {
+            report.writeXml(f);
+          }
+        catch (IOException e)
+          {
+            throw new Error("Failed to write data to xml file: "
+                            + e.getMessage());
+          }
+      }
+    
+    // Report back to Harness that we've finished properly, whether the test
+    // passed or failed.  Harness will wait for a message starting with 
+    // "RunnerProcess" and if it doesn't receive it after a certain amount of 
+    // time (specified in the timeout variable) it will consider the test hung
+    // and will terminate and restart this Process.
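+    // An illustrative exchange (test name hypothetical):
+    //   Harness -> stdin :  gnu.testlet.some.pkg.SomeTest
+    //   stdout -> Harness:  RunnerProcess:pass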
+    if (temp == 0)
+      System.out.println ("RunnerProcess:pass");
+    else
+      System.out.println("RunnerProcess:fail");
+  }
+  
+  private final String getDescription(StackTraceElement[] st)
+  {
+    // Find the line number of the check() call that failed.
+    int line = -1;
+    for (int i = 0; i < st.length; i++)
+      {
+        if (st[i].getClassName().equals(description))
+          {
+            line = st[i].getLineNumber();
+            break;
+          }
+      }
+    
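+    // The returned string, e.g. "  line 77: checkpoint [1]" (values
+    // hypothetical), is the same form that check2() matches against the
+    // expected failures from the xfails file.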
+    return ("  line " + line + ": " + ((last_check == null) ? "" : last_check) + " ["
+            + (count + 1) + "]");
+  }
+
+  protected int getFailures()
+  {
+    return failures;
+  }
+
+  /**
+   * Removes the "gnu.testlet." from the start of a String.
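+   * For example, "gnu.testlet.some.pkg.SomeTest" becomes
+   * "some.pkg.SomeTest".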
+   * @param val the String
+   * @return the String with "gnu.testlet." removed
+   */
+  private static String stripPrefix(String val)
+  {
+    if (val.startsWith("gnu.testlet."))
+      val = val.substring(12);
+    return val;
+  }
+  
+  /**
+   * A convenience method that sets a checkpoint with the specified name
+   * then prints a message about the forced fail.
+   *
+   * @param name  the checkpoint name.
+   */
+  public void fail(String name)
+  {
+    checkPoint(name);
+    check2(false);
+    System.out.println ("forced fail");
+  }
+  
+  /**
+   * Checks the two objects for equality and prints a message if they are not
+   * equal.
+   *
+   * @param result  the actual result.
+   * @param expected  the expected result.
+   */
+  public void check(Object result, Object expected)
+  {
+    boolean ok = (result == null ? expected == null : result.equals(expected));
+    check2(ok);
+    // This debug message may be misleading if string conversion produces
+    // the same output for unequal objects.
+    if (! ok)
+      {
+        String gotString = result == null ? "null"
+                                         : result.getClass().getName();
+        String expString = expected == null ? "null"
+                                           : expected.getClass().getName();
+        
+        // If the strings are equal but the objects aren't, we have to tell
+        // the user so, otherwise we can just print the strings.
+        if (gotString.equals(expString))
+          {
+            // Since the toString() methods can print long and ugly
+            // information, we only use them if the user really wants to
+            // see it, i.e. if they used the -debug option.
+            if (debug)
+              {
+                gotString = result.toString();
+                expString = expected.toString();
+                System.out.println("\n           got " + gotString
+                                   + "\n\n           but expected " + expString
+                                   + "\n\n");
+                return;
+              }
+            else
+              {
+                System.out.println("objects were not equal.  " +
+                        "Use -debug for more information.");
+                return;
+              }
+          }
+        System.out.println("got " + gotString + " but expected " + expString);
+      }
+  }
+
+  /**
+   * Checks two booleans for equality and prints out a message if they are not
+   * equal.
+   * 
+   * @param result the actual result.
+   * @param expected the expected result.
+   */
+  public void check(boolean result, boolean expected)
+  {
+    boolean ok = (result == expected);
+    check2(ok);
+    if (! ok)
+      System.out.println("got " + result + " but expected " + expected);
+  }
+
+  /**
+   * Checks two ints for equality and prints out a message if they are not
+   * equal.
+   * 
+   * @param result the actual result.
+   * @param expected the expected result.
+   */
+  public void check(int result, int expected)
+  {
+    boolean ok = (result == expected);
+    check2(ok);
+    if (! ok)
+      System.out.println("got " + result + " but expected " + expected);
+  }
+
+  /**
+   * Checks two longs for equality and prints out a message if they are not
+   * equal.
+   * 
+   * @param result the actual result.
+   * @param expected the expected result.
+   */
+  public void check(long result, long expected)
+  {
+    boolean ok = (result == expected);
+    check2(ok);
+    if (! ok)
+      System.out.println("got " + result + " but expected " + expected);
+  }
+
+  /**
+   * Checks two doubles for equality and prints out a message if they are not
+   * equal.
+   * 
+   * @param result the actual result.
+   * @param expected the expected result.
+   */
+  public void check(double result, double expected)
+  {
+    // This triple check overcomes the fact that NaN == NaN is always
+    // false, and that == cannot distinguish 0.0 from -0.0; and all
+    // without relying on java.lang.Double (which may itself be buggy -
+    // else why would we be testing it? ;)
+    // For 0, we switch to infinities, and for NaN, we rely on the
+    // identity in JLS 15.21.1 that NaN != NaN is true.
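+    // Concretely: 1 / 0.0 is +Infinity while 1 / -0.0 is -Infinity, so
+    // the divisions distinguish the two zeros; and since NaN != NaN is
+    // true, (result != result) holds exactly when result is NaN.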
+    boolean ok = (result == expected ? (result != 0)
+                                       || (1 / result == 1 / expected)
+                                    : (result != result)
+                                      && (expected != expected));
+    check2(ok);
+    if (! ok)
+      System.out.println("got " + result + " but expected " + expected);
+  }
+  
+  /**
+   * Checks if <code>result</code> is true.  If not, prints out 
+   * a message.
+   * @param result the boolean to check
+   */
+  public void check(boolean result)
+  {
+    check2(result);
+    if (!result)
+      System.out.println ("boolean passed to check was false");
+  }
+  
+  /**
+   * This method records the result of a check, prints out failures, and
+   * consults the list of expected failures read from the xfails file.
+   * @param result true if the check passed, false if it failed
+   */
+  private void check2(boolean result)
+  {
+    // Build the description of this check; if the check failed we will
+    // print it as part of the explanation.
+    StackTraceElement[] st = new Throwable().getStackTrace();
+    String desc = getDescription(st);
+
+    if (! result)
+      {
+        currentResult.addFail((last_check == null ? "" : last_check)
+                              + " (number " + (count + 1) + ")");
+        if (! expected_xfails.contains(desc))
+          {
+            // If the failure wasn't expected, we need to print it to the
+            // screen.
+            if (failures == 0 && !verbose)
+              System.out.println ("FAIL: " + stripPrefix(description) + ":");
+            if (verbose)
+              System.out.print("  FAIL:");
+            System.out.print(desc + " -- ");
+            ++failures;
+          }
+        else
+          {
+            // The failure was expected; count it, and print it only in
+            // verbose mode.
+            ++xfailures;
+            if (verbose)
+              System.out.print("XFAIL: " + desc + " -- ");
+          }
+      }
+    else
+      {
+        // The test passed.  Only print info if verbose is true
+        currentResult.addPass();
+        if (verbose)
+          {
+            if (expected_xfails.contains(desc))
+              {
+                System.out.println("XPASS: " + desc);
+                ++xpasses;
+              }
+            else
+              System.out.println("  pass:" + desc);
+          }
+      }
+    ++count;
+    ++total;
+  }
+
+  public Reader getResourceReader(String name) throws ResourceNotFoundException
+  {
+    return new BufferedReader(new InputStreamReader(getResourceStream(name)));
+  }
+
+  public InputStream getResourceStream(String name)
+      throws ResourceNotFoundException
+  {
+    // The following code assumes File.separator is a single character.
+    if (File.separator.length() > 1)
+      throw new Error("File.separator length is greater than 1");
+    String realName = name.replace('#', File.separator.charAt(0));
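+    // e.g. a resource named "some#pkg#data.txt" (hypothetical) maps to
+    // the file some/pkg/data.txt under the source directory.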
+    try
+      {
+        return new FileInputStream(getSourceDirectory() + File.separator
+                                   + realName);
+      }
+    catch (FileNotFoundException ex)
+      {
+        throw new ResourceNotFoundException(ex.getLocalizedMessage() + ": "
+                                            + getSourceDirectory()
+                                            + File.separator + realName);
+      }
+  }
+
+  public File getResourceFile(String name) throws ResourceNotFoundException
+  {
+    // The following code assumes File.separator is a single character.
+    if (File.separator.length() > 1)
+      throw new Error("File.separator length is greater than 1");
+    String realName = name.replace('#', File.separator.charAt(0));
+    File f = new File(getSourceDirectory() + File.separator + realName);
+    if (! f.exists())
+      {
+        throw new ResourceNotFoundException("cannot find mauve resource file"
+                                            + ": " + getSourceDirectory()
+                                            + File.separator + realName);
+      }
+    return f;
+  }
+
+  public void checkPoint(String name)
+  {
+    last_check = name;
+    count = 0;
+  }
+
+  public void verbose(String message)
+  {
+    if (verbose)
+      System.out.println(message);
+  }
+
+  public void debug(String message)
+  {
+    debug(message, true);
+  }
+
+  public void debug(String message, boolean newline)
+  {
+    if (debug)
+      {
+        if (newline)
+          System.out.println(message);
+        else
+          System.out.print(message);
+      }
+  }
+
+  public void debug(Throwable ex)
+  {
+    if (debug)
+      ex.printStackTrace(System.out);
+  }
+
+  public void debug(Object[] o, String desc)
+  {
+    debug("Dumping Object Array: " + desc);
+    if (o == null)
+      {
+        debug("null");
+        return;
+      }
+
+    for (int i = 0; i < o.length; i++)
+      {
+        if (o[i] instanceof Object[])
+          debug((Object[]) o[i], desc + " element " + i);
+        else
+          debug("  Element " + i + ": " + o[i]);
+      }
+  }
+
+  private void removeSecurityManager()
+  {
+    SecurityManager m = System.getSecurityManager();
+    if (m instanceof TestSecurityManager)
+      {
+        TestSecurityManager tsm = (TestSecurityManager) m;
+        tsm.setRunChecks(false);
+        System.setSecurityManager(null);
+      }
+  }
+
+  /**
+   * This method returns some information about uncaught exceptions.
+   * Nothing is printed if the test was run with the -exceptions flag since in
+   * that case a full stack trace will be printed.
+   * @param ex the exception
+   * @param name the name of the test
+   * @param exceptions true if a full stack trace will be printed
+   * @return a String containing some information about the uncaught exception
+   */
+  private String exceptionDetails(Throwable ex, String name,
+                                         boolean exceptions)
+  {
+    // If we can't get a stack trace, we return no details.
+    StackTraceElement[] st = ex.getStackTrace();
+    if (st == null || st.length == 0)
+      return "  uncaught exception:";
+
+    // lineOrigin will store the line number in the test method that caused
+    // the exception.
+    int lineOrigin = -1;
+    
+    // This for loop looks for the line within the test method that caused the
+    // exception.
+    for (int i = 0; i < st.length; i++)
+      {
+        if (st[i].getClassName().equals(name)
+            && st[i].getMethodName().equals("test"))
+          {
+            lineOrigin = st[i].getLineNumber();
+            break;
+          }
+      }
+    
+    // sb holds all the information we wish to return.
+    StringBuilder sb = 
+      new StringBuilder("  line " + lineOrigin + ": " + 
+                        (last_check == null ? "" : last_check) +
+                        " [" + (count + 1) + "] -- uncaught exception:");
+    
+    // If a full stack trace will be printed, this method returns no details.
+    if (exceptions)
+      return sb.toString();
+    
+    // Otherwise, add some details onto the buffer before returning.
+    sb.append("\n  " + ex.getClass().getName() + " in ");
+    sb.append(stripPrefix(st[0].getClassName()) + "." + st[0].getMethodName()
+              + " (line " + st[0].getLineNumber() + ")");
+    sb.append("\n  Run tests with -exceptions to print exception " +
+              "stack traces.");
+    return sb.toString();
+  }
+  
+  /**
+   * This method is called from Harness to tidy up.  It prints out appropriate
+   * information and returns 0 if the test passed or 1 if it failed.
+   * @return 0 if the test passed, 1 if it failed
+   */
+  protected int done()
+  {
+    if (failures > 0 && verbose)
+      {
+        System.out.print("TEST FAILED: ");
+        System.out.println(failures + " of " + total + " checks failed "
+                           + stripPrefix(description));
+      }
+    else if (verbose)
+      System.out.println("TEST PASSED (" + total + " checks) "
+                         + stripPrefix(description));
+    if (xpasses > 0)
+      System.out.println(xpasses + " of " + total
+                         + " checks unexpectedly passed");
+    if (xfailures > 0)
+      System.out.println(xfailures + " of " + total
+                         + " checks failed as expected");
+    return failures > 0 ? 1 : 0;
+  }
+}
Index: harness
===================================================================
RCS file: harness
diff -N harness
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ harness	5 Apr 2006 20:07:20 -0000
@@ -0,0 +1,52 @@
+#!/bin/bash
+
+# Copyright (c) 2006 Red Hat.
+# Written by Anthony Balkissoon abalkiss@redhat.com
+
+# This file is part of Mauve.
+
+# Mauve is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+
+# Mauve is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+
+# You should have received a copy of the GNU General Public License
+# along with Mauve; see the file COPYING.  If not, write to
+# the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, 
+# Boston, MA 02110-1301 USA.
+
+## This tests to see if the MAUVEVM environment variable was set
+defaultVM=0
+if test "x$MAUVEVM" = "x"; then
+  MAUVEVM="java"
+  defaultVM=1
+fi
+
+## And this checks to see if the VM was set by a command line option
+helpOption=0
+TESTS=
+while [ $# -gt 0 ]; do
+  case "$1" in 
+   -vm ) defaultVM=0; MAUVEVM=$2 ; shift 2 ;;
+   -help | --help | -h ) helpOption=1; TESTS=${TESTS}" "${1} ; shift ;;
+   * ) TESTS=${TESTS}" "${1} ; shift ;;
+  esac
+done
+
+if [ $defaultVM -eq 1 ]; then
+  if [ $helpOption -eq 0 ]; then
+    echo "Running tests with 'java' - to set the VM use the -vm option"
+    echo "or set the environment variable MAUVEVM"
+    echo ""
+  fi
+fi
+
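+## Run Harness on the chosen VM, passing the VM along in the
+## java.vm.exec property (presumably so Harness can launch the test
+## runner process on the same VM).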
+$MAUVEVM -Djava.vm.exec="$MAUVEVM" Harness $TESTS
+
