This is the mail archive of the firstname.lastname@example.org mailing list for the Mauve project. See the Mauve home page for more information.
Artur wrote:

> We have to decide if Mauve is going to be a core library test or a VM
> test. I think that it is designed to be the former - I don't think
> that we should use the same framework for VM testing.

Yes. The current intent is to only produce functional test cases for the core libraries.

> Having said that, I would like to present my idea of designing/running
> tests.
>
> We have a master program which scans directories for text files
> containing the tests' main class names. For each test it:
> 1) loads the test class (by either Class.forName() or a classloader -
>    the second would allow us to forget about lengthy package names)
> 2) instantiates it through: test = (Test) class.newInstance()
> 3) creates a Thread(test)
> 4) sets the result harness for the test: test.setHarness(harness)
> 5) runs the thread (thus running the test in a separate thread)
> 6) reads results from the harness

One problem with this proposal is that it depends on working threads. There is at least one flavour of Java that does not include support for threads (JavaCard).

That being said, it is useful to distinguish between the test harness code and the test cases themselves. Obviously, the really interesting part of this project is the suite of test cases. We need them to be runnable on a wide array of Java systems. That's why we need to be careful about the supporting code that goes into the test cases themselves - for instance, it would be hard to defend using threads in a java.lang.Short test _case_ (I realize that this is not what you were suggesting - it's just an example).

The test harness is a different matter. What we've done so far is design and implement an abstract test harness class, and provide a sample derivation of this harness (SimpleTestHarness). Ideally, the sample harness packaged with Mauve would be useful to many people; however, at Cygnus we are implementing our own derivation, which integrates into our existing machinery for testing embedded systems.
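For illustration, the six-step runner loop proposed above might be sketched as follows. This is only a sketch of the proposal, not actual Mauve code; the Harness and Test interfaces, SimpleHarness, and Runner are all hypothetical names.

```java
// Hypothetical result-harness interface, roughly as proposed.
interface Harness {
    void addError(String msg);
    void testing(String description);
    int getErrorCount();
}

// Hypothetical test interface: runnable so it can live in its own thread.
interface Test extends Runnable {
    void setHarness(Harness h);
}

// A trivial harness that counts errors and remembers what is being tested.
class SimpleHarness implements Harness {
    private int errors = 0;
    private String current = "";
    public void addError(String msg) {
        errors++;
        System.err.println("FAIL: " + current + ": " + msg);
    }
    public void testing(String description) { current = description; }
    public int getErrorCount() { return errors; }
}

class Runner {
    static int runTest(String className) throws Exception {
        // 1) load the test class by name
        Class<?> c = Class.forName(className);
        // 2) instantiate it
        Test test = (Test) c.getDeclaredConstructor().newInstance();
        // 3-4) create a harness and hand it to the test
        Harness harness = new SimpleHarness();
        test.setHarness(harness);
        // 5) run the test in a separate thread
        Thread t = new Thread(test);
        t.start();
        t.join();
        // 6) read results back from the harness
        return harness.getErrorCount();
    }
}
```

Note that, as discussed above, the thread in steps 3-5 is exactly the part that would not port to a threadless Java such as JavaCard.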
> The harness would support the following methods:
> addError(String)
> addWarning(String) // for non-critical differences from Sun's JDK
> testing(String)

I'm not sure from your email whether or not you've checked out the existing code. Something like addWarning() would be useful; Sun's specs occasionally leave stuff to the imagination, and it would be nice to provide some way to capture differences.

> The testing(..) method would be called before each part of a test,
> many times in each method. This allows us to know which error caused
> the thread to die. For example:
>
> harness.testing("Hashmap.put with null key&value")
> hashmap.put(null,null);

We currently don't have anything like this. Right now the test harness counts how many tests have been run for a given Testlet. When a test fails, it tells you the count of the failure. This sometimes makes looking up failures in the source a little annoying. However, in an ideal world you wouldn't actually be doing this a lot, and not forcing people to textually describe every single little test makes writing cases much less tedious.

Perhaps a useful trade-off would be to define something like test checkpoints. Each Testlet has a default checkpoint ("beginning of testlet"). You could add checkpoints by calling harness.checkpoint("checkpoint description"). When a test fails, it gives you the count from the last checkpoint. This would make it easy to look up failures in Testlets with large numbers of individual tests.

> One question at the end:
> How exact is Mauve supposed to be? Should it test whether a given
> class works (more or less), or should it test EVERY aspect of the
> visible API? Said another way, are we designing a nice regression
> test or a full-blown JCK?

The latter would be preferable, I think.

AG
--
Anthony Green
Cygnus Solutions
Sunnyvale, California
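The checkpoint trade-off described in this message could be sketched as below. This is a hypothetical illustration of the idea, not the actual Mauve API; CheckpointHarness, checkpoint(), and check() are invented names.

```java
// Hypothetical harness sketch: failures are reported as a count
// from the most recent named checkpoint, so they are easy to find
// in the Testlet source without describing every single check.
class CheckpointHarness {
    private String checkpoint = "beginning of testlet"; // default checkpoint
    private int count = 0;      // checks run since the last checkpoint
    private int failures = 0;

    // Mark a named point in the testlet and reset the running count.
    void checkpoint(String description) {
        checkpoint = description;
        count = 0;
    }

    // Record one boolean check; on failure, report the last checkpoint
    // plus how many checks have run since it.
    void check(boolean ok) {
        count++;
        if (!ok) {
            failures++;
            System.err.println("FAIL: " + checkpoint + " (check " + count + ")");
        }
    }

    int getFailures() { return failures; }
}
```

A Testlet would then only call checkpoint() occasionally, e.g. `harness.checkpoint("null keys")` before a group of related checks, rather than labelling each check individually.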