This is the mail archive of the mailing list for the Mauve project. See the Mauve home page for more information.


What are we testing?

There were questions about JNI a few messages ago. I would like to share
my thoughts on the subject.

We have to decide whether Mauve is going to be a core library test or a
VM test. I think it is designed to be the former - I don't think we
should use the same framework for VM testing.

VM tests require a lot more work - you need to write C functions, load
native libraries, and the VM can crash for some magical reason.

If somebody wants to include VM testing, please do it in a separate
tree, using different makefiles, etc. Mauve should be designed to work
in a STABLE VM with a not-perfect core library, not to test whether some
opcodes are undefined, or how badly the VM will crash with
env->FindClass(env,

Having said that, I would like to present my idea for designing/running
tests.
We have a master program, which scans directories for text files
containing the tests' main class names. For each test it:
1) loads the test class (by either Class.forName() or a class loader -
the second would allow us to forget about lengthy package names)
2) instantiates it through: test = (Test) cls.newInstance()
3) creates a Thread(test)
4) sets the result harness for the test: test.setHarness(harness)
5) runs the thread (thus running the test in a separate thread)
6) reads the results from the harness
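In Java-ish code, steps 1-6 could look something like the sketch below.
The Test interface, the Harness class, and all method names here are my
own guesses at the design, not an existing Mauve API:

```java
import java.util.ArrayList;
import java.util.List;

// Assumed shape of a test: runnable, with a harness wired in beforehand.
interface Test extends Runnable {
    void setHarness(Harness h);
}

// Minimal result collector; just accumulates warnings and errors.
class Harness {
    final List<String> warnings = new ArrayList<>();
    final List<String> errors = new ArrayList<>();

    void addWarning(String w) { warnings.add(w); } // non-critical JDK difference
    void addError(String e)   { errors.add(e); }   // real failure, test continues
}

class Master {
    // 1) + 2): resolve the class name and instantiate it, then hand off.
    static Harness runOne(String className) throws Exception {
        return runOne((Test) Class.forName(className)
                                  .getDeclaredConstructor().newInstance());
    }

    // 3) - 6): run the test in its own thread and collect the results.
    static Harness runOne(Test test) throws InterruptedException {
        Harness harness = new Harness();
        test.setHarness(harness);        // 4) wire up result reporting
        Thread t = new Thread(test);     // 3) one thread per test
        t.start();                       // 5) run it...
        t.join();                        //    ...and wait for it to finish
        return harness;                  // 6) warnings/errors now readable
    }
}
```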

The harness would support the following methods:
addWarning(String) // for non-critical differences from Sun's JDK

The testing(..) method would be called before each part of a test, many
times in each method. This lets us know which check caused the thread to
die. For example:
For example

harness.testing("HashMap.put with null key&value");

HashMap should not throw a NullPointerException here, but it could.
Instead of an unknown error, we know exactly at which point the test
failed - it is given by the harness variable set by the testing method.
Such exceptions can be caught gracefully by the enclosing scope of the
test, and the test can end itself in an orderly fashion; but even if
some uncatchable error is thrown, the master testing machine can time
out after some period of time - this will also allow testing for
deadlocks and endless loops (and too-slow VMs :)
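The timeout idea falls out of Thread.join(millis) almost for free.
Here is a tiny self-contained demonstration (the label variable stands
in for whatever the harness records; all names are illustrative only):

```java
// A test that hangs forever; the "master" times out and reports
// the last check that was announced before the hang.
class TimeoutDemo {
    static volatile String current = "<not started>";

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            current = "HashMap.put with null key&value"; // harness.testing(...)
            while (true) { Thread.onSpinWait(); }        // simulated deadlock
        });
        t.setDaemon(true); // let the JVM exit even though the test never does
        t.start();
        t.join(200);       // master's patience: 200 ms
        if (t.isAlive())
            System.out.println("FAIL (timeout) in: " + current);
    }
}
```

The same join(timeout) would catch genuine deadlocks, endless loops,
and VMs that are simply too slow.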

This is for the scenario where some unexpected error happens. In most
cases the probable error can be safely checked, hence the addError
method - it allows recording many errors, so the test does not stop at
the first one.
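A test body in this style might read like the sketch below - each check
announces itself via testing(), expected failures go through addError(),
and the test keeps going. The Harness here is again a stand-in for
whatever API Mauve settles on:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class MultiErrorDemo {
    // Stand-in harness: remembers the current check, collects all errors.
    static class Harness {
        final List<String> errors = new ArrayList<>();
        String current;
        void testing(String what) { current = what; }
        void addError(String e)   { errors.add(current + ": " + e); }
    }

    static void runChecks(Harness h) {
        Map<String, String> m = new HashMap<>();

        h.testing("HashMap.put with null key&value");
        try {
            m.put(null, null);                 // must not throw per the spec
        } catch (NullPointerException e) {
            h.addError("unexpected NullPointerException");
        }

        h.testing("HashMap.get with null key"); // runs even if put failed
        if (m.get(null) != null)
            h.addError("expected the null value we stored");

        h.testing("HashMap.size");
        if (m.size() != 1)
            h.addError("expected size 1, got " + m.size());
    }

    public static void main(String[] args) {
        Harness h = new Harness();
        runChecks(h);
        System.out.println(h.errors.isEmpty() ? "PASS" : "FAIL " + h.errors);
    }
}
```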

One question at the end:
How exact is Mauve supposed to be? Should it test whether a given class
works (more or less), or should it test EVERY aspect of the visible API?
Put another way, are we designing a nice regression test, or a
full-blown JCK?