This is the mail archive of the frysk@sourceware.org mailing list for the frysk project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: test case, exceptions, and tearDown


I have not raised these issues in any JUnit thread, nor do I believe
there is much point in doing so.  The JUnit developers have stated their
opinion rather strongly on these issues, and I honestly do not believe
they are open to changing their minds.  I have had numerous
conversations within organizations about this topic, and the feelings
on both sides of the issue seem to be rather strong.

Sorry that there has not been a more public discussion about this, at
least none that I am aware of.

If it is acceptable to lose the distinction between a test failing due
to external influences (or being non-deterministic) and a recognized
exception triggering a failure, the suggested use of JUnit is of
course sufficient.  Given our situation (needing to deal with more
system-specific details), and, as you say, not being able to depend on a
write-once-run-anywhere situation, I don't think we can simply stick to
the overall recommendations of the JUnit team.  After all, we are *not*
working within their ideal environment.
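To make that distinction concrete, here is a minimal sketch in plain Java
(hypothetical names, not Frysk or JUnit code): a recognized exception is
converted into an explicit assertion failure, while anything unexpected
propagates unchanged and would be reported by the framework as an ERROR.

```java
// Hypothetical sketch of the pattern under discussion.  fail() and
// parseConfig() are illustrative stand-ins, not real Frysk/JUnit code.
public class KnownFailureSketch {

    static void fail(String message) {            // stand-in for JUnit's fail()
        throw new AssertionError(message);
    }

    static void testKnownFailureMode() {
        try {
            parseConfig("bad input");             // hypothetical operation under test
        } catch (IllegalArgumentException e) {
            // A recognized, valid failure mode: report it as a FAIL,
            // because the test ran but did not satisfy its assertions.
            fail("config rejected: " + e.getMessage());
        }
        // Any other exception bubbles up unchanged and would be
        // reported by the framework as an ERROR, not a FAIL.
    }

    static void parseConfig(String s) {
        if (s.startsWith("bad"))
            throw new IllegalArgumentException("malformed: " + s);
    }

    public static void main(String[] args) {
        try {
            testKnownFailureMode();
        } catch (AssertionError e) {
            System.out.println("FAIL: " + e.getMessage());
            // prints: FAIL: config rejected: malformed: bad input
        }
    }
}
```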

Anyway, given that there are clearly two positions on this topic,
majority opinion ought to drive the final decision.  If we do go with
folding ERROR and FAIL together, though, I want to suggest that failing
tests be given extra scrutiny, since such a failure may reflect a
genuine problem with the test itself rather than merely assertions that
did not pass.
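For illustration, a result classifier in the spirit of the POSIX/dejagnu
categories might look like the following minimal sketch (hypothetical,
not Frysk's actual runner): an assertion failure yields FAIL, while an
unexpected exception yields UNRESOLVED rather than being folded in.

```java
// Hypothetical sketch: classify a test body's outcome using the
// POSIX-style categories instead of a single catch-all bucket.
public class ResultSketch {

    enum Result { PASS, FAIL, UNRESOLVED }

    static Result run(Runnable testBody) {
        try {
            testBody.run();
            return Result.PASS;
        } catch (AssertionError e) {
            // An assertion did not hold: the test ran and FAILed.
            return Result.FAIL;
        } catch (RuntimeException e) {
            // An unexpected exception interfered with execution, so no
            // correct PASS/FAIL determination was possible: UNRESOLVED.
            return Result.UNRESOLVED;
        }
    }

    public static void main(String[] args) {
        System.out.println(run(() -> {}));                                      // PASS
        System.out.println(run(() -> { throw new AssertionError("bad"); }));    // FAIL
        System.out.println(run(() -> { throw new IllegalStateException(); }));  // UNRESOLVED
    }
}
```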

	Cheers,
	Kris

On Thu, Jul 05, 2007 at 11:51:49AM -0400, Andrew Cagney wrote:
> Kris,
> 
> You raise several interesting points; and as you note there are different 
> schools of thought.  Can you point me at the JUnit thread where you 
> raised this issue?  I'd be interested in reading the discussion.
> 
> JUnit gives us a common framework, and a set of conventions, that is 
> proving more than sufficient to our needs.  The only real stumbling 
> block we've encountered is that JUnit holds to Java's underlying 
> assumption that your tests can be written once and run everywhere.  
> Frysk, being system dependent, doesn't have that luxury and so needs 
> ways to identify tests that can't work or have problems in specific 
> circumstances; and for that we've kludged up a work-around that draws on 
> POSIX and its definition of UNSUPPORTED and UNRESOLVED.
> 
> Given the success we're seeing with developers adding JUnit tests, I 
> consider this more than sufficient.
> 
> Andrew
> 
> 
> Kris Van Hees wrote:
> >While I agree in principle with the things you quote below, and in
> >general with most of the junit FAQ (as I have for many years), I do also
> >believe that we shouldn't necessarily blindly follow whatever is stated
> >in the FAQ.  I mean that in very generic terms: the purpose of having a
> >testsuite is to provide us with clear, consistent, and above all useful
> >results.
> >
> >One thing I have always had an issue with concerning junit is the
> >blending of ERROR and FAIL in junit 4.  I believe the distinction
> >matters between a failed test (assertions failed, or an expected
> >failure mode (e.g. an exception) occurred) and a failed test execution
> >(an unexpected event caused the test to not complete as expected,
> >i.e. something made it impossible to make a correct PASS/FAIL
> >determination).  In that sense, I do not think it wise to always
> >follow the strict rule of just letting exceptions bubble up to the
> >framework.  To me, the most important word in the FAQ on this topic is
> >"unexpected".  If an exception is known to validly occur in a failure
> >scenario (it makes the test fail to satisfy its assertions), then that
> >should be reflected in the result differently from an unexpected
> >exception that interferes with the test execution.
> >
> >There are obviously different schools of thought, but given that Frysk
> >is already using 2 testing frameworks that are set up to return results
> >that fall within the POSIX categories (as described in the dejagnu
> >documentation as reference - discussed in past months), it would make
> >sense that we make use of the full spectrum of result messages in a
> >meaningful way.
> >  
> 

