This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH roland/test-unsupported] Let tests result in UNSUPPORTED; use that for unbuildable C++ cases
- From: Roland McGrath <roland@hack.frob.com>
- To: Joseph Myers <joseph@codesourcery.com>
- Cc: "GNU C. Library" <libc-alpha@sourceware.org>
- Date: Tue, 10 Mar 2015 15:10:11 -0700 (PDT)
- Subject: Re: [PATCH roland/test-unsupported] Let tests result in UNSUPPORTED; use that for unbuildable C++ cases
- Authentication-results: sourceware.org; auth=none
- References: <20150307010200.B51122C3B39@topped-with-meat.com> <alpine.DEB.2.10.1503070123330.9654@digraph.polyomino.org.uk>
> I wonder if any existing tests return a count of errors (incrementing a
> variable on error rather than setting it to 1, then returning that
> variable) and could potentially return 77 by accident. But such tests
> would already be broken if they returned 256, so any issues can be
> considered pre-existing.
Right. (To be pedantic, there might be such cases where >= 77 but < 256
errors are possible in the test, in which case it would be a new issue in
practice.) Offhand, test-efgcvt.c seems to be in this category. There
might be others.
I think we should formalize our use of exit status values in tests and then
audit all the tests to make sure they conform. That would go along with an
audit to make sure everything is using test-skeleton and so forth. I think
with this as a stated plan, it's OK to install the new infrastructure with
the possible new risk of FAIL->UNSUPPORTED for this corner case.
> And if cross-compiling without test-wrapper (so run-built-tests = no), in
> principle all affected tests should be UNSUPPORTED.
In principle, yes. But I don't think it's especially useful in the real
world. Certainly it can wait for the further cleanup of the test
infrastructure that we will do at some point in the future.
> And at some point there's the issue of allowing tests to report status
> for their individual assertions (so a test might have some assertions
> that pass and some that are unsupported).
I don't think we'll achieve something like that before a complete overhaul
of our entire approach to tests.
> I don't think XPASSes should cause "make check" to fail - or at least,
> that's contrary to how we use XFAILs at present. [...]
OK. I used the logic that XPASS is an unexpected anomaly that merits
investigation. I still tend to think that this is what it should mean.
But it was not the intent of this change to change existing policy on the
meaning of XFAIL or XPASS. We can revisit all the subtleties you raised
later on. For now, I'll change my code to admit XPASS for purposes of the
'make check' exit status (but still list them in the summary output).
Thanks,
Roland