This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: Remove code handling old ARM aliases from GDB


On Thu, 5 May 2011, Joel Brobecker wrote:
> > Having introduced this, you're on the hook to investigate and
> > rectify.  Another option, besides those I mentioned, is to xfail or
> > kfail the failing tests.  Please.
>
> I've tried to understand the problem, and it seems to me that Joseph's
> position is entirely reasonable in this case. He has effectively
> introduced new tests that happen to fail in the arm-elf case, is that
> correct?

A gating check within the arm testsuite was changed from matching
xscale*-*-* to arm*-*-*, going from never matching to always
matching, which at a glance seems correct.  Except that some of
the newly uncovered tests fail.
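
Just to illustrate the shape of the change, not the actual diff:
the gating checks in these DejaGnu .exp files look roughly like
the sketch below, where only the target triplet passed to
istarget was widened.  The proc name is made up.

    # Before: only XScale targets were selected, so the guarded
    # tests effectively never ran.
    if { [istarget "xscale*-*-*"] } {
        run_extra_arm_tests
    }

    # After: every ARM target matches, so tests that had never run
    # before are now exercised, and some of them fail.
    if { [istarget "arm*-*-*"] } {
        run_extra_arm_tests
    }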

The theory is that, way back then, a tuple matching xscale*-*-*
enabled functionality that was disabled for arm*-*-*.  (This
seems to have some bearing on reality; see sim/arm/ChangeLog.)
Hopefully the tests passed at that time, so no, they're not *new*
tests.  Since then, at least parts of the xscale support have
been folded into arm*-*-*, but the testsuite hasn't been updated,
and the tests and/or the simulator have rotted.

My take is that when you modify something and there's related
breakage where none was apparent before your change, you should
at least put a minimum of effort into checking why, and ideally
fix it, regardless of whether it was just a test that never ran
before or whatever.

> If they fail on arm-elf, someone who cares about this platform
> should investigate them and determine whether to fix them, or whether
> they are a sim/gdb problem (kfail), or a problem from an external
> dependency (xfail).

I opened PR 12737.
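
If someone does go the kfail/xfail route, a rough sketch of how
the markers could look now that there's a PR to reference.  The
commands, patterns and test names below are placeholders (the
sim testsuite has its own harness, which may differ), setup_kfail
is the GDB-testsuite extension rather than stock DejaGnu, and I'm
assuming the PR lives in the sim component.

    # Failure caused by gdb/sim itself: mark the next test as a
    # known failure and reference the PR.
    setup_kfail "sim/12737" "arm*-*-*"
    gdb_test "stepi" "0x.*" "single-step into handler"

    # Failure caused by an external dependency: a plain DejaGnu
    # expected failure instead.
    setup_xfail "arm*-*-*"
    gdb_test "continue" "Breakpoint .*" "continue to breakpoint"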

In my autotester for the src/sim simulators for various targets,
I'm using a thumbs-up from "make check" on the arm-elf testsuite
as a sign that it's at least no worse off than before for that
target.  Having new, old, or re-discovered tests fail where none
did before doesn't help.  And no, keeping track of new vs. old
failures is overkill for the src/sim testsuite.

brgds, H-P

