This is the mail archive of the
gdb-patches@sourceware.org
mailing list for the GDB project.
Re: [RFC] delete gdb.cp/ambiguous.exp ?
- From: Pedro Alves <palves at redhat dot com>
- To: Doug Evans <dje at google dot com>, GDB Patches <gdb-patches at sourceware dot org>
- Date: Tue, 19 Aug 2014 22:47:21 +0100
- Subject: Re: [RFC] delete gdb.cp/ambiguous.exp ?
- Authentication-results: sourceware.org; auth=none
- References: <yjt27g25tmac dot fsf at ruffy dot mtv dot corp dot google dot com> <53F38030 dot 1020406 at redhat dot com> <CADPb22TNGVOJ-WM9dQ5z_KetULps7ZfnXBG5DkxPOYTR00D8cw at mail dot gmail dot com> <53F38E62 dot 60403 at redhat dot com> <CADPb22RwCfsg4xFmhM-w8UfoA6uCiQ-xGm2PtR9ir7C4RKtM3w at mail dot gmail dot com>
On 08/19/2014 09:10 PM, Doug Evans wrote:
> On Tue, Aug 19, 2014 at 10:50 AM, Pedro Alves <palves@redhat.com> wrote:
>> On 08/19/2014 06:24 PM, Doug Evans wrote:
>>
>>>> Actually enabling the test (removing the skip, and adding
>>>> nowarnings), we see that indeed GDB outputs no warning:
>>>
>>> But given the early exit the test itself is never run.
>>
>> As I said, I removed that. ;-)
>
> I know (I thought that was obvious).
The statement you replied to clearly said I locally enabled the test
(_that_ much seems obvious to me), and you replied back as if I hadn't.
It just sounded like the start of going in circles. So I still don't
know what you actually intended to say or refute.
>>> I'm all for filing a bug and recording the test in the bug report.
>>> I'm even ok with keeping the test as is.
>>> The high order bit for me is exploring what the community wishes be
>>> done with this kind of "dead code".
>>
>> You're asking for a generic opinion, of the abstract "community",
>> while I think "case by case" generally applies. ;-)
>
> I'm asking people for their opinion on this case (and cases like it).
> If I call them a "community", which is what gdb@ and gdb-patches@ is
> to me, I don't see the problem.
[
This is getting off topic, and about how we can improve communication
[which I am all for!], but I think the problem for me is in the very
loose definition of what "this case" is, and then wanting a generic rule.
I don't know what you mean by that exactly. It begs for a generic
"thou shalt do this or that" kind of answer, but, personally, I find it
difficult to answer such a question. I think we should be reasonable and
look at things in a little more detail than applying a rubber stamp rule.
So I did my best to actually look at the test and give a rationale
for why I thought _enabling_ the test (because it's skipped today) would be
more useful than deleting it under a broad "dead code" rule. The answer
I got back, in turn, was, paraphrasing: "yeah, but it never runs today, and
hasn't for a long while!". Well, I know that, of course; otherwise I wouldn't have
gone through the trouble of force-enabling the test locally! Again, that
feels like a discussion that will quickly degenerate to going around in
circles. It feels like we tend to talk past one another frequently,
while I believe there's no intention to do so on either side. How can we
improve here?
]
>
> Removal of dead code is, in my experience here, *rarely* a case by
> case decision.
> [each case may have minor details that are different, but the high
> order bit is generally DELETE IT :-)]
Right. When I say "case by case", I'm kind of defending against
considering unreachable C code the same as #if 0 code that is serving
as a comment, the same as clearly dead testsuite/lib/*.exp code,
the same as a test that happens to be skipped on the most popular platform.
I don't think it's so rare that an investigation finds that what is currently
dead could or should be reachable and a fix in that direction is made.
And we don't have to go very far for that:
https://sourceware.org/ml/gdb-patches/2014-08/msg00180.html
> And yet this is a little different than an #if 0 in code, so I'm asking.
Yeah.
> We certainly want to document this issue in some way, but if we're
> just going to leave the test as is, not being run, and yet have to
> disable it for new compilers as they come along, then I think there's
> value in documenting the issue in a different way (e.g., bug reports
> are fine for this sort of thing).
>
>> My inclination for
>> tests is to first check whether there's something salvageable. If
>> someone wrote the test, it was probably important enough. If it's indeed
>> "dead code", then I'd go for just removing it. I looked, and it seemed
>> to me that the test actually covers an aspect of printing that we don't
>> cover elsewhere, and actually reveals a bug.
>> So IMO, in this particular case, we should keep the test, remove the gcc check,
>> modernize and KFAIL it, and then have the bug fixed (if people agree it's
>> a bug). I'm of course not suggesting you do that all of yourself, but
>> since you asked, that's what I'd prefer to see done in this case.
>
> This is feedback I can work with. Thanks.
>
> Going forward, as we discover bugs, are people ok with checking in a
> KFAIL'd test for a bug before the bug is fixed?
It depends on the bug, but in general yes I'm okay with KFAILs.
I've added and fixed some myself.
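For readers unfamiliar with the mechanics: in the GDB testsuite, a KFAIL'd test is an ordinary DejaGnu test preceded by a setup_kfail call pointing at the tracking bug. A minimal sketch follows; the bug number "gdb/99999" and the test itself are hypothetical placeholders, and the fragment assumes the GDB/DejaGnu test framework (gdb_test is not standalone):

```tcl
# Hypothetical .exp fragment; assumes the GDB/DejaGnu test framework.
# setup_kfail marks the *next* recorded result as a known failure
# tracked by the given bug, so a FAIL is reported as KFAIL instead.
setup_kfail "gdb/99999" "*-*-*"
gdb_test "print ambiguous_member" \
    "warning: .*ambiguous.*" \
    "print of ambiguous member emits warning"
```

Once the bug is fixed, the test starts reporting KPASS, which is the signal to drop the setup_kfail line.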
> "Just fix the bug."
> is the canonical response.
Again a generalization. ;-) Myself, I'll give that response to
workarounds when I think the fix is doable in reasonable time compared
to the time needed to do the workaround. I may be mistaken, but I don't
recall saying that in response to a new KFAIL test, unless again, I
trust the fix is more trivial than the submitter initially believes.
> And yet there are often higher priorities.
> How do we document the bug's existence so that it's not forgotten
> until someone has time to fix it? File a bug report, of course.
> Well, I file bug reports all the time, I certainly don't have time to
> fix all of them instead of filing the bug report. And if in
> reproducing the bug I'm 80% of the way there in writing the test, is
> it ok to at least finish and check in that part, even if I don't have
> time to fix the bug?
IMO, yes, within reason.
> Or, instead of allowing that, should we just "grandfather in", so to
> speak, all existing disabled tests, not allow new KFAIL'd ones, and
> migrate these existing tests to KFAIL if we don't have time to fix the
> bug?
I don't think we should forbid new KFAILed tests as a rule.
Thanks,
Pedro Alves