This is the mail archive of the mailing list for the binutils project.
RE: [PATCH] [ARC] Improve filtering instructions while disassembling.
> There is something that maybe you could quantify that would help (I
> think) understand more about this patch.
> If we consider two arc extensions, let's call them A and B, then we
> could imagine their opcode spaces as being like this:
> The algorithm in your patch relies on spotting an instruction from
> either 'A' or 'B' in the non-overlapping region, before spotting an
> instruction from the overlapping region (right?)
> If the first thing we spot is an instruction from the overlapping
> region then we have no way to know if we should pick 'A' or 'B', so we
> just pick one based on code order (say 'A').
> After that if we see a 'B' from the non-overlapping region then (I
> think) we'll get a '.word ...' or some other immediate value form in
> the output (right?).
> So my question would be, for the extensions that you have visibility
> of, which you know overlap, what is the degree of overlap? If the
> overlap is high then we'll likely be wrong ~50% of the time. If the
> degree of overlap is low, then we'll be right more often.
> I also wonder if the patch could be improved such that if the first
> thing we spot is an overlap then maybe we can signal this in some way
> (to indicate that we are guessing at the decode)? Or maybe as Nick
> suggested earlier, error in that case with a message indicating that a
> command line flag is needed?
> Also, I wonder if we first see an overlap, and then see a
> non-overlapping instruction from the sub-set we didn't choose (imagine
> we guessed 'A', but then see a non-overlapping 'B') maybe we should
> error, or at least print a warning to confess that we likely got it
> wrong.
Per the conversation with Nick, the above does not apply: the disassembler will emit a warning when a potentially overlapping encoding is encountered. Hence, the user is aware of a possible mis-disassembly and can use command-line flags to control it.
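The selection heuristic debated above could be sketched roughly as follows. This is an illustrative model only, not the actual opcodes-library code; the labels and the `choose_extension` helper are hypothetical:

```python
# Hypothetical sketch of the disambiguation heuristic discussed above.
# EXT_A / EXT_B / OVERLAP are illustrative region labels, not real
# ARC opcode classes.
EXT_A, EXT_B, OVERLAP = "A", "B", "overlap"

def choose_extension(insns, default=EXT_A):
    """Pick an extension for a stream of decoded candidates.

    `insns` is a sequence of region labels.  Returns (choice, guessed,
    conflict): `guessed` is True when the decision was forced by an
    overlapping encoding (picked by code order), and `conflict` is True
    when a later non-overlapping instruction contradicts the guess.
    """
    choice, guessed, conflict = None, False, False
    for label in insns:
        if label == OVERLAP:
            if choice is None:
                choice, guessed = default, True   # guess by code order
        elif choice is None:
            choice = label                        # unambiguous evidence
        elif label != choice:
            conflict = True                       # guess was likely wrong
    return choice, guessed, conflict

# An unambiguous 'B' seen first decides the set; a later overlap is fine.
print(choose_extension([EXT_B, OVERLAP]))   # ('B', False, False)
# First hit is an overlap: guess 'A'; a later 'B' reveals the conflict,
# which is where the disassembler warning comes in.
print(choose_extension([OVERLAP, EXT_B]))   # ('A', True, True)
```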
> In general I understand, and am sympathetic to your desire to handle
> legacy, or non-free toolchains, my concern here is primarily that I
> would rather see the fuller meta-data driven solution implemented
> first, and then add the fuzzy/guess based solution as a later pass.
Your statements are inaccurate and, in general, malicious. However, you are welcome to contribute.
> I have not looked at your revised patch in detail yet (sorry, I was
> crazy busy today), but over this patch I would like to see the
> deficiencies in the algorithm (discussed above) mentioned more clearly
> in code comments around your filtering code. The original patch (I felt)
> gave the impression that this change would "do-the-right-thing", when
> I think it's more "have-a-guess-at-what-the-right-thing-is".
If you have not looked at my revised patch, and have not followed the thread until now, then what are we talking about?