Re: why I dislike qXfer


On 06/16/2016 08:59 PM, taylor, david wrote:
> 
>> From: Pedro Alves [mailto:palves@redhat.com]

>> So a workaround that probably will never break is to adjust your stub to
>> remember the xml fragment for only one (or a few) threads at a time, and
>> serve off of that.  That would only be a problem if gdb "goes backwards",
>> i.e., if gdb requests a lower offset (other than 0) than the previously
>> requested offset.
> 
> What I was thinking of doing was having no saved entries or, depending on
> GDB details yet to be discovered, one saved entry.
> 
> Talk to the core OS people about prohibiting characters that require quoting
> from occurring in the thread name.
> 
> Compute the maximum potential size of an entry with no padding.
> 
> Do arithmetic on the offset to figure out which process table entry to start with.
> 
> Do arithmetic on the length to figure out how many entries to process.
> 
> Pad each entry at the end with spaces to bring it up to the maximum size.
> 
> For dead threads, fill the entry with spaces.
> 
> Report done ('l') when there are no more live threads between the current
> position and the end of the process table.

That sounds overcomplicated, but, up to you.

I think "no saved entries" would be problematic, unless you assume
that gdb never requests a chunk smaller than the size of one entry.
Because if it does, and you return half of a thread element,
when gdb fetches the rest of the element, the thread might have
changed state already.  So e.g., you end up returning an impossible
extended info, or thread name, with a Frankenstein-like mix of before/after
state change  (extended info goes "AAAA" -> "BBBB", and you report
back "AA" + "BB").  And if you're going to save one entry, might as well
keep it simple, as in my original suggestion.
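
Concretely, a minimal C sketch of that scheme -- fixed-size,
space-padded entries, offset arithmetic to find the starting slot, and
one saved entry so that re-reading the second half of a split element
sees consistent data -- might look like the below.  The stub internals
(proc_table, ENTRY_SIZE, MAX_THREADS) are made up, and the <threads>
header/trailer handling is omitted:

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stub internals, for illustration only.  */
  #define MAX_THREADS 64
  #define ENTRY_SIZE  96   /* worst-case <thread> element, space-padded */

  struct thread_slot { int live; unsigned long id; char name[16]; };
  extern struct thread_slot proc_table[MAX_THREADS];

  /* Render one fixed-size entry for SLOT into BUF: a space-padded
     <thread> element if the slot is live, all spaces if it is dead.  */
  static void
  render_entry (int slot, char buf[ENTRY_SIZE])
  {
    memset (buf, ' ', ENTRY_SIZE);
    if (proc_table[slot].live)
      {
        int n = snprintf (buf, ENTRY_SIZE, "<thread id=\"%lx\" name=\"%s\"/>",
                          proc_table[slot].id, proc_table[slot].name);
        if (n > 0 && n < ENTRY_SIZE)
          buf[n] = ' ';   /* replace snprintf's NUL with padding */
      }
  }

  /* One saved entry: a re-read of the same slot (e.g. the second half
     of an element split across two requests) sees consistent data.  */
  static int cached_slot = -1;
  static char cached[ENTRY_SIZE];

  /* Serve the body of qXfer:threads:read::OFFSET,LEN into OUT.
     Returns the byte count; 0 means end of object (reply 'l').  */
  static size_t
  serve_threads (size_t offset, size_t len, char *out)
  {
    int slot = offset / ENTRY_SIZE;      /* starting process table entry */
    size_t skip = offset % ENTRY_SIZE;   /* part of it already sent */
    size_t done = 0;

    while (done < len && slot < MAX_THREADS)
      {
        if (slot != cached_slot)
          {
            render_entry (slot, cached);
            cached_slot = slot;
          }
        size_t n = ENTRY_SIZE - skip;
        if (n > len - done)
          n = len - done;
        memcpy (out + done, cached + skip, n);
        done += n;
        skip = 0;
        slot++;
      }
    return done;
  }

Note that filling dead threads' entries with spaces, as you suggest,
is what keeps the offset<->slot mapping stable even as threads exit
mid-transfer.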

> 
>> The issue is that qXfer was originally invented for (binary) target objects for
>> which gdb wants random access.  However, "threads" and a few other target
>> objects are xml-based.  And for those, it must always be that gdb reads the
>> whole object, or at least reads it sequentially starting from the beginning.  I
>> can well imagine optimizations where gdb processes the xml as it is reading it
>> and stops reading before reaching EOF.  But that wouldn't break the
>> workaround.
> 
> The qXfer objects for which I am thinking of implementing stub support fall into
> two categories:
> 
> . small enough that I would expect GDB to read it in toto in one chunk.
>   For example, auxv.  Initially, I will likely have two entries (AT_ENTRY, AT_NULL);
>   6 or 7 others might get added later.  Worst case, it all easily fits in one packet.

GDB does cache some objects like that, but others it doesn't.  E.g.,
auxv is cached nowadays, but that wasn't always the case, and most
other objects are not cached.
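
FWIW, for an object that small the stub can just build the whole blob
up front and serve slices of it.  A sketch for the two-entry auxv case
you mention -- entry_point is a made-up stub variable, and I'm
assuming a 64-bit target with the stub running on the target itself,
so host order == target order:

  #include <stdint.h>
  #include <string.h>

  #define AT_NULL  0   /* end of vector */
  #define AT_ENTRY 9   /* program entry point */

  extern uint64_t entry_point;   /* hypothetical stub variable */

  /* auxv is a raw binary object: (a_type, a_val) pairs in target
     format, terminated by an AT_NULL entry.  */
  static size_t
  build_auxv (unsigned char *buf)
  {
    uint64_t v[4] = { AT_ENTRY, entry_point, AT_NULL, 0 };
    memcpy (buf, v, sizeof v);
    return sizeof v;   /* 32 bytes -- easily fits in one packet */
  }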

>>> It's too late now, but I would much prefer interfaces something like:
>>>
>>> either
>>>     qfXfer:object:read:annex:length
>>>     qsXfer:object:read:annex:length
>>> or
>>>     qfXfer:object:read:annex
>>>     qsXfer:object:read:annex
>>>
>>> [If the :length wasn't part of the spec, then send as much as you want
>>> so long as you stay within the maximum packet size.  My preference
>>> would be to leave off the length, but I'd be happy either way.]
>>
>> What would you do if the object to retrieve is larger than the maximum
>> packet size?
> 
> Huh?  qfXfer would read the first part; each subsequent qsXfer would read
> the next chunk.  If you wanted to think of it in offset/length terms, the offset
> for qfXfer would be zero; for qsXfer it would be the sum of the sizes (ignoring
> GDB escaping modifications) of the qfXfer reply and any qsXfer replies that
> occurred after the qfXfer and before this qsXfer.
> 
> As now, sub-elements (e.g. <thread> within <threads>) could be contained within
> one packet or split between multiple packets.  Put the packets together in the order
> received with no white space or anything else between them and pass the result off
> to GDB's XML processing.
> 
> Or do I not understand your question?
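
If I understand the proposal correctly, an exchange -- reusing qXfer's
'm'/'l' reply convention, with made-up lengths and contents, and a
<thread> element split across the two replies -- would look something
like:

  -> qfXfer:threads:read::1000
  <- m<threads><thread id="1a1" name="init"/>...<thread id="2
  -> qsXfer:threads:read::1000
  <- lb0" name="idle"/></threads>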

If you're still going to need to handle sub-elements split between
packets, then, other than making explicit the assumption that gdb
reads the object sequentially, what's the real difference between
this and gdb fetching with the existing qXfer, but requesting larger
chunks, e.g., the size of the stub's reported max packet length?
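
E.g., the same transfer with the existing packets, at a 0x1000-byte
chunk size (again with made-up contents):

  -> qXfer:threads:read::0,1000
  <- m<threads><thread id="1a1" name="init"/>...<thread id="2
  -> qXfer:threads:read::1000,1000
  <- lb0" name="idle"/></threads>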

On the "leave off the length", I don't think it'd be a good idea for the
target to be in complete control of the transfer chunk size, without
having a way to interrupt the transfer.  I mean, there's no real limit
on the incoming packet size (gdb grows the buffer dynamically), and
if gdb requests qfXfer:object:read:annex and the stub decides to send the
whole multi-megabyte object back in one go, that's going to hog
the RSP channel until the packet is fully transferred.

Thanks,
Pedro Alves

