This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: why I dislike qXfer


On 06/16/2016 06:42 PM, taylor, david wrote:
> 
>> From: Pedro Alves [mailto:palves@redhat.com]

> 
> We allow an arbitrary number of GDBs to connect to the GDB stub running
> in the OS kernel -- each connection gets a dedicated thread.
> 
> Currently, we support 320 threads.  This might well increase in the
> future.  With thread name and everything else I want to send back at the
> maximum (because that reflects how much space I might need under the
> offset & length scheme), I calculate 113 bytes per thread (this counts
> <thread> and </thread>) to send back -- before escaping.
> 
> So, if I 'snapshot' everything every time I get a packet with an offset of 0,
> the buffer would need to be over 32K bytes in size.  I don't want to
> increase the GDB stub stack size by this much.  So, that means either
> limiting the number of connections (fixed, pre-allocated buffers) or
> using kernel equivalents of malloc and free (which is discouraged) or
> coming up with a different approach -- e.g., avoiding the need for the
> buffer...

So a workaround that will probably never break is to adjust your stub to
remember the xml fragment for only one (or a few) threads at a time, and
serve reads out of that.  That would only be a problem if gdb "goes
backwards", i.e., if gdb requests a lower offset (other than 0) than the
previously requested offset.

The issue is that qXfer was originally invented for (binary) target objects
for which gdb wants random access.  However, "threads" and a few other
target objects are xml based.  And for those, gdb must always read
the whole object, or at least read it sequentially starting from the
beginning.  I can well imagine optimizations where gdb processes the xml
as it is reading it and stops reading before reaching EOF.  But that
wouldn't break the workaround.

Starting a read somewhere in the middle of the file could be possible
too, but it'd require understanding how to skip until some xml element
starts, and ignoring the fact that the file wouldn't validate.  Plus,
gdb doesn't know the size of the file until it reads it fully, so we'd
either need some other way to determine that, or make gdb take guesses.
So I'm not seeing this happening anytime soon.

> 
> So, in terms of saved state, with the snapshot it is 35-36K bytes, with the
> process table index it is 2-8 bytes.
> 
> It's too late now, but I would much prefer interfaces something like:
> 
> either
>     qfXfer:object:read:annex:length
>     qsXfer:object:read:annex:length
> or
>     qfXfer:object:read:annex
>     qsXfer:object:read:annex
> 
> [If the :length wasn't part of the spec, then send as much
> as you want so long as you stay within the maximum packet size.  My
> preference would be to leave off the length, but I'd be happy either way.]

What would you do if the object to retrieve is larger than
the maximum packet size?

Thanks,
Pedro Alves

