This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [patch] [gdbserver] Fix memory corruption


On Wed, 02 Mar 2011 16:35:16 +0100, Pedro Alves wrote:
> On Tuesday 01 March 2011 21:34:28, Jan Kratochvil wrote:
> > To make it easily reproducible one can disable try_rle() by patching it:
> > +return 1;
> >    /* Don't go past '~'.  */
> 
> I can't reproduce it.

You may also need to link gdbserver with -lmcheck.


> > So that putpkt_binary_1's cnt == 16383 will overrun PBUFSIZ 16384 by 4 bytes.
> 
> How did you get that large of a `cnt' in the first place?  The largest
> I get is 16379.

remote_escape_output is called with OUT_MAXLEN == PBUFSIZ - 2 == 16382 and
can return one byte more than that, therefore 16383.
#8  0x0000000000404ca3 in putpkt_binary_1 (buf=0x258f2c0 "mmn"..., cnt=16381, is_notif=0) at remote-utils.c:801
801	  free (buf2);

Maybe your system does not have enough escapable characters in process
command lines:
# cat /proc/*/cmdline|tr -cd '$#}*'|wc -c
84


> gdb does:
> 
>   /* Request only enough to fit in a single packet.  The actual data
>      may not, since we don't know how much of it will need to be escaped;
>      the target is free to respond with slightly less data.  We subtract
>      five to account for the response type and the protocol frame.  */
>   n = min (get_remote_packet_size () - 5, len);
>   snprintf (rs->buf, get_remote_packet_size () - 4, "qXfer:%s:read:%s:%s,%s",
> 	    object_name, annex ? annex : "",
> 	    phex_nz (offset, sizeof offset),
> 	    phex_nz (n, sizeof n));
> 
> that is, you shouldn't get a read request that big.

Yes, the request asks for only 16378 bytes, but gdbserver returns more.


> It looks like server.c:handle_qxfer's len capping is forgetting
> to account for the $, # and checksum (should be fixed), but I don't
> think that's the real cause in your example, since it only pushes back
> to gdb as much data as it requested.

Before starting to chase an off-by-one here and an off-by-one there: what is
the practical purpose of such strict packet limits?

When TCP is in use, shouldn't the code all around simply support arbitrary
packet sizes and get rid of any constant buffer sizes?  Some vague
fragmentation may still remain on the GDB client side, but both sides should
be able to accept arbitrarily sized packets, shouldn't they?



Thanks,
Jan

