This is the mail archive of the
gdb@sourceware.cygnus.com
mailing list for the GDB project.
Re: memory verify
- To: "Brown, Rodney" <rodneybrown at pmsc dot com>
- Subject: Re: memory verify
- From: jtc at redback dot com (J.T. Conklin)
- Date: 01 Dec 1999 11:48:39 -0800
- Cc: "'gdb at sourceware dot cygnus dot com'" <gdb at sourceware dot cygnus dot com>
- References: <9150F3E779F0D211BD370008C733141C38AA12@aus-msg-02.au.pmsc.com>
- Reply-To: jtc at redback dot com
>>>>> "Rodney" == Brown, Rodney <rodneybrown@pmsc.com> writes:
Rodney> Depending how capable the stubs are, you could look at using
Rodney> Andrew Tridgewell's rsync algorithms. I think that uses CRCs
Rodney> to locate differences in files on different boxes to generate
Rodney> a delta file to update the file on one. This could allow the
Rodney> first difference idiom when run over large memory areas,
Rodney> without having to transmit the area over the wire.
Although I use rsync myself, I had never looked at the rsync algorithm
until now. It is simple and elegant, and the fact that only one round
trip is required fits well with the low bandwidth/high latency links
typically used for debugging.
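For anyone who hasn't read Tridgell's description: the part of rsync that makes the single round trip work is its weak rolling checksum, which can be slid forward one byte in O(1) so every offset can be scanned cheaply. A minimal sketch of that checksum (names like `rsum_init` and `rsum_roll` are mine, not rsync's):

```c
#include <stddef.h>
#include <stdint.h>

/* rsync's weak rolling checksum: a = sum of window bytes, b = sum of
   position-weighted bytes, both taken mod 2^16.  The weak sum finds
   candidate matches; a strong hash then confirms them. */
typedef struct { uint16_t a, b; size_t len; } rolling_sum;

/* Compute the checksum of an initial window from scratch. */
static rolling_sum rsum_init(const uint8_t *buf, size_t len)
{
    uint32_t a = 0, b = 0;
    for (size_t i = 0; i < len; i++) {
        a += buf[i];
        b += (uint32_t)(len - i) * buf[i];
    }
    rolling_sum r = { (uint16_t)a, (uint16_t)b, len };
    return r;
}

/* Slide the window one byte forward: drop `out`, append `in`.
   This O(1) update is what makes scanning every offset affordable. */
static void rsum_roll(rolling_sum *r, uint8_t out, uint8_t in)
{
    r->a = (uint16_t)(r->a - out + in);
    r->b = (uint16_t)(r->b - (uint32_t)r->len * out + r->a);
}
```

Rolling the sum across a buffer gives the same value as recomputing it from scratch at the new offset, which is easy to check with a few bytes of test data.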
However, the actual rsync algorithm is probably more general than is
needed for optimized downloads. Unlike with text files, I think it
is unlikely that a block of data will be found at a different offset in
the region. I also suspect that if one block is different, it is
likely that the rest will be as well. As such, a single checksum
of the region may be all that's needed.
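The whole-region variant could look something like this, a sketch assuming a standard bitwise CRC-32 and an invented `region_matches` helper on the host side (nothing here is existing GDB code):

```c
#include <stddef.h>
#include <stdint.h>

/* Plain bitwise CRC-32 (reflected polynomial 0xEDB88320).  Small and
   table-free, so it could live in a memory-constrained stub; speed is
   secondary, since the point is avoiding sending the region over the
   wire at all. */
static uint32_t crc32_region(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
    }
    return ~crc;
}

/* Hypothetical one-round-trip verify: the host checksums its local
   copy of the region and compares against a CRC the stub reports for
   target memory; a match means the download can be skipped. */
int region_matches(const uint8_t *local, size_t len, uint32_t remote_crc)
{
    return crc32_region(local, len) == remote_crc;
}
```

Only the 32-bit CRC crosses the wire, so the exchange stays one request and one reply regardless of region size.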
Of course, this assumes that to_XXX_memory() is implemented with a
checksum, CRC, or cryptographic hash. If it compares by downloading
or uploading data, it will likely be faster to just do the download.
--jtc
--
J.T. Conklin
RedBack Networks