This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [RFA] Use data cache for stack accesses


On Wed, Aug 26, 2009 at 5:32 PM, Doug Evans <dje@google.com> wrote:
> On Wed, Aug 26, 2009 at 1:08 PM, Pedro Alves <pedro@codesourcery.com> wrote:
>>> > Did you post numbers showing off the improvements from
>>> > having the cache on?  E.g., when doing foo, with cache off,
>>> > I get NNN memory reads, while with cache on, we get only
>>> > nnn reads.  I'd be curious to have some backing behind
>>> > "This improves remote performance significantly".
>>>
>>> For a typical gdb/gdbserver connection here a backtrace of 256 levels
>>> went from 48 seconds (average over 6 tries) to 4 seconds (average over
>>> 6 tries).
>>
>> Nice!  Were all those single runs started from cold cache, or
>> are you starting from a cold cache and issuing 6 backtraces in
>> a row?  I mean, how sparse were those 6 tries?  Shall one
>> read that as 48,48,48,48,48,48 vs 20,1,1,1,1,1 (some improvement
>> due to chunking, and large improvement due to caching in following
>> repeats of the command); or 48,48,48,48,48,48 vs 4,4,4,4,4,4 (large
>> improvement due to chunking --- caching not actually measured)?
>
> The cache was always flushed between backtraces, so that's
> 48, 48, ..., 48 vs 4, 4, ..., 4.
>
> Backtraces win from both chunking and caching.
> Even in one backtrace gdb will often fetch the same value multiple times.
> I haven't computed the relative win.

Besides, the chunking doesn't really work without the caching. :-)
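To illustrate why the two effects are intertwined, here is a minimal, standalone sketch of a block-aligned read cache in the spirit of GDB's dcache (the real implementation lives in gdb/dcache.c and differs in many details). A cache miss fetches a whole fixed-size line in one target round trip (the chunking); later reads that fall inside an already-fetched line are served locally (the caching). All names and sizes here are illustrative assumptions, not GDB's.

```python
LINE_SIZE = 64  # bytes fetched per target round trip (illustrative)

class DCache:
    def __init__(self, read_from_target):
        # read_from_target(addr, size) -> bytes; stands in for one
        # remote-protocol memory read, the expensive operation.
        self.read_from_target = read_from_target
        self.lines = {}          # line base address -> bytes
        self.round_trips = 0     # how many target reads we issued

    def invalidate(self):
        """Flush the cache, e.g. whenever the inferior resumes."""
        self.lines.clear()

    def read(self, addr, size):
        """Read `size` bytes at `addr`, fetching whole lines on miss."""
        out = bytearray()
        while size > 0:
            base = addr - (addr % LINE_SIZE)
            if base not in self.lines:
                self.lines[base] = self.read_from_target(base, LINE_SIZE)
                self.round_trips += 1
            ofs = addr - base
            take = min(size, LINE_SIZE - ofs)
            out += self.lines[base][ofs:ofs + take]
            addr += take
            size -= take
        return bytes(out)

# Fake target: 4 KiB of memory; every target read counts as a round trip.
memory = bytes(range(256)) * 16
cache = DCache(lambda a, n: memory[a:a + n])

# A toy "backtrace" reading 32 nearby 8-byte stack slots: without the
# cache that would be 32 round trips; with it, the reads share lines.
for i in range(32):
    cache.read(0x100 + 8 * i, 8)
print(cache.round_trips)  # → 4 (256 contiguous bytes / 64-byte lines)
```

Without the cache, fetching whole lines would just make each read bigger for no benefit, which is the point of the closing remark: the chunking only pays off because the surplus bytes are kept around for the reads that follow.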

