This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.
RE: Simics & reverse execution
- From: "Jakob Engblom" <jakob at virtutech dot com>
- To: "'Greg Law'" <glaw at undo-software dot com>
- Cc: "'Michael Snyder'" <msnyder at vmware dot com>, <gdb at sourceware dot org>, "'Julian Smith'" <jsmith at undo-software dot com>
- Date: Thu, 3 Sep 2009 21:16:31 +0200
- Subject: RE: Simics & reverse execution
- References: <002001ca1f0e$4c9b74a0$e5d25de0$@com> <daef60380908170058i455ee534l527e58238a0839b9@mail.gmail.com> <002101ca1f2e$746e1ad0$5d4a5070$@com> <200908171251.07179.pedro@codesourcery.com> <4A899E2E.6080203@vmware.com> <00b801ca1f74$e5610a90$b0231fb0$@com> <4A89B7E4.9010804@vmware.com> <027701ca209f$64c71ce0$2e5556a0$@com> <4A95E319.6020300@vmware.com> <4A97B9C9.8070501@greglaw.net> <010b01ca2a3c$7766ca70$66345f50$@com> <4A9BF84F.3070404@undo-software.com> <025201ca2ace$a9256430$fb702c90$@com> <4A9D2650.6080209@undo-software.com>
> currently are (to allow e.g. a graphical frontend to implement a
> slide-bar to show where in the record log we are). The former is
> precise to instruction count (and signals, etc); the latter may not be
> depending on the details of the target. Actually, percentage is the
> wrong term -- better would be what fraction of the way are we through
> history, e.g. in 1/(2^64) increments, such that half way through
> recorded history would be represented as 2^63.
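For reference, the fixed-point "fraction of history" encoding quoted above could be sketched as follows (the helper name is hypothetical, not part of any GDB interface; a real frontend would get `position` and `total` from the record log):

```python
def history_fraction(position, total):
    """Map a position in [0, total] within recorded history to a
    64-bit fixed-point fraction: 0 = start, 2**63 = halfway.
    Assumes a bounded recording of known total length."""
    if total == 0:
        return 0
    # Fraction in 1/(2**64) increments, as proposed above.
    return (position * 2**64) // total
```

Note that this presumes the total length of the recording is known, which is exactly the assumption questioned below.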
I can see one big problem with this: you are assuming a bounded recording.
In our case, it is unbounded (except for the current 64-bit counter of
picoseconds used to coordinate all processors and other time-dependent parts in
a simulation system, which bounds execution at an annoying 280 days of time).
In Simics, you can always just continue past the end of the previously seen
execution, extending the size of the reversible window. I believe VMware
does the same, from my experiments with Workstation 6.5.
I honestly think binary chop is best put into the backend for this reason: the
times I have seen it applied, it relied on large state-checking scripts
that had far better insight into the target system than you get with gdb (such
as doing global consistency checks on the state of file systems on various
boards in a fault-tolerant redundant cluster).
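A minimal sketch of such a backend-driven binary chop, assuming a hypothetical predicate `state_ok_at(pos)` that runs the target-specific consistency check at a given point in recorded history (e.g. one of the large state-checking scripts mentioned above), and assuming the check flips from passing to failing exactly once:

```python
def find_first_bad(lo, hi, state_ok_at):
    """Binary-search recorded history in [lo, hi) for the first
    position where state_ok_at(pos) becomes False. Assumes the
    history is monotone: ok, ok, ..., ok, bad, bad, ..., bad."""
    while lo < hi:
        mid = (lo + hi) // 2
        if state_ok_at(mid):
            lo = mid + 1   # corruption appears after mid
        else:
            hi = mid       # corruption is at or before mid
    return lo
```

Each probe would, in a real setup, mean reverse- or forward-stepping the simulation to `mid` and running the consistency check there, which is why the backend, with its cheap access to arbitrary points in the recording, is the natural place for this loop.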
/jakob