This is the mail archive of the sid@sources.redhat.com mailing list for the SID project.



Re: modeling latency (fwd)


I recently asked about modeling memory latency, which sid does not
currently accommodate.  Let's pick up the discussion on this list.


fche wrote:

[...]

"hops" are not very meaningful.  "cycles" and "time" are, and may be
converted one-to-one by some knowledgeable component.
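(As a trivial illustration of such a conversion, assuming a component
that knows its clock rate -- the 100 MHz figure below is an arbitrary
example value, not anything SID defines:)

#include <cstdint>

// Cycles -> elapsed time, done by whichever component knows the clock rate.
inline std::uint64_t
cycles_to_picoseconds (std::uint64_t cycles, std::uint64_t clock_hz = 100000000ULL)
{
  return cycles * (1000000000000ULL / clock_hz);   // cycle count times period
}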

: If so, would I be right to think about modelling latency by making
: the sid::bus::status type into a first-class object that components
: may manipulate as they return their status down the chain?

I would advise against making it a big object, or giving it a
complicated API.  After all, it's passed by value/copy all the time.
Also, there is no library to put the API implementation into - see
what's done in sidtypes.h (all inline functions).  (Remember to bump
up API_MAJOR_VERSION!)
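For concreteness, here is a minimal, self-contained sketch of what such
a small, all-inline status type might look like.  The names, fields and
helper below are illustrative assumptions, not the actual sidtypes.h
definitions:

#include <cstdint>

struct bus_status
{
  enum code { ok, unmapped };

  code result;
  std::uint16_t latency;            // accumulated delay, in agreed units

  // Keep construction trivial: the value is passed by copy on every access.
  bus_status (code c = ok, std::uint16_t l = 0)
    : result (c), latency (l) {}
};

// Helper for a component that adds its own delay to a status returned from
// further down the chain (saturating to avoid wrapping the 16-bit counter).
inline bus_status
add_latency (bus_status s, std::uint16_t extra)
{
  std::uint32_t sum = std::uint32_t (s.latency) + extra;
  s.latency = (sum > 0xFFFFu) ? std::uint16_t (0xFFFFu) : std::uint16_t (sum);
  return s;
}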

[...]

So, the idea is to allow the sid::bus::status structure to return to
the caller not just an overall "ok"/"unmapped" indication, but also a
generic latency-count value (probably a 16-bit unsigned int).  This
value would originate in memory components (which would need a
configurable parameter for this), accumulate through mappers (ditto,
to optionally account for address-decoding delays) and through the
cache (ditto, to account for cache hit times & miss penalties), all
the way to the CPU.  (The "sid::bus::delayed" indication can finally
die.)
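To illustrate how the count could flow back up the chain, here is a
rough continuation of the sketch above.  The component shapes and
attribute names are invented for illustration; they are not SID's
actual bus or component classes:

// Builds on the bus_status/add_latency sketch above.
struct sketch_memory
{
  std::uint16_t access_latency;     // the configurable per-component parameter

  sketch_memory (std::uint16_t l = 10) : access_latency (l) {}

  bus_status read_word (std::uint32_t addr, std::uint32_t& data)
  {
    data = 0;                       // ... perform the actual access here ...
    return bus_status (bus_status::ok, access_latency);
  }
};

struct sketch_cache
{
  sketch_memory* downstream;
  std::uint16_t hit_time;
  std::uint16_t miss_penalty;

  sketch_cache (sketch_memory* d) : downstream (d), hit_time (1), miss_penalty (20) {}

  bus_status read_word (std::uint32_t addr, std::uint32_t& data)
  {
    if (hit (addr))
      {
        data = 0;                   // ... return the cached value ...
        return bus_status (bus_status::ok, hit_time);
      }
    // Miss: forward downstream, then add our own penalty on the way back.
    bus_status s = downstream->read_word (addr, data);
    return add_latency (s, miss_penalty);
  }

  bool hit (std::uint32_t) const { return false; }   // placeholder lookup
};

A mapper would do the same thing as the cache's miss path: forward the
access, then add_latency() its own decode delay to the returned status.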

The CPU would accumulate all the penalty cycles coming back from *all*
GETIMEM/GETMEM/SETMEM calls (well, their lower-level counterparts), in
much the same way that basic_cpu::total_insn_count is accumulated,
though in a separate counter.  Then, after the end of the step_insns()
loop, the CPU would convert this memory-latency accumulator to
instruction-cycle units (likely through a target-specific linear
function) and pass the sum of the converted memory latencies and the
instruction cycles to the target scheduler in the basic_cpu::stepped()
function.
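Sketched from the CPU side -- basic_cpu, step_insns() and stepped() are
the existing names, but the members and the linear conversion below are
assumptions for illustration only:

// Rough sketch of the CPU-side bookkeeping, continuing the sketch above.
class sketch_cpu
{
  std::uint64_t insn_count;         // analogous to basic_cpu::total_insn_count
  std::uint64_t memory_latency;     // the separate latency accumulator

  // Target-specific linear conversion from latency units to insn cycles
  // (the scale factor here is an arbitrary example value).
  std::uint64_t latency_to_cycles (std::uint64_t l) const { return 2 * l; }

public:
  sketch_cpu () : insn_count (0), memory_latency (0) {}

  // Called with the status of every GETIMEM/GETMEM/SETMEM-level access.
  void record_access (const bus_status& s) { memory_latency += s.latency; }

  void step_insns (unsigned n)
  {
    for (unsigned i = 0; i < n; ++i)
      {
        // ... fetch/decode/execute one instruction, calling record_access()
        // for each memory transaction it performs ...
        ++insn_count;
      }
    // After the loop: hand the scheduler instruction cycles plus the
    // converted memory latency, as basic_cpu::stepped() would.
    std::uint64_t cycles = insn_count + latency_to_cycles (memory_latency);
    stepped (cycles);
    insn_count = 0;
    memory_latency = 0;
  }

  void stepped (std::uint64_t /*cycles*/) { /* notify the target scheduler */ }
};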

(Tracing etc. may require hanging on to these statistics beyond their
use in target scheduling, but that's a separate matter.)

Is this a plausible story & design?


- FChE

