Re: tutorial draft checked in


Hi -

hunt wrote:

> [...]
> We have argued this again and again. I see no reason why you want the
> translator to be more complicated and slower.  [...]

You misjudge my intention.

> For the specific case of pmaps I am sure I spent more time arguing about
> it than writing it. The disadvantages of what you want to do are
>
> 1. Reader locks are slow. They don't scale as well as per-cpu spinlocks.

At least this is a quantifiable concern.
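To make the comparison concrete, here is a minimal kernel-style C
sketch of the read-side cost in question (the lock name is
hypothetical, not taken from the runtime).  Even with no writer
present, every read_lock() does an atomic update of the shared reader
count, so the lock's cache line bounces between CPUs:

#include <linux/spinlock.h>

static DEFINE_RWLOCK(probe_rwlock);     /* illustrative name */

static void probe_handler(void)
{
        read_lock(&probe_rwlock);       /* atomic write to a shared line */
        /* ... handler body reading the global maps ... */
        read_unlock(&probe_rwlock);     /* second atomic write */
}

Per-cpu spinlocks avoid exactly that shared write on the fast path,
which is why they scale better -- but that is a measurable effect, not
an article of faith.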

> 2. The translator holds the lock during the whole probe vs the runtime
> which holds the lock as short a time as possible.

Among other things, this guarantees ACID-style properties for probe
handlers and prevents various race conditions.
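In sketch form (the lock names and the read/write mix are
hypothetical; the real emitted code differs in detail), the
discipline is: take every lock the handler might need up front, using
trylocks so a conflicting probe is skipped rather than deadlocked,
and release only after the handler body finishes:

#include <linux/spinlock.h>

static DEFINE_RWLOCK(global_a_lock);    /* guards global a */
static DEFINE_RWLOCK(global_b_lock);    /* guards global b */

/* Returns 1 if all locks were taken and the handler may run. */
static int probe_prologue(void)
{
        if (!read_trylock(&global_a_lock))      /* handler only reads a */
                goto fail_a;
        if (!write_trylock(&global_b_lock))     /* handler writes b */
                goto fail_b;
        return 1;
fail_b:
        read_unlock(&global_a_lock);
fail_a:
        return 0;       /* skip this hit; no partial state is ever seen */
}

static void probe_epilogue(void)
{
        write_unlock(&global_b_lock);
        read_unlock(&global_a_lock);
}

Any two handlers touching the same globals serialize against each
other, so each one sees and leaves a consistent snapshot.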

> 3. Having the translator handle low-level locking eliminates the
> possibility of switching the runtime to a more efficient lockless
> solution later.

By removing the locks from the runtime that the translator makes
redundant, we still have a "lockless" solution.  If locks can be done
away with entirely, the translator can be taught not to emit them;
that is probably a one-line code change.
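For instance (a sketch only, assuming a hypothetical STP_LOCKLESS
configuration macro), the translator could route all of its lock
emission through one macro pair:

#include <linux/spinlock.h>

static DEFINE_RWLOCK(probe_rwlock);

#ifdef STP_LOCKLESS                     /* hypothetical switch */
#define STP_PROBE_LOCK()    do { } while (0)
#define STP_PROBE_UNLOCK()  do { } while (0)
#else
#define STP_PROBE_LOCK()    read_lock(&probe_rwlock)
#define STP_PROBE_UNLOCK()  read_unlock(&probe_rwlock)
#endif

Flipping that default would be the one-line change.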

> > Anyway, if the advantage of having unshared per-cpu locks for the <<<
> > case was large, the translator could adopt the technique just as
> > easily.
> 
> Obviously not true.

WHAT can you possibly mean by that?  The translator could emit per-cpu
spinlocks for pmaps.  Its programmer would not even break a sweat.
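To be concrete, here is a sketch of what it might emit (function and
lock names are hypothetical): the fast path takes only the local
CPU's lock, and aggregation takes every CPU's lock in a fixed order:

#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/cpumask.h>

/* One lock per CPU; each is spin_lock_init()'d at module load. */
static DEFINE_PER_CPU(spinlock_t, pmap_lock);

static void pmap_add(long key, long val)        /* the <<< fast path */
{
        spinlock_t *lock = &get_cpu_var(pmap_lock);
        spin_lock(lock);
        /* ... accumulate <key,val> into this CPU's copy of the map ... */
        spin_unlock(lock);
        put_cpu_var(pmap_lock);
}

static void pmap_aggregate(void)                /* the slow read-out path */
{
        int cpu;
        for_each_possible_cpu(cpu)      /* fixed order avoids deadlock */
                spin_lock(&per_cpu(pmap_lock, cpu));
        /* ... merge the per-cpu copies into one result ... */
        for_each_possible_cpu(cpu)
                spin_unlock(&per_cpu(pmap_lock, cpu));
}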

> It is already done and works in the runtime pmap implementation. 

Yes, but the question is where the locking is better placed.

> I ran a few benchmarks to demonstrate pmaps scalability and measure the
> additional overhead from the translator reader-writer locks. [...]

Good.

> I ran threads that were making syscalls as fast as possible.
> Results are Kprobes/sec
>            1 thread        4 threads
> Regular     340              500
> Pmaps       340              940
> Pmaps*      380             1040
> 
> Pmaps* is pmaps with the redundant reader-writer locks removed.

How about a result with the redundant spinlocks removed?

> Measured overhead of those locks is approximately 10% of the cpu
> time for this test case.

That sounds a bit high, considering all the other overhead involved.
An oprofile count of SMP-type events would be interesting.

- FChE

