This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

Re: Lock elision test results


On Fri, 2013-07-05 at 18:27 -0700, Andi Kleen wrote:
> Torvald Riegel <triegel@redhat.com> writes:
> 
> > We need to model performance in some way
> > to be able to find robust tuning parameters,
> 
> No, no, the right way is to run lots of work loads (that is real
> applications) and see what parameters work best.

Andi, we're not in disagreement here.  I definitely agree that we need
to have lots of input data.  But any tuning algorithm that we build is
conceptually based on some model of performance.

For example, the current tuning you put in embodies a rather simple model, and
it makes certain assumptions about critical sections and performance
(e.g., you assume that critical sections vary over time and that it's
not worthless to retry elision a few times after elision failed --
thus, the skip counts are rather low; IOW, you never give up on using
elision for a lock).  So there is a model, even if you don't call it
that.  When we understand something (or believe we do), we build a
model of it.

(BTW, as a side note, that's what I often find amusing when people
claim that "theory" is useless and only "practice" counts, and that
they therefore won't build a model but will treat what happens as a
black box labeled "practice": they still use a theoretical model, it's
just a very simple one :) (i.e., the black box).  Of course, there is a
risk of building a wrong model with wrong assumptions, but if all you
have is a black box you run the same risk whenever you need to predict
behavior in some way.)

The more data we get and analyze, the easier it will be to understand
what's going on, and what we should pay attention to in the model that
ultimately determines what kind of tuning algorithm we use.

> Modern caches and CPUs and parallelism of critical sections
> are far too complex to model in simple ways.

Yes, it is complex, but that's not a reason to stick to a trivial model
without at least looking for other reasonable abstractions or relevant
properties of real behavior.

> > at the aborts; it seems that for z at least, the critical section length
> > should also be considered.
> 
> I don't know about z, but in TSX the program doesn't know the critical
> section length (without expensive extra instrumentation). Only profilers
> know.

But if critical section length matters, why shouldn't we at least
investigate whether we need to pay attention to it in the tuning model?
I suspect we could measure the critical section length when we retry
after having failed to use elision (in that case we're in the slow
path anyway); if the measurement is cheaper than the cost of a failed
elision attempt, and it helps to avoid such failures in the future, it
could be worthwhile.
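Roughly what I have in mind (a sketch under my own assumptions, not
glibc code; the struct fields, helper names and the cycle threshold are
hypothetical): time the critical section only when we are already in
the fallback path, and keep a cheap running estimate that a tuning
decision could later consult.

    /* Sketch only: hypothetical bookkeeping for the slow (fallback) path.  */
    #include <x86intrin.h>   /* __rdtsc */
    #include <stdint.h>

    struct elision_stats {
      uint64_t acquire_tsc;       /* set in the slow path when the lock is taken */
      uint64_t avg_cs_cycles;     /* running estimate of critical-section length */
    };

    /* Called right after the fallback lock has been acquired.  */
    static inline void
    cs_measure_start (struct elision_stats *s)
    {
      s->acquire_tsc = __rdtsc ();
    }

    /* Called at unlock when the lock was taken via the fallback path.  */
    static inline void
    cs_measure_stop (struct elision_stats *s)
    {
      uint64_t len = __rdtsc () - s->acquire_tsc;
      /* Exponential moving average keeps the bookkeeping cheap.  */
      s->avg_cs_cycles = s->avg_cs_cycles - (s->avg_cs_cycles >> 3) + (len >> 3);
    }

    /* A tuning decision could consult the estimate, e.g. to skip elision
       for critical sections unlikely to fit in a transaction.  */
    static inline int
    cs_probably_too_long (const struct elision_stats *s)
    {
      return s->avg_cs_cycles > 50000;   /* made-up cutoff */
    }

Since the measurement only happens on acquisitions that already took
the slow path, it adds nothing to the elided fast path.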


