This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Synchronizing auxiliary mutex data


On Wed, 2017-06-21 at 12:36 +0300, Alexander Monakov wrote:
> On Wed, 21 Jun 2017, Andreas Schwab wrote:
> 
> > On Jun 21 2017, Alexander Monakov <amonakov@ispras.ru> wrote:
> > 
> > > Inside LLL_MUTEX_LOCK there's an atomic operation with acquire memory ordering.
> > > The compiler and the hardware are responsible, together, for ensuring proper
> > > ordering: the compiler may not move the load of __owner up prior to that atomic
> > > operation, and must emit machine code that will cause the CPU to keep the
> > > ordering at runtime (on some architectures, e.g. ARM, this implies emitting memory
> > > barrier instructions, but on x86 the atomic operation will be a lock-prefixed
> > > memory operation, enforcing proper ordering on its own).
> > 
> > Does that mean that an atomic operation flushes the cpu caches?
> 
> No, not at all: it only means that the CPU doesn't reorder the operations (so
> the cache subsystem receives the requests in the same order they were in the
> original program), and the cache subsystem serves them in that same order.

Just to make sure this isn't misunderstood: I think Alexander is talking
about the specific example here, whereas Andreas asked the general
question ("an atomic operation").  The general answer is that atomic
operations are atomic (i.e., indivisible steps), but how they are
ordered with respect to other operations, and which stores a given load
can read from, follows a more complex set of rules (and "reordering" is
a possibility); to understand these rules, I really recommend reading
the formalization of the C11/C++11 memory model by Batty et al.


