This is the mail archive of the systemtap@sources.redhat.com mailing list for the systemtap project.



Re: [RFC] Design + prototype: Multiple kprobes at an address - take 2


> 
> > [...]
> > +static DEFINE_SPINLOCK(multprobe_lock);
> 
> It would be better for this to be used as a read/write- rather than
> mutex- type spinlock, since the hot path (handler execution) is a
> read-only operation that should before long allow concurrent
> execution.
> 

I will change it to read/write locks in my next patch.

> 
> > [...]
> > /* Common kprobes fault handler that gets control when the registered probe
> >  * fires. This routine is a wrapper over the multiple handlers inserted
> >  * at a given address and calls the individual handlers.
> >  */
> 
> As we discussed, this iteration policy will likely need changes.
> Maybe the concept of a fault handler needs to be forked into
> protection *for* the pre/post-handlers, which should naturally be
> per-kprobes-client, and for the single-stepping phase, which is not.
> 
> I actually still haven't seen a useful thing that a custom fault
> handler could do for the second case.  Is there some known kprobing
> scenario where single-stepping faults are dealt with usefully *by* the
> custom handler (and not the generic unconditional kprobes code)?
> 

We need to find a real-life scenario where single-stepping faults occur and
let the custom handler deal with them. Later, users can take these as
standard custom handlers for handling such faults.

> Also, is there no risk of deadlock by having this fault handler
> routine also take multiprobe_lock, considering that the lock is also
> held around the entire pre/post handling iterations?
> 

With the change to a read lock, there should be no deadlock risk.

> 
> > [...]
> > int register_multiprobe(struct mult_handler *multh)
> > [...]
> > 	spin_lock_irqsave(&multprobe_lock, flags);
> > 
> > 	spin_lock_irqsave(&kprobe_lock, flags1);
> > 	temp = get_kprobe(multh->kp.addr);
> > 	spin_unlock_irqrestore(&kprobe_lock, flags1);
> > [...]
> 
> Can you explain what potential concurrency problems are prevented by
> explicitly holding kprobe_lock here (and at the unregister call)?
> 

kprobe_lock is used by kprobes to insert probes into, and remove them from,
the probe list. The list might be in an inconsistent state if kprobe_lock is
not held, hence we grab the lock before we walk the list.


Thanks
Prasanna


-- 
Prasanna S Panchamukhi
Linux Technology Center
India Software Labs, IBM Bangalore
Ph: 91-80-25044636
<prasanna@in.ibm.com>
