This is the mail archive of the
systemtap@sources.redhat.com
mailing list for the systemtap project.
Re: [RFC] Design + prototype: Multiple kprobes at an address - take 2
- From: Prasanna S Panchamukhi <prasanna at in dot ibm dot com>
- To: "Frank Ch. Eigler" <fche at redhat dot com>
- Cc: systemtap at sources dot redhat dot com
- Date: Fri, 8 Apr 2005 19:18:38 +0530
- Subject: Re: [RFC] Design + prototype: Multiple kprobes at an address - take 2
- References: <20050407141946.GB13813@in.ibm.com> <20050407144152.GA9723@redhat.com>
- Reply-to: prasanna at in dot ibm dot com
>
> > [...]
> > +static DEFINE_SPINLOCK(multprobe_lock);
>
> It would be better for this to be used as a read/write- rather than
> mutex- type spinlock, since the hot path (handler execution) is a
> read-only operation that should before long allow concurrent
> execution.
>
I will change it to read/write locks in my next patch.
>
> > [...]
> > /* common kprobes fault handler that gets control when the registered probe
> > * gets fired. This routine is a wrapper over the inserted multiple handlers
> > * at a given address and calls individual handlers.
> > */
>
> As we discussed, this iteration policy will likely need changes.
> Maybe the concept of a fault handler needs to be forked into
> protection *for* the pre/post-handlers, which should naturally be
> per-kprobes-client, and for the single-stepping phase, which is not.
>
> I actually still haven't seen a useful thing that a custom fault
> handler could do for the second case. Is there some known kprobing
> scenario where single-stepping faults are dealt with usefully *by* the
> custom handler (and not the generic unconditional kprobes code)?
>
We need to find some real-life scenario where single-stepping faults occur, and
allow the custom handler to deal with them. Later, users can reuse these standard
custom handlers.
> Also, is there no risk of deadlock by having this fault handler
> routine also take multprobe_lock, considering that the lock is also
> held around the entire pre/post handling iterations?
>
By changing it to a read lock, there should be no deadlock risk: the fault handler
and the pre/post-handler iteration both take the lock as readers, and readers do
not exclude one another.
>
> > [...]
> > int register_multiprobe(struct mult_handler *multh)
> > [...]
> > spin_lock_irqsave(&multprobe_lock, flags);
> >
> > spin_lock_irqsave(&kprobe_lock, flags1);
> > temp = get_kprobe(multh->kp.addr);
> > spin_unlock_irqrestore(&kprobe_lock, flags1);
> > [...]
>
> Can you explain what potential concurrency problems are prevented by
> explicitly holding kprobe_lock here (and at the unregister call)?
>
kprobe_lock is used by kprobes to insert/remove a probe to/from its list. The list
might be in an inconsistent state while an insertion or removal is in progress,
hence we grab the lock before we walk the list.
Thanks
Prasanna
--
Prasanna S Panchamukhi
Linux Technology Center
India Software Labs, IBM Bangalore
Ph: 91-80-25044636
<prasanna@in.ibm.com>