This is the mail archive of the systemtap@sourceware.org mailing list for the systemtap project.



Re: Questions on network and block device related IRQ handling


On 12/09/2015 10:13 PM, Shane Wayne wrote:
> Hi all:
> I've recently been doing some research on network and block device
> related IRQ handling, so I used some database and network cache
> applications as the testbed.
> I used the following probes:
> netdev receive
> netdev transmit
> netdev hard_transmit
> ioblock request
> ioblock end
> 
> irq_handler.entry
> irq_handler.exit
> softirq.entry
> softirq.exit
> 
> After the profiling, I noticed that when netdev receive is triggered,
> it always appears within a softirq, and the softirq has an irq_handler
> in front of it. That means the sequence is like:
> 
> irq_handler.entry
> irq_handler.exit
> softirq.entry
>     one or several netdev receive
> softirq.exit
> 
> And I found that the irq could happen in any process, even in my
> application's context, but in most conditions it was found in the
> swapper process.
> 
> And when it comes to netdev transmit, it seems that this action does
> not involve an irq? The sequence just happens in the application context.
> 
> netdev transmit begin
>         netdev hard_transmit begin
>         netdev hard_transmit end
> netdev transmit end
> 
> And it doesn't use an irq?
> 
> So here is my question:
> 1, it seems that the irq_handler from the network device is just
> responsible for notifying of packet arrival, and the actual handling
> is done by the following softirq?

Yes, that is correct.  The irq handlers try to minimize the time they run.  The irq handler mainly schedules work so that the majority of it is done later in the softirq.

http://www.makelinux.net/ldd3/chp-10-sect-4
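
As a quick illustration (my own sketch, not output from your run), a
small script like this shows the nesting you describe -- netdev.receive
firing while the cpu is still inside the softirq that the hard irq
scheduled:

global in_softirq

probe softirq.entry { in_softirq[cpu()] = 1 }
probe softirq.exit  { in_softirq[cpu()] = 0 }

probe netdev.receive {
    # dev_name and length come from the networking tapset
    printf("cpu%d %s: receive on %s len=%d %s\n",
           cpu(), execname(), dev_name, length,
           in_softirq[cpu()] ? "(in softirq)" : "(not in softirq)")
}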
> 
> 2, the irq handling, it seems, could happen in any process and does
> not stop the current context, but when the server is not heavily
> loaded, the irq prefers to happen in the swapper process?

irqs can happen any time they are not masked.  There are sections of kernel code that disable them to make sure that certain operations are atomic.
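
If you want to see that distribution directly, something like this
(again, just a sketch I'm adding here) tallies which process context
each hard irq lands in; on a mostly idle box the top entry will usually
be swapper, the per-cpu idle task:

global irq_ctx

probe irq_handler.entry { irq_ctx[execname()]++ }

probe end {
    # print the ten most frequently interrupted process names
    foreach (name in irq_ctx- limit 10)
        printf("%-16s %d\n", name, irq_ctx[name])
}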

> 
> 3, Or is it just because the swapper takes more cpu time when the
> server is not heavily loaded?
> 
> 4, do netdev send or ioblock request really not need the irq's assistance?

The netdev and ioblock work needs to be scheduled.  The irqs indicate that the device needs feeding/attention.  The alternative would be to do polling, which would probably be less efficient.  However, the NAPI network code does go into a polling mode to reduce the number of interrupts
(https://en.wikipedia.org/wiki/New_API).
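
One way to watch NAPI's effect is to compare the number of hard irqs a
NIC raises with the number of packets delivered; under load the ratio
goes well above one packet per interrupt.  A rough sketch -- the irq
number 24 and device name "eth0" are placeholders, check
/proc/interrupts for your NIC:

global nic_irqs, rx_pkts

probe irq_handler.entry { if (irq == 24) nic_irqs++ }         # placeholder irq number
probe netdev.receive    { if (dev_name == "eth0") rx_pkts++ } # placeholder device

probe timer.s(10) {
    printf("last 10s: %d irqs, %d packets (%d pkts/irq)\n",
           nic_irqs, rx_pkts, nic_irqs ? rx_pkts / nic_irqs : 0)
    nic_irqs = 0
    rx_pkts = 0
}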

> 
> 5, if this is true, how could we determine when netdev transmit
> finishes its job? is the time of netdev hard_transmit end correct?

Actually it can be hard to determine precisely when the transmit has finished sending the packet.  The packet is given to the device, but it could take some time for the device to complete sending it on the wire.  One could look at when the packet buffer is freed for an upper limit on the time it took to send.
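
A rough way to get at that upper bound -- and this is just a sketch of
mine, the function names and the $skb access are assumptions that
depend on your kernel version -- is to timestamp the skb when it is
handed to the driver in dev_hard_start_xmit() and report when the same
skb is freed via consume_skb():

global handed_off

probe kernel.function("dev_hard_start_xmit") {
    handed_off[$skb] = gettimeofday_us()
}

probe kernel.function("consume_skb") {
    t = handed_off[$skb]
    if (t) {
        printf("skb %p freed %d us after hard_transmit\n",
               $skb, gettimeofday_us() - t)
        delete handed_off[$skb]
    }
    # note: dropped packets go through kfree_skb() instead, so their
    # entries would linger in the array; this is only a sketch
}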


-Will

