This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: MBUF leak


From: "Andrew Lunn" <andrew.lunn@ascom.ch>
To: "eCos Disuss" <ecos-discuss@sourceware.cygnus.com>
Sent: Friday, October 05, 2001 7:36 AM
Subject: [ECOS] MBUF leak


> Re: the mbuf/cluster leak message...
>


Let me update this one more time....


> If you look at the trace, you see that both the number of mbufs and
> the number of clusters slowly decrease. You happen to run out of
> clusters first, but that's because there are fewer of them. Your real
> problem is an mbuf leak. An mbuf is used to hold part of a packet. If
> it has to hold more than about 100 bytes, a cluster is attached to the
> mbuf, which can hold up to 2K bytes. If you lose an mbuf, you also
> lose the cluster that's attached.
>
> My experience is that the basic IP stack is OK. Leaks happen in either
> the ethernet device driver or the eth_drv.c layer. That's where I would
> start looking. Try to use a UDP test program. Fire UDP packets at the
> target and see if it leaks. If so, it's the receive path that has the
> problem. If not, try sending UDP packets from the target. If that
> leaks, you know it's the TX path that has a problem. Again, from
> experience, problems happen during some sort of overload
> condition. Check what happens when the RX or TX ring buffer overflows
> in the device driver, etc.
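
For reference, the mbuf/cluster pairing described above follows the classic
BSD idiom, roughly like this (a sketch using the standard BSD macros, not
the exact eCos source):

#include <sys/param.h>
#include <sys/mbuf.h>

struct mbuf *alloc_packet_buffer(int len)
{
    struct mbuf *m;

    MGETHDR(m, M_DONTWAIT, MT_DATA);  /* small mbuf, ~100 bytes of data */
    if (m == NULL)
        return NULL;

    if (len > MHLEN) {
        MCLGET(m, M_DONTWAIT);        /* attach a 2K cluster */
        if ((m->m_flags & M_EXT) == 0) {
            m_freem(m);               /* no cluster available */
            return NULL;
        }
    }
    m->m_len = len;
    return m;   /* leak this mbuf and the attached cluster leaks too */
}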


So far, I have written a small server/client pair using UDP sockets. The eCos
target is the server receiving the packets. It does not seem to leak, even
though many packets are dropped. That might be OK: my development
machine is orders of magnitude faster than the eCos target.

The test is: the server creates one socket only, binds it, and calls recvfrom
on the same socket over and over again. I ran the test many times,
and at the end the mbuf counts are always OK. So, receiving UDP packets
does not seem to leak.
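
For the curious, the receive loop looks roughly like this (a minimal sketch,
with error checking trimmed; the port number is just a placeholder):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define SERVER_PORT 7777              /* placeholder port */

void udp_rx_test(void)
{
    int s, n;
    char buf[256];
    struct sockaddr_in local, from;
    socklen_t fromlen;

    /* One socket, created and bound once. */
    s = socket(AF_INET, SOCK_DGRAM, 0);
    memset(&local, 0, sizeof(local));
    local.sin_family      = AF_INET;
    local.sin_port        = htons(SERVER_PORT);
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&local, sizeof(local));

    /* Then recvfrom() on the same socket over and over again. */
    for (;;) {
        fromlen = sizeof(from);
        n = recvfrom(s, buf, sizeof(buf) - 1, 0,
                     (struct sockaddr *)&from, &fromlen);
        if (n > 0) {
            buf[n] = '\0';
            printf("SERVER : Message : %s\n", buf);
        }
    }
}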

I'll try the TX path now.
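
The TX test will be the mirror image: blast sendto() calls from the target
and watch the mbuf stats between runs. Something like this (again only a
sketch; the peer address and port are placeholders):

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

void udp_tx_test(void)
{
    int s, i;
    static const char msg[] = "Hello from Target";
    struct sockaddr_in peer;

    s = socket(AF_INET, SOCK_DGRAM, 0);
    memset(&peer, 0, sizeof(peer));
    peer.sin_family      = AF_INET;
    peer.sin_port        = htons(7777);              /* placeholder */
    peer.sin_addr.s_addr = inet_addr("192.168.1.2"); /* placeholder */

    for (i = 0; i < 10000; i++)
        sendto(s, msg, sizeof(msg), 0,
               (struct sockaddr *)&peer, sizeof(peer));
}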

If anybody is interested in the code, I can post it here. The client is Java
because it is easy to run on many development systems.

Rosimildo.

-----------------------------------------------------------------
Misc mpool: total   65520, free   61568, max free block 60612
Mbufs pool: total   65408, free   64512, blocksize  128
Clust pool: total  131072, free  126976, blocksize 2048
SERVER : Message : Hello from Client -> 3
Network stack mbuf stats:
   mbufs 4, clusters 1, free clusters 1
   Failed to get 0 times
   Waited to get 0 times
   Drained queues to get 0 times
Misc mpool: total   65520, free   61568, max free block 60612
Mbufs pool: total   65408, free   64768, blocksize  128
Clust pool: total  131072, free  126976, blocksize 2048
SERVER : Message : Hello from Client -> 2
Network stack mbuf stats:
   mbufs 2, clusters 1, free clusters 1
   Failed to get 0 times
   Waited to get 0 times
   Drained queues to get 0 times
Misc mpool: total   65520, free   61568, max free block 60612
Mbufs pool: total   65408, free   65024, blocksize  128
Clust pool: total  131072, free  126976, blocksize 2048
SERVER : Message : Hello from Client -> 1
Network stack mbuf stats:
   mbufs 2, clusters 1, free clusters 1
   Failed to get 0 times
   Waited to get 0 times
   Drained queues to get 0 times
Misc mpool: total   65520, free   61568, max free block 60612
Mbufs pool: total   65408, free   65024, blocksize  128
Clust pool: total  131072, free  126976, blocksize 2048
SERVER : Message : Hello from Client -> 0
Network stack mbuf stats:
   mbufs 0, clusters 1, free clusters 1
   Failed to get 0 times
   Waited to get 0 times
   Drained queues to get 0 times
Misc mpool: total   65520, free   61568, max free block 60612
Mbufs pool: total   65408, free   65280, blocksize  128
Clust pool: total  131072, free  126976, blocksize 2048