This is the mail archive of the ecos-discuss@sources.redhat.com mailing list for the eCos project.



Re: ethernet performance <TCPIP guru question>


Hi Guys,

I've also been looking at some minor performance improvements to
the current eCos TCP/IP stack. A very simple change would be to
revert the call to ip_randomid() in ip_output(), where it fills
in the IP header, back to simply incrementing the ID. From what
I've heard, this change was made to improve the security of the
stack; however, its overhead relative to the original method is
significant.
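
For reference, the whole thing comes down to one assignment in ip_output()
when the header is filled in - roughly the following, using the usual BSD
names (the exact identifiers in the eCos tree may differ):

    /* current form: randomised IP IDs via ip_randomid() */
    ip->ip_id = ip_randomid();

    /* original form: a simple incrementing counter */
    ip->ip_id = htons(ip_id++);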

Maybe we could add an option to the eCos configuration tool to let
users selectively revert to the original ID-increment method?
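Something along these lines would do, where the configuration symbol below
is invented purely for illustration (it isn't an existing eCos option):

    #ifdef CYGSEM_NET_RANDOM_IP_ID          /* hypothetical CDL symbol */
            ip->ip_id = ip_randomid();      /* keep the current, more secure IDs */
    #else
            ip->ip_id = htons(ip_id++);     /* original, cheaper increment */
    #endif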

Steve


Gary Thomas wrote:
> 
> On Fri, 2002-04-26 at 05:30, David Webster wrote:
> > Hi,
> > I've also started looking at the network performance of our eCos based
> > product recently using ttcp to provide performance figures. I get the
> > following figures:
> >       ttcp client:    2.1 Mbps (i.e. eCos mainly tx'ing data)
> >       ttcp server:    4.0 Mbps (i.e. eCos mainly rx'ing data)
> >
> 
> Would you be willing to share/contribute these?  They would make very
> good test/examples to put into the networking code.
> 
> > The inconsistency and some of the poor performance can be put down to
> > our Ethernet hardware interface - we're using a Realtek 8019 which we
> > have to access through a slow 16-bit FIFO.
> >
> > But there are other areas of the network stack that seem to introduce
> > inefficiencies, namely the number of memcpy's that occur in the stack.
> > I've noticed the following happen when I perform a socket write with an
> > 8K buffer:
> >
> >       sosend() - allocates four 2K mbufs and calls uiomove() to memcpy()
> > the data from the buffer supplied to write() into each mbuf.
> >
> >       Each of these 2K mbufs is passed to tcp_output(), which creates
> > two TCP segments for this data by allocating two mbufs and calling
> > m_copydata() to memcpy() the data from the original mbuf into the mbuf
> > used by each TCP segment. This results in two Ethernet frames, one with
> > 1448 and the other with 600 bytes of data, for each of the 2K mbufs.
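> >
> >       From a quick read of the sources the two copies happen at roughly
> > these call sites (paraphrased from the BSD code, so exact arguments may
> > differ in the eCos tree):
> >
> >             /* sosend(): first copy, user buffer -> 2K cluster mbufs */
> >             error = uiomove(mtod(m, caddr_t), (int)len, uio);
> >
> >             /* tcp_output(): second copy, socket-buffer mbufs -> segment */
> >             m_copydata(so->so_snd.sb_mb, off, (int)len,
> >                 mtod(m, caddr_t) + hdrlen);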
> >
> > So there are two memcpy's within the TCP stack.
> > Has anybody looked at providing a socket write function that accepts
> > data already in an mbuf and bypasses uiomove()?
> > Equally, a function that performed the memcpy and calculated the
> > checksum in the same pass might also give an efficiency improvement.
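> >
> >       Just to illustrate the idea, the write-side interface needn't be
> > anything more than something like this completely hypothetical prototype
> > (nothing of the sort exists in the tree today):
> >
> >             /* hypothetical: caller hands over a ready-built mbuf chain,
> >                skipping the uiomove() copy in sosend() */
> >             int so_write_mbuf(struct socket *so, struct mbuf *m, int flags);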
> >
> > There's a similar number of memcpy's on a socket read - but I've not
> > really looked at that path - I imagine that a read which could bypass
> > uiomove() and pass back the mbuf with the received data would also be
> > more efficient.
> 
> We know about the extra data sloshing :-(  Someday, perhaps Providence
> will smile on us and provide the time necessary to do something about
> it.
> 
> >
> > I've also noticed that by default the tcp_do_rfc1323 flag (in
> > tcp_subr.c) is set. This results in TCP using the Window Scale and
> > Timestamp options. The timestamp option in particular is sent in every
> > segment, "wastes" 12 bytes of each Ethernet frame and adds additional
> > processing overhead. My understanding of RFC 1323 is that the timestamp
> > option only really helps over long-fat-pipe connections, and I'm not
> > sure it's really necessary on 100 Mbps Ethernet - definitely not on
> > 10 Mbps. So disabling it might help a (tiddly) bit.
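> >
> >       For a quick experiment it should be enough to clear the flag before
> > the stack starts sending traffic - roughly this, untested:
> >
> >             /* tcp_subr.c ships with:  int tcp_do_rfc1323 = 1;  */
> >             extern int tcp_do_rfc1323;
> >
> >             /* once at init time: drop window-scale + timestamp options */
> >             tcp_do_rfc1323 = 0;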
> >
> 
> Have you experimented with this at all?
> 

-- 
Before posting, please read the FAQ: http://sources.redhat.com/fom/ecos
and search the list archive: http://sources.redhat.com/ml/ecos-discuss

