This is the mail archive of the
ecos-discuss@sources.redhat.com
mailing list for the eCos project.
Re: Reducing memory requirements in PPP
- From: Øyvind Harboe <oyvind dot harboe at zylin dot com>
- To: Andrew Lunn <andrew at lunn dot ch>
- Cc: ecos-discuss at sources dot redhat dot com
- Date: Fri, 16 Jul 2004 09:24:42 +0200
- Subject: Re: [ECOS] Reducing memory requirements in PPP
- References: <1089899280.3167.18.camel@famine> <20040715212315.GB2306@lunn.ch>
On Thu, 2004-07-15 at 23:23, Andrew Lunn wrote:
> On Thu, Jul 15, 2004 at 03:48:00PM +0200, Øyvind Harboe wrote:
> > This is quite interesting, since it begs the question of whether
> > the current pool based allocation scheme for networking gives
> > the optimum memory utilisation when memory is tight.
>
> This is just a thumb-in-the-air guess. That's why there is a CDL
> control, so you can optimize it. Play with CYGPKG_NET_MEM_USAGE to find
> out where it breaks for your application and then make it a little
> bigger.
There is a fundamental problem with NET_MEM_USAGE that tweaking can't
address:
- 25% is allocated to the NET_MEMPOOL_SIZE pool, which replaces
malloc(). So if you need 1 byte more in that pool, you have to increase
NET_MEM_USAGE by 4 bytes.
net/bsd_tcpip/current/src/ecos/support.c:
#define NET_MEMPOOL_SIZE roundup(CYGPKG_NET_MEM_USAGE/4,MSIZE)
#define NET_MBUFS_SIZE roundup(CYGPKG_NET_MEM_USAGE/4,MSIZE)
#define NET_CLUSTERS_SIZE roundup(CYGPKG_NET_MEM_USAGE/2,MCLBYTES)
- similarly, I wonder whether NET_MBUFS_SIZE and NET_CLUSTERS_SIZE
suffer from the same kind of "granularity loss".
Possible solutions:
- make NET_MEMPOOL_SIZE, NET_MBUFS_SIZE, NET_CLUSTERS_SIZE default to
their current values, but make them separately controllable via CDL.
- and/or, use malloc() where appropriate.
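A sketch of what the first option might look like as a CDL fragment. The option name CYGNUM_NET_MEMPOOL_SIZE is hypothetical (no such option exists in the package today), and the default expression simply preserves the current behaviour:

    cdl_option CYGNUM_NET_MEMPOOL_SIZE {
        display       "Size of the network memory pool"
        flavor        data
        default_value CYGPKG_NET_MEM_USAGE / 4
        description   "Memory reserved for the pool that replaces
                       malloc() in the networking stack. Defaults to a
                       quarter of CYGPKG_NET_MEM_USAGE, matching the
                       current hard-wired split in support.c."
    }

Analogous options for NET_MBUFS_SIZE and NET_CLUSTERS_SIZE would then let each pool be tuned without inflating the other two.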
--
Øyvind Harboe
http://www.zylin.com
--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss