This is the mail archive of the
ecos-discuss@sources.redhat.com
mailing list for the eCos project.
Re: JFFS2 eats memory
- From: Andrew Lunn <andrew at lunn dot ch>
- To: Øyvind Harboe <oyvind dot harboe at zylin dot com>
- Cc: ecos-discuss at sources dot redhat dot com
- Date: Tue, 13 Jul 2004 09:40:52 +0200
- Subject: Re: [ECOS] JFFS2 eats memory
- References: <1089643331.3951.42.camel@famine>
On Mon, Jul 12, 2004 at 04:42:11PM +0200, Øyvind Harboe wrote:
> This issue has been discussed before, and although I have a workaround,
> I'd dearly like to have it put to bed since it is starting to cause
> problems elsewhere in my application:
>
> - My code opens a file for writing with O_TRUNC set, performs
> a single write call, closes the file.
> - After closing the file, JFFS2 has eaten memory.
> - With the attached modifications to JFFS2, it "only" eats 24 bytes.
> - If I unmount and remount JFFS2, no memory is "lost" and JFFS2 works
> fine.
>
> Presumably when the raw nodes in the file fragment list are marked as
> obsolete, they are no longer required, but are not freed.
>
> Q: Is this fundamentally impossible or a "bad idea" to fix?
How much memory are we talking about here in this example?
The cache is there for a reason, so I would not want to
unconditionally disable it. At the least you would need a CDL option to
choose between fat-but-fast and slim-but-slow behaviour.
I would also suggest you consider extending the
CYGNUM_FS_JFFS2_RAW_NODE_REF_CACHE_POOL_SIZE concept to the inode
cache. You can then control the size of the cache.
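Extending that concept might look something like the sketch below. CYGNUM_FS_JFFS2_RAW_NODE_REF_CACHE_POOL_SIZE is the existing option; the inode-cache option name, default value, and description here are hypothetical, shown only to illustrate the CDL shape:

```
cdl_option CYGNUM_FS_JFFS2_NODE_CACHE_POOL_SIZE {
    # Hypothetical option, modelled on the raw-node-ref cache pool
    display       "Size of JFFS2 inode cache pool"
    flavor        data
    default_value 0
    description   "
        If non-zero, allocate the JFFS2 inode cache from a fixed
        pool of this many entries instead of the heap, bounding
        the memory the cache can consume."
}
```

A value of 0 would keep the current heap-based behaviour, so existing configurations are unaffected.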
Andrew
--
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss