This is the mail archive of the ecos-discuss@sourceware.org mailing list for the eCos project.



Re: JFFS2 bug with overwriting files bigger than 50% of the file system?


On 07/28/2011 03:15 PM, Gunnar Ruthenberg wrote:
>
> Hi,
>
> I stumbled upon a problem with JFFS2.
>
> When overwriting a relatively large file (580 kB in a 896 kB flash 
> region),
> the file system breaks.
>
What is your block size, or more precisely, the size of the erase blocks?
Often it is 128 kB.
JFFS2 needs some spare blocks for garbage collection; I don't remember
whether the default is 3 or 5. With 3 reserved blocks that is 384 kB, so only
512 kB of your 896 kB region is actually available. If you force it to use
more than that, it _will_ cause problems.
>
>
> Plenty of errors like these ensue when trying to read the new, 
> overwritten file
> after a reboot:
>
> BUG() at ~/ecos-2.0.40/packages/fs/jffs2/v2_0_40/src/readinode.c 381
> <5>JFFS2 notice:  read_unknown: node header CRC failed at %#08x. But it must have been OK earlier.
> <4>Node totlen on flash (0xffffffff) != totlen from node ref (0x00000044)
> <4>JFFS2 warning:  jffs2_do_read_inode_internal: no data nodes found for ino #3
> <5>JFFS2 notice:  jffs2_do_read_inode_internal: but it has children so we fake some modes for it
> <4>JFFS2 warning:  jffs2_do_read_inode_internal: no data nodes found for ino #3
> <5>JFFS2 notice:  jffs2_do_read_inode_internal: but it has children so we fake some modes for it
>
> It does not matter if the new file is identical with the old file.
> Unlinking and then writing the file again causes the same result.
>
FYI: JFFS2 is a log-structured file system; it is worth reading up on that if
you are not familiar with it (I can give some pointers). It means that you
cannot overwrite a file in place: instead, the old file is marked as obsolete
(in the file system metadata) and the new version is appended to the log. When
the file system is full, garbage collection (GC) starts recovering blocks,
using those spare blocks.
GC can take some time, during which all file system accesses are frozen (e.g.
a TFTP server), which can result in a TFTP timeout in the client. If GC then
frees a block and the client resends a TFTP packet, the same data portion can
end up stored twice... I have seen several file system problems when stress
testing it.
Also note that on flash you cannot delete individual bytes, only whole erase
blocks.
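To make the append-only behaviour concrete, here is a toy C model. It is NOT
the real JFFS2 code, and the block geometry (7 blocks of 128 kB) and the
log_append helper are purely illustrative; it only shows why rewriting a
580 kB file in an 896 kB region runs out of clean blocks before the old copy
can be reclaimed:

    /* Toy model of an append-only (log-structured) flash: every write
     * appends; an "overwrite" only marks the old nodes obsolete, and
     * space is reclaimed later, one whole erase block at a time.     */
    #include <stdio.h>

    #define BLOCKS   7      /* 896 kB region / 128 kB erase blocks (assumed) */
    #define BLOCK_KB 128

    static unsigned used_kb[BLOCKS];  /* data (live or obsolete) per block */
    static unsigned head = 0;         /* block currently being filled      */

    /* Append size_kb of data; return 0 on success, -1 when no clean block
     * is left and garbage collection would have to run first.            */
    static int log_append(unsigned size_kb)
    {
        while (size_kb > 0) {
            unsigned room = BLOCK_KB - used_kb[head];
            if (room == 0) {
                if (++head == BLOCKS)
                    return -1;
                continue;
            }
            unsigned chunk = (size_kb < room) ? size_kb : room;
            used_kb[head] += chunk;
            size_kb       -= chunk;
        }
        return 0;
    }

    int main(void)
    {
        printf("first write:  %d\n", log_append(580));  /* fits: 0   */
        /* Rewriting the file appends another 580 kB before the old
         * copy can be erased, which no longer fits in 896 kB.       */
        printf("second write: %d\n", log_append(580));  /* fails: -1 */
        return 0;
    }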

> If this is more or less a known problem, how likely is it that small 
> files (say,
> being smaller than 50% of the file system's storage space) cause this 
> behaviour
> as well?
>
> I noticed that the current version in the eCos source tree lags a bit 
> behind the
> code in the Linux kernel. Maybe this issue has been fixed there 
> already and only
> requires a port of the current code to eCos.
>
Indeed, a volunteer has been needed for a long time to update the eCos
JFFS2 code.

Regards,
Jürgen
>
>
> I'd be grateful for any ideas about this problem, even if it is just 
> pointing me
> to the linux-mtd mailing list.
>
> Thanks,
>
> Gunnar Ruthenberg.
>
>
> --
> NEW: FreePhone - 0 ct/min mobile savings tariff with money-back guarantee!
> Jetzt informieren: http://www.gmx.net/de/go/freephone
>
> --
> Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
> and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss
>


-- 
Jürgen Lambrecht
R&D Associate
Tel: +32 (0)51 303045    Fax: +32 (0)51 310670
http://www.televic-rail.com
Televic Rail NV - Leo Bekaertlaan 1 - 8870 Izegem - Belgium
Company number 0825.539.581 - RPR Kortrijk
