This is the mail archive of the ecos-devel@sourceware.org mailing list for the eCos project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]

Re: Re: NAND review


Ross Younger wrote:
Nick Garnett wrote:
<snip>
What is the typical life of a
block? 10,000 erases? A developer may conceivably replace the kernel
10,000 times, but in a deployed product I doubt this will happen. So I
say forget about wear leveling for this case.

I have been thinking about lifespan and how likely we are to come across
worn-bad blocks. The Samsung chip on the EA 2468 board specifies 100k
write/erase cycles per block and data retention of at least ten years, which
seems to be pretty typical for NAND parts, and comparable to most NOR parts?
Yes, but Spansion NOR parts specify 20 years of data retention.
I'm assuming here that a NAND block
goes bad because of writes and erases, not reads and age. So once the
image has been successfully written, it should be readable forever.

This is my understanding as well.
mine too.
Bad blocks are important. But a simple linear log structure should be
sufficient: when reading or writing, if the BBT says the block is good,
use it; if it is bad, skip it.
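The skip-on-bad approach described above can be sketched roughly as follows. This is a minimal illustration, not code from the patch under review; the array size, the `bbt` table, and the function name are all made up for the example.

```c
#include <stdbool.h>

#define NUM_BLOCKS 8

/* Hypothetical bad-block table: true = bad
 * (factory-marked or worn out). */
static bool bbt[NUM_BLOCKS] =
    { false, false, true, false, false, false, false, false };

/* Walk the array linearly, skipping blocks the BBT marks bad.
 * Returns the physical index of the n-th good block, or -1 if
 * there are not enough good blocks left. */
int nth_good_block(int n)
{
    for (int phys = 0; phys < NUM_BLOCKS; phys++) {
        if (bbt[phys])
            continue;       /* bad: skip it entirely */
        if (n-- == 0)
            return phys;
    }
    return -1;
}
```

With block 2 marked bad, logical block 2 lands on physical block 3, and everything after it shifts up by one; this is the "linear-map them into oblivion" scheme, and the scan from block 0 is where the per-operation time cost comes from.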

Properly NAND-aware filesystems take both kinds of bad blocks in their stride; there's no point in adding the complexity of such a mapping if you're using such a filesystem.

Even with something simpler, factory-bad blocks are straightforward: as you
say, one can simply linear-map them into oblivion in the driver at the cost
of extra time complexity *on every NAND operation*. (I add the emphasis
because I expect the overhead will add up fast if the NAND is heavily used.)

Worn-bad blocks are harder; you can't ever add one to the mapping, because
you'd have changed the in-flash address of everything above it.
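To make the worn-bad problem concrete: if the driver derives each logical block's position by scanning past the bad ones, then adding a newly worn block to the table moves every logical block above it to a different physical address, so data already in flash is no longer found where it was written. A small illustrative sketch (the table, sizes, and names are invented for the example, not taken from the patch):

```c
#include <stdbool.h>

#define NUM_BLOCKS 8

static bool bbt[NUM_BLOCKS];  /* true = bad */

/* Physical index of the n-th good block, skipping bad ones. */
static int nth_good_block(int n)
{
    for (int phys = 0; phys < NUM_BLOCKS; phys++) {
        if (bbt[phys])
            continue;
        if (n-- == 0)
            return phys;
    }
    return -1;
}

/* Returns how far logical block 5 moved in flash after a
 * worn-bad block was naively added to the table. */
int worn_bad_shift(void)
{
    /* Initially only factory-bad block 4 is in the table. */
    bbt[4] = true;
    int before = nth_good_block(5);   /* logical 5 -> physical 6 */

    /* Block 1 later wears out; naively add it to the table... */
    bbt[1] = true;
    int after = nth_good_block(5);    /* logical 5 -> physical 7 */

    /* ...and every logical block above 1 now points at different
     * flash, so previously written data appears to have moved. */
    return after - before;
}
```

This is exactly why factory-bad blocks (known before anything is written) are easy to map away, while worn-bad blocks are not.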

I've been thinking about this sort of question in the context of a simple
layer to make NAND look like NOR for RedBoot, and have had an interesting
idea which I'm going to propose on this list when I've written it up.


The rootfs is a more interesting issue. Linux is very unlikely to boot
into a useful system without its rootfs. I see two options here. You
could TFTP-boot Linux with an initramfs loaded to RAM, containing a
minimal system to perform a rootfs-on-NAND installation. Or
RedBoot has to somehow put the rootfs into the NAND partition.

You could also embed your entire rootfs into the kernel image as a cramfs.



This leads to three things:

1) Full read/write support for the FS in RedBoot. For NOR we normally
have only read support.

Do you mean jffs2?


At the moment my YAFFS layer is fully read-write, but I could add a
read-only switch if this was thought to be useful?

2) mkfs functionality.

I don't know about other filesystems, but YAFFS always scans the NAND on mount, and automatically does the Right Thing when asked to mount a fully-erased array. (My RedBoot patch already includes logic to erase a NAND partition.)

The same with JFFS2 (after a bugfix in my case).

Kind regards,
Juergen

