This is the mail archive of the ecos-devel@sourceware.org mailing list for the eCos project.



Re: NAND review


> Nick Garnett wrote:
>> I believe that an API that is partition-aware from the start is
>> preferable.

Andrew Lunn wrote:
> and I prefer an API which works well with and without partitions.

The partitions are already a pretty slim layer on top of the main code; I
don't think that thinning things down would save very much. Still, it would
be little work for me to fillet the API so that it provided two forms of
each of the main entrypoints, one taking a device* and the other a
partition*. They probably ought to be switchable in and out by CDL, and I
suspect not both present at once. (Naturally, RedBoot and YAFFS would DTRT,
whatever was present.) If this sounds like a welcome move, I'll gladly add
it to my todo list.


> Category 1: Getting Linux booted
>
> 1) A dedicated partition which contains the kernel and then addition
> partitions for the root FS and maybe other FSs.
> 2) No dedicated kernel partition, the kernel is in /boot on the root
> FS.
> 
> Having the kernel separate could make it easier/faster to update
> during development, since you don't need to touch the rootfs. And the
> same is true the other way around, you can replace the rootfs without
> touching your known good kernel.

I'd personally steer people towards a dedicated kernel partition. RedBoot
has a small heap by default, and a full rootfs would necessarily have quite
a complex journal to replay, which would consume much of that heap, not to
mention take longer. Having the two partitions separate also allows the
possibility of reformatting the rootfs in the field without having to
upgrade your boot loader at the same time, which sounds to me like a risky
"don't turn off the power or you'll brick the device" sort of operation.

> What is the typical life of a
> block? 10000 erases? A developer may conceivably replace the kernel
> 10,000 times, but in a deployed product I doubt this will happen. So I
> say forget about wear leveling for this case.

I have been thinking about lifespan and how likely we are to come across
worn-bad blocks. The Samsung chip on the EA 2468 board specifies 100k
write/erase cycles per block and data retention of at least ten years, which
seems to be pretty typical for NAND parts, and comparable to most NOR parts?
(At 100k cycles, even erasing the same block ten times a day would take
roughly 27 years to wear it out.)

> I'm assuming here that a NAND block
> goes bad because of writes and erases, not reads and age. So once the
> image has been successfully written, it should be readable forever.

This is my understanding as well.

> Bad block are important. But a simple linear log structure should be
> sufficient. When reading/writing if the BBT says the block is good,
> use it, if it is bad, skip it. 

Properly NAND-aware filesystems take both kinds of bad blocks in their
stride; there's no point in adding the complexity of such a mapping if
you're using such a filesystem.

Even with something simpler, factory-bad blocks are straightforward: as you
say, one can simply linear-map them into oblivion in the driver at the cost
of extra time complexity *on every NAND operation*. (I add the emphasis
because I expect the overhead will add up fast if the NAND is heavily used.)

Worn-bad blocks are harder; you can't ever add one to the mapping, because
you'd have changed the in-flash address of everything above it.

I've been thinking about this sort of question in the context of a simple
layer to make NAND look like NOR for RedBoot, and have had an interesting
idea which I'm going to propose on this list when I've written it up.


> The rootfs is a more interesting issue. Linux is very unlikely to boot
> into a useful system without its rootfs. I see two options here. You
> could tftp boot using a initramfs loaded to RAM to boot into linux
> with a minimal system to perform a rootfs on NAND installation. Or
> redboot has to somehow put the rootfs into the NAND partition. 

You could also embed your entire rootfs into the kernel image as a cramfs.


> This leads to three things:
> 
> 1) Full read/write support for the FS in Redboot. For NOR we normally
>    have only read support.

Do you mean jffs2?

At the moment my YAFFS layer is fully read-write, but I could add a
read-only switch if this was thought to be useful?

> 2) mkfs functionality.

I don't know about other filesystems, but YAFFS always scans the NAND on
mount, and automatically does the Right Thing when asked to mount a
fully-erased array. (My RedBoot patch already includes logic to erase a NAND
partition.)

> 3) Redboot needs to support some simple archive format, cpio, tar,
>    etc, so you can upload the rootfs as an archive and unpack it onto
>    the filesystem. And given that the NAND is likely to be bigger than
>    RAM, maybe this has to be done as a stream.
> 
> Unlike NOR, you cannot write a filesystem image to a NAND
> partition. So you need to write individual files, using the file
> system code itself. You also need to be able to create the empty file
> system, so mkfs is needed.

The original YAFFS tree includes a mkyaffs2image utility for turning a
directory tree into an image file which can then be programmed to NAND with
suitable software. (Being log-based, I suppose bad blocks wouldn't matter.)
I haven't looked into this, but it smells pretty straightforward to add
support to RedBoot to program such an image into NAND - much nicer than
teaching it how to unpack (say) a tarball.


> Once you have redboot with mkfs support, you have all you need to
> support dynamic partition sizes. It's then just a question of having
> somewhere to store this dynamic information, and FIS or NandFIS seems
> the obvious candidate.

Dynamic partitions are conceptually not very far from the current model. The
partition "table" is just an array hanging off the cyg_nand_device struct,
which could in theory be modified by any code that chose to - either
programmatically, or interactively e.g. from RedBoot. The interesting part
is deciding where
and how the table is to be stored, which influences how we would provide a
call to write out a fresh partition table and whether this was accessible
via a hosted Linux partition. (We'd also have to think carefully what it
meant to change the table at runtime; what if there were mounted
filesystems? open files? Would (could) we make it safe?)


Ross

-- 
Embedded Software Engineer, eCosCentric Limited.
Barnwell House, Barnwell Drive, Cambridge CB5 8UU, UK.
Registered in England no. 4422071.                  www.ecoscentric.com

