8

I don't suppose there's something in the MMC/SD card specification for retrieving any information on erase counts on an MMC/SD card, is there?

My goal is to get my embedded system to avoid writing to metadata like last-access or modified times, allocate moderately sized files filled with 0xFF as needed, and only append records within them.

This is to reduce the risk of data loss, since power can be lost at any time.

However, the wear-leveling algorithms of MMC/SD cards are an unknown, and possibly implemented very poorly. I need to verify that the cards don't attempt to erase blocks if I'm only writing data over 0xFFs. So, if there were just about any kind of erase count (total for the disk, per block, whatever) available to read... that'd be great.
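For illustration, here's a minimal sketch of the append-only scheme described above (hypothetical helper names, not from any existing library; assumes a POSIX-style API and a filesystem mounted so that access-time metadata isn't rewritten, e.g. `noatime`):

```c
/* Sketch of the append-only record scheme: pre-allocate a file full of
 * 0xFF (the erased-flash state), then drop fixed-size records into
 * successive 512-byte slots, syncing each one before power can be lost.
 * Helper names are hypothetical. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Pre-allocate a file of `size` bytes filled with 0xFF. */
int preallocate_ff(int fd, size_t size)
{
    uint8_t chunk[512];
    memset(chunk, 0xFF, sizeof chunk);
    for (size_t off = 0; off < size; off += sizeof chunk) {
        size_t n = size - off < sizeof chunk ? size - off : sizeof chunk;
        if (pwrite(fd, chunk, n, (off_t)off) != (ssize_t)n)
            return -1;
    }
    return fsync(fd);
}

/* Append one record into the next 512-byte-aligned slot and flush it. */
int append_record(int fd, off_t slot, const void *rec, size_t len)
{
    if (len > 512)
        return -1;                /* one record per 512-byte sector */
    if (pwrite(fd, rec, len, slot * 512) != (ssize_t)len)
        return -1;
    return fsync(fd);             /* force data out before power loss */
}
```

Whether writing into the 0xFF region actually avoids an erase cycle is exactly the open question, of course.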

I'm not entirely sure where this question lives... but since it involves SD card protocol level stuff, I figured maybe here.

EDIT

I believe I will go ahead and overcomplicate things. Disk tests proved that at least the SD cards I have will erase blocks even if the data you write is unchanged from the contents on disk. I'll store up to 128KB of data in directly controllable NAND (which I can control write behavior somewhat better on), then write 128KB chunks into a 128KB-aligned file on the VFAT partition. That should limit the exposure about as much as possible... but wow how ugly and complicated.
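As a rough sketch of that staging approach (hypothetical names; the raw-NAND staging area is represented here by an in-RAM buffer for illustration, and 128KB is assumed as the worst-case erase-block size):

```c
/* Accumulate records into a 128KB staging buffer; once it is full, the
 * whole buffer is written as one 128KB-aligned chunk into the VFAT file.
 * Names and sizes are assumptions, not from any existing API. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE (128 * 1024)   /* assumed worst-case erase-block size */

struct chunk_buf {
    uint8_t data[CHUNK_SIZE];
    size_t  used;
};

/* Returns 1 when the buffer is full (ready to flush as one aligned
 * chunk), 0 when the record was buffered, -1 when it doesn't fit. */
int chunk_add(struct chunk_buf *b, const void *rec, size_t len)
{
    if (b->used + len > CHUNK_SIZE)
        return -1;                /* caller must flush first */
    memcpy(b->data + b->used, rec, len);
    b->used += len;
    return b->used == CHUNK_SIZE;
}
```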

darron
  • 2
    On my next data logger, I'm considering redundantly writing everything twice, to 2 memory cards: first to one card, and then, after I'm sure the first write is finished, the same data to the other. No matter when the batteries fail, at worst one card will be corrupted, and all the data (except perhaps the last block) is safe on the other card. – davidcary Dec 11 '11 at 13:38

2 Answers

5

I don't know whether particular SD cards expose wear-leveling information, but for the most part I would suggest that your desire to avoid erasing blocks that hold FF's is misplaced. Even if a virtual disk block happens to only contain FF's, it will almost certainly contain other addressing information and error-correction data which will have to be rewritten if any changes are made to the block, regardless of its previous content.

I believe SD card manufacturers are free to select their own algorithms for deciding when to rewrite blocks which haven't been accessed for awhile, and for ensuring data integrity in the event of power failure. Consequently, I don't know of any particular method of ensuring that an SD card won't get corrupted if the power fails during a write.

supercat
  • Ah, great point. I forgot about the ECC bits. That pretty much blows that idea. Hmm... I guess this question is done... I'll go ask the Linux board about reliable filesystems in this situation. – darron Dec 10 '11 at 07:57
  • @darron: It's an interesting question; I upvoted it (someone else downvoted it). Since MMC and CompactFlash cards put a virtual block-mapping layer on top of a raw flash device, I don't think they're likely to expose the same wear-leveling details the way something like SmartMedia does. While virtualization-based standards like SD/MMC are more adaptable to changing technologies than "raw bits" standards like SmartMedia, for some applications there are definitely advantages to knowing what's actually going on. – supercat Dec 10 '11 at 18:35
  • 1
    @darron: I don't know if you're at all familiar with how modern flash drives work, but they're generally designed to be written in pages of 528 bytes, while only being erasable in much larger blocks (I think 32KB, but perhaps 128KB or even bigger). If a request is made to write a sector, the flash-drive will find a blank page if there is one, write the new sector there, and somehow indicate that the new page is the "real" one for that sector and the old one is obsolete. If the number of blank pages available falls to being near a block's worth (or at various other times), ... – supercat Dec 10 '11 at 18:41
  • 1
    @darron: ...the system will try to find a block which has the most obsolete pages on it, copy all of the pages from that block to blank pages, and then erase the entire block. One problem with this approach is that a disk which has few blank pages but has e.g. one "obsolete" page per erase block may report itself as having lots of room available, but writing each page will require erasing a block and copying a page worth of data to the new block. S-l-o-w. – supercat Dec 10 '11 at 18:47
  • Hmm... yes, if ECC is per 512-byte block, as 528 suggests, this may still work... if I only write in 512-byte increments. I knew about the erase-block sizes and planned on doing this in 128KB blocks (figuring most cards' erase blocks would be that size or smaller). – darron Dec 10 '11 at 18:54
  • Delaying for 512 bytes in my situation would mean a maximum delay of 15-30 seconds. That's acceptable. I also determined a way to test the erase behavior of cards without access to erase counter type information... time it! I'll write an app to do writes of 512 bytes at a time in a file. One will append 0s only into a file pre-initialized to 0xFFs. The other will write in ways that force erases. After a few minutes of running, it should show a (significant) difference in time taken if the append method is not forcing erasing. I can even make this test part of a card initialization routine. – darron Dec 10 '11 at 19:06
  • I doubt any SD controller is going to try to exploit the possibility of turning an FFFF... block into something else without an erase cycle. It would be possible to design a controller and ECC algorithm in such a way as to allow that, but consider that writing all FF's to sector 19543 really creates a page saying "Version 39191 of sector 19543 contains FFFFFF...". The ECC bit pattern for that page may not be compatible with "Version 39191 of sector 19543 contains 123456...". – supercat Dec 10 '11 at 19:16
  • 1
    Hmm. It's worse than I thought... in direct-to-disk, no-fs-buffer tests, writing the exact same contents to a block took the same time as writing altered blocks (~10 seconds for 1000 64KB records). I could also determine that my particular SD card's erase block is very likely 64KB. – darron Dec 10 '11 at 20:59
1

The emmcparm utility from Micron gives you a view of erase block counts.

You can use it with -E to get a summary:

# emmcparm -E /dev/mmcblk0
Device file = /dev/mmcblk0
EXT_CSD revision [192] = 1.8 (for MMC v5.1)

Feature name: Erase Count
                      Min | Max | Ave
Global erase count:     0 |  12 |  10
Enhanced area (SLC):    0 |   1 |   0
Normal area (MLC):      1 |  12 |  11

Or you can use it with -e to get a per-block print out of erase counts:

# emmcparm -e /dev/mmcblk0
Device file = /dev/mmcblk0
EXT_CSD revision [192] = 1.8 (for MMC v5.1)

Feature name: Block Erase Count
Block#  Erase#  Type
     7      30  -
     8      38  -
    10      16  -
    11      31  -
    12      28  -
    13      30  -
    14      25  -
    15      42  -
    16       1  -
    17       1  -
    18       0  -
    19       0  -
    20       0  -
    21       0  -
    22       0  -
   ...

Unfortunately, this utility is likely tied to Micron storage devices.

The mmc utility from mmc-utils can also be used: mmc extcsd read /dev/mmcblk0 | grep -i life reports remaining eMMC life in grades of 10%, and is likely more widely applicable than emmcparm.

Neal