
I have a 10-year-old 320 GB HDD which I've used as an external drive, and it traveled with me for much of those 10 years. Needless to say, it survived more than a few falls (including some while in operation) and developed some bad sectors. When I actually started to get read errors, not just SMART sector-relocation warnings, I moved everything important off it (using ddrescue for some files). Sure, I can't trust this drive anymore, but I still want to use it as copy-once storage for some movies/FLACs, to free up space on my laptop's SSD+HDD, for as long as the external drive still works. I don't mind losing some or all of these files, as I either have backups at home or can re-download them easily.

The problem is, if I format this drive and start copying files to it, somewhere around 25% in I get a write failure that forces me to unplug the USB cable (^C is not enough!). The same happens with badblocks in both read and write mode. After playing a bit with badblocks' "first block" and "last block" parameters, I found that 90%+ of the drive is OK and there are basically 3 bad block areas. With a short script I produced a text file of block numbers (yes, I didn't forget -b 4096 for badblocks) covering these areas plus plenty of extra space around them to be safe. But when I ran e2fsck -l badblocks.txt, it still hung! It seems to be trying to read those bad blocks anyway, rather than just marking them as bad and moving on. Is there any other way around this? Or maybe another filesystem (I thought about FAT, but I don't see any way to feed badblocks.txt to fsck.vfat)? Or are 4 separate partitions covering the "good" areas the best solution for this case?
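The range-to-list step described above can be sketched roughly like this (the three ranges and the safety margin are invented for illustration, not the actual values from my drive):

```shell
# Expand each suspected bad area by a safety margin and emit one
# 4 KiB block number per line -- the format e2fsck -l expects.
# Ranges are "start:end" in filesystem blocks (hypothetical values).
margin=1024
ranges="20000000:20010000 45000000:45002000 70000000:70000500"
> badblocks.txt
for r in $ranges; do
  start=${r%:*}; end=${r#*:}
  seq $((start - margin)) $((end + margin)) >> badblocks.txt
done
wc -l < badblocks.txt
```

The margin is cheap insurance: sectors adjacent to a damaged area are the most likely to fail next, and on a 320 GB drive a few thousand extra 4 KiB blocks cost almost nothing.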

Update: some quotes from the man pages to make the case clearer

man badblocks:

-i input_file Read a list of already existing known bad blocks. Badblocks will skip testing these blocks since they are known to be bad.

Thus, badblocks promises to skip the listed blocks (and it does: it doesn't hang when all the suspicious ranges are in badblocks.txt!)

man e2fsck:

-l filename Add the block numbers listed in the file specified by filename to the list of bad blocks. The format of this file is the same as the one generated by the badblocks(8) program.

There's no promise it will not try to access these blocks, though. But why the hell would it want to access them?

Note that the block numbers are based on the blocksize of the filesystem. Hence, badblocks(8) must be given the blocksize of the filesystem in order to obtain correct results. As a result, it is much simpler and safer to use the -c option to e2fsck, since it will assure that the correct parameters are passed to the badblocks program.

I'd be happy to, but it hangs on the first bad block. Plus, -c is incompatible with -l, so I can either have e2fsck scan the disk itself or mark the bad sectors manually. But why, if I choose the latter option, it still wants to access these supposedly "bad" sectors is beyond my understanding...
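Since -c and -l conflict, one possible workaround is to hand the list to mke2fs at format time instead, which also accepts -l. A minimal sketch on a throwaway image file standing in for the drive (the filename and block numbers are invented):

```shell
# Feed a prepared bad-block list to mke2fs when (re)formatting,
# instead of fighting e2fsck -l afterwards.  Demonstrated on a
# 100 MiB image file as a safe stand-in for the real disk.
printf '%s\n' 5000 5001 5002 > badblocks.txt
truncate -s 100M fake-drive.img
mke2fs -q -F -t ext4 -b 4096 -l badblocks.txt fake-drive.img
dumpe2fs -b fake-drive.img   # lists the blocks now marked bad
```

Whether mke2fs on a real failing drive also trips over the physically bad sectors while writing metadata is a separate question, but at least it never needs to read the listed blocks.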

1 Answer


The proper way to badblock your disk is either:

sudo e2fsck -fck /dev/sdc1 # read-only test

or

sudo e2fsck -fcck /dev/sdc1 # non-destructive read/write test (recommended)

The -k is important, because it preserves the previous bad block table and adds any new bad blocks to that table. Without -k, you lose all of the prior bad block information.

The -fcck options, from man e2fsck:

   -f     Force checking even if the file system seems clean.

   -c     This option causes e2fsck to use badblocks(8) program to do a
          read-only scan of the device in order to find any bad blocks.
          If any bad blocks are found, they are added to the bad block
          inode to prevent them from being allocated to a file or
          directory.  If this option is specified twice, then the bad
          block scan will be done using a non-destructive read-write test.

   -k     When combined with the -c option, any existing bad blocks in the
          bad blocks list are preserved, and any new bad blocks  found  by
          running  badblocks(8)  will  be added to the existing bad blocks
          list.
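A hedged sketch of the -k behaviour, exercised on a scratch ext4 image standing in for /dev/sdc1 (a fake bad block list is injected with -l first, since the image has no real bad sectors; the block numbers are invented):

```shell
# Inject two "bad" blocks, then rescan with -fck: the read-only
# badblocks pass finds nothing new, but -k keeps the injected
# entries in the bad block inode.
truncate -s 64M scratch.img
mke2fs -q -F -t ext4 -b 4096 scratch.img
printf '%s\n' 12000 12001 > bb.txt
e2fsck -f -y -l bb.txt scratch.img >/dev/null || true   # exit 1 = fs modified
e2fsck -f -y -c -k scratch.img >/dev/null || true
dumpe2fs -b scratch.img   # the injected blocks survive the rescan
```

Note that e2fsck exits non-zero (1) whenever it modified the filesystem, which is expected here and not an error.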
heynnema