
I've set up a RAID 5 system at home for the first time. I added 3 disks to the PC and partitioned each into one 8 GB partition and one partition of the remaining 492 GB, then created two RAID 5 volumes, one from each set of matching partitions.
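
Something along these lines (the exact device names are from memory, so take them as approximate):

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2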

I set up the 16 GB array as swap and mounted the other one as the system disk.

This worked like a charm. Then I added a 4th disk and did the same thing: partitioned it into 2 partitions and added both to the previously defined RAID 5 arrays (mdadm --add /dev/sdd1 etc...).
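
So, something like this (assuming the new disk came up as /dev/sdd):

mdadm --add /dev/md0 /dev/sdd1
mdadm --add /dev/md1 /dev/sdd2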

Then I ran the mdadm --grow /dev/md0 --raid-devices=4 command (ref: http://zackreed.me/articles/48-adding-an-extra-disk-to-an-mdadm-array).

Both arrays got rebuilt, but they're not using the new size, and my system's performance has dropped terribly.

I've tried booting into recovery mode and running the resize2fs /dev/md1 command, but I keep getting "read-only" errors.

How can I fix this? A normal boot keeps my main disk in "busy" status, and I haven't dared to do a forced unmount.

What can I do to get my performance back up?

If any more information is needed, do tell and I'll supply what I can. Thanks in advance.


1 Answer


Make sure you (1) unmount the file system for each array, and then (2) run:

e2fsck -f -v /dev/mdx

Where x is the array in question. You should then be able to resize the file system with resize2fs.
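
For example, from a live CD or recovery shell where the array isn't mounted (assuming the system array is /dev/md1, as in your case):

umount /dev/md1
e2fsck -f -v /dev/md1
resize2fs /dev/md1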

In my experience, however, growing arrays (I grew one stepwise from 3+1 to ultimately 8+2 with 1 TB disks over 2+ years) does seem to impact performance. This is possibly because things like stripe width depend on the number of data disks (i.e., n-1 for RAID 5, n-2 for RAID 6) and thus will no longer be optimized for the new array.

Stripe width = [stride size] × [number of data disks]
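
As a worked example (the chunk and block sizes here are just assumed values): with a 512 KiB chunk size and 4 KiB file system blocks, stride = 512 / 4 = 128; with 4 disks in RAID 5 there are 3 data disks, so stripe width = 128 × 3 = 384.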

You'll want to address this with tune2fs afterwards, with something like:

tune2fs -E stride=n,stripe-width=m /dev/mdx
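
With the assumed numbers from the example above, that would be:

tune2fs -E stride=128,stripe-width=384 /dev/md1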

See this for more detail: https://raid.wiki.kernel.org/index.php/RAID_setup
