Increasing stripe_cache_size doesn't make sense for every RAID level; it has no effect on RAID0 or RAID1, for example. If you run RAID5 or RAID6, check the current value:
$ cat /sys/block/md1/md/stripe_cache_size
256
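To try a larger value immediately (the change does not survive a reboot), write to the same sysfs attribute. A minimal sketch, assuming the array from above is md1:

```shell
# Temporarily raise the stripe cache to 4096 pages (assumes the array is md1)
echo 4096 | sudo tee /sys/block/md1/md/stripe_cache_size
```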
In order to make the change permanent, create a config file, e.g. /etc/udev/rules.d/50-mdadm.rules:
SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", TEST=="md/stripe_cache_size", ATTR{md/stripe_cache_size}="4096"
Then reload the udev rules and trigger them:
udevadm control --reload-rules && udevadm trigger
Also check stripe_size, which can only be set when the array is created.
$ cat /sys/block/md1/md/stripe_size
4096
Note that raising the cache size from 256 to 8192 is a rather big change. The maximum value is 32768, but in most cases you don't need more than 4096. Memory usage is number_of_disks * stripe_cache_size * page_size (4096 bytes):
stripe_cache_size equal to 256:
- 3 disks:
3*256*4096 = 3 MiB RAM
- 5 disks:
5*256*4096 = 5 MiB RAM
stripe_cache_size equal to 8192:
- 3 disks:
3*8192*4096 = 96 MiB RAM
- 5 disks:
5*8192*4096 = 160 MiB RAM
stripe_cache_size equal to 32768:
- 3 disks:
3*32768*4096 = 384 MiB RAM
- 5 disks:
5*32768*4096 = 640 MiB RAM
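The figures above follow directly from the formula. As a quick sketch, a small helper function (the 4096 here is the page size, not the stripe_size attribute):

```shell
# RAM consumed by the stripe cache, in MiB:
#   disks * stripe_cache_size * page_size (4096 bytes)
stripe_cache_mib() {
    local disks=$1 cache_size=$2
    echo $(( disks * cache_size * 4096 / 1024 / 1024 ))
}

stripe_cache_mib 3 256     # 3 MiB
stripe_cache_mib 5 8192    # 160 MiB
stripe_cache_mib 5 32768   # 640 MiB
```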
Depending on your workload, too many simultaneous requests can easily eat all your RAM. This article explains how to choose an optimal stripe size.