How can I check the performance of a hard drive (either via terminal or GUI)? I'd like to know the write speed, the read speed, cache size and speed, and random access speed.
8 Answers
Terminal method
hdparm is a good place to start.
sudo hdparm -Tt /dev/sda
/dev/sda:
Timing cached reads: 12540 MB in 2.00 seconds = 6277.67 MB/sec
Timing buffered disk reads: 234 MB in 3.00 seconds = 77.98 MB/sec
sudo hdparm -v /dev/sda will give information as well.
dd will give you information on write speed.
If the drive doesn't have a file system (and only then), use of=/dev/sda; note that this writes directly to the device and will destroy any data on it.
Otherwise, mount it, write a test file to it (/tmp in the example below), and then delete the file.
dd if=/dev/zero of=/tmp/output bs=8k count=10k; rm -f /tmp/output
10240+0 records in
10240+0 records out
83886080 bytes (84 MB) copied, 1.08009 s, 77.7 MB/s
Graphical method
- Open the “Disks” application. (In older versions of Ubuntu, go to System -> Administration -> Disk Utility)
- Alternatively, launch the Gnome disk utility from the command line by running
gnome-disks
- Select your hard disk in the left pane.
- Now click the “Benchmark Disk...” menu item under the three-dot menu button in the right pane.
- A new window with charts opens. Click “Start Benchmark...”. (In older versions you will find two buttons instead: one for “Start Read Only Benchmark” and another for “Start Read/Write Benchmark”. Clicking either button starts the disk benchmark.)
How to benchmark disk I/O
Is there something more you want?
Suominen is right, we should use some kind of sync; but there is a simpler method, conv=fdatasync will do the job:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output
1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 3.19232 s, 126 MB/s
If you want accuracy, you should use fio. It requires reading the manual (man fio) but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:
Sequential READ speed with big blocks QD32 (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Sequential WRITE speed with big blocks QD32 (this should be near the number you see in the specifications for your drive):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Random 4K read QD1 (this is the number that really matters for real world performance unless you know better for sure):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Mixed random 4K read and write QD1 with sync (this is worst case performance you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move that much. If your device is near empty, using a file big enough to almost fill the drive will get you the worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.
However, note that for some storage media the size of the file is not as important as the total bytes written during a short time period. For example, some SSDs have significantly faster performance with pre-erased blocks, or they may have a small SLC flash area that is used as a write cache, and the performance changes once the SLC cache is full (e.g. the Samsung EVO series, which have a 20-50 GB SLC cache). As another example, Seagate SMR HDDs have about a 20 GB PMR cache area with pretty high performance, but once it gets full, writing directly to the SMR area may cut the performance to 10% of the original. The only way to see this performance degradation is to first write 20+ GB as fast as possible and continue with the real test immediately afterwards.

Of course, this all depends on your workload: if your write access is bursty, with longish delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of IO, increase both the --io_size and --runtime parameters.

Note that some media (e.g. most cheap flash devices) will suffer from such testing because the flash chips are poor enough to wear out very quickly. In my opinion, any device poor enough not to handle this kind of testing should not be used to hold any valuable data in any case. That said, do not repeat big write tests thousands of times, because all flash cells will take some level of wear from writing.
In addition, some high-quality SSD devices may have even more intelligent wear-leveling algorithms, where the internal SLC cache has enough smarts to replace data in place if it is being re-written while the data is still in the SLC cache. For such devices, if the test file is smaller than the total SLC cache of the device, the full test writes to the SLC cache only, and you get higher performance numbers than the device can sustain for larger writes. So for such devices the file size starts to matter again. If you know your actual workload, it is best to test with the file sizes you will actually see in real life. If you don't know the expected workload, using a test file size that fills about 50% of the storage device should give a good average result for all storage implementations. Of course, for a 50 TB RAID setup, doing a write test with a 25 TB test file will take quite some time!
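To pick that "about 50%" test-file size, you can ask df for the free space on the target filesystem. A minimal sketch, assuming GNU coreutils; /tmp is only a stand-in for your own mount point:

```shell
# Suggest a fio --size of about half the free space on the target
# filesystem (/tmp here is a placeholder for the mount point under test).
TARGET=/tmp
FREE_KB=$(df -k --output=avail "$TARGET" | tail -1)   # free space in KiB
HALF_MB=$(( FREE_KB / 2 / 1024 ))                     # half of it, in MiB
echo "Suggested fio --size: ${HALF_MB}m"
```

Free space is only an approximation of total device size on a non-empty filesystem, so treat the result as a starting point.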
Note that fio will create the required temporary file on its first run. It will be filled with pseudorandom data to avoid getting too-good numbers from devices that try to cheat in benchmarks by compressing the data before writing it to permanent storage. The temporary file will be called fio-tempfile.dat in the above examples and is stored in the current working directory, so you should first change to a directory that is mounted on the device you want to test. fio also supports using the raw media as the test target, but I definitely suggest reading the manual page before trying that, because a typo can overwrite your whole operating system when you use direct storage media access (e.g. accidentally writing to the OS device instead of the test device).
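Before running fio, it is worth confirming which block device actually backs the current directory, so fio-tempfile.dat lands on the drive you mean to test. A minimal sketch, assuming GNU coreutils df:

```shell
# Show which device backs the current directory, so the fio test file
# ends up on the drive you actually intend to benchmark.
cd /tmp                                   # substitute your target mount point
DEVICE=$(df --output=source . | tail -1)  # GNU df: print the backing device
echo "fio-tempfile.dat would be created on: $DEVICE"
```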
If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency of the reads and writes. The above examples all have numjobs set to 1, so the test is about a single-threaded process reading and writing (possibly with the queue depth, or QD, set with iodepth). High-end SSDs (e.g. Intel Optane 905p) should get high numbers even without increasing numjobs a lot (e.g. 4 should be enough to reach the highest spec numbers), but some "enterprise" SSDs require going to the range 32-128 to reach their spec numbers, because the internal latency of those devices is higher but the overall throughput is insane. Note that increasing numjobs to high values usually increases the resulting benchmark numbers but rarely reflects real-world performance in any way.
The Intel 905p can do the above "Mixed random 4K read and write QD1 with sync" test with the following performance: [r=149MiB/s,w=149MiB/s][r=38.2k,w=38.1k IOPS]. If you try that on anything but Optane-level hardware, your performance will be a lot less: closer to 100 IOPS instead of the 38,000 IOPS that Optane can do.
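A sweep over --numjobs can be scripted. This sketch only prints the 4K random-read command line for a few concurrency levels (run the printed commands manually, in a directory on the drive under test):

```shell
# Print, but do not run, the 4K random-read test at several concurrency
# levels; 1, 4, 32 and 128 cover the ranges discussed above.
for JOBS in 1 4 32 128; do
  echo fio --name TEST --filename=fio-tempfile.dat --rw=randread \
       --size=500m --io_size=10g --blocksize=4k --ioengine=libaio \
       --iodepth=1 --direct=1 --numjobs=$JOBS --runtime=60 --group_reporting
done
```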
I would not recommend using /dev/urandom because it is software-based and slow as a pig. Better to take a chunk of random data from a ramdisk. Random data does not matter for hard-disk testing, because every byte is written as-is (also on an SSD with dd). But if you test a deduplicated ZFS pool with pure zero or random data, there is a huge performance difference.
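A minimal sketch of that approach: generate one random chunk in /dev/shm (a tmpfs on most Linux systems) and then feed it to dd, so the slow /dev/urandom read stays outside the timed copy. Paths and sizes are only placeholders:

```shell
# Generate 64 MiB of random data once, in RAM, outside the timed path.
dd if=/dev/urandom of=/dev/shm/random-chunk bs=1M count=64 status=none
# The copy to measure: random data from RAM to the disk under test (/tmp here).
dd if=/dev/shm/random-chunk of=/tmp/output bs=1M count=64
SIZE=$(stat -c %s /tmp/output)   # bytes actually written
rm -f /dev/shm/random-chunk /tmp/output
```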
Another point of view must be the sync time inclusion; all modern filesystems use caching on file operations.
To really measure disk speed and not memory, we must sync the filesystem to get rid of the caching effect. That can be easily done by:
time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync"
With that method you get this output:
sync ; time sh -c "dd if=/dev/zero of=testfile bs=100k count=1k && sync" ; rm testfile
1024+0 records in
1024+0 records out
104857600 bytes (105 MB) copied, 0.270684 s, 387 MB/s
real 0m0.441s
user 0m0.004s
sys 0m0.124s
So the disk data rate is just 104857600 / 0.441 = 237772335 B/s, i.e. about 237 MB/s.
That is over 100 MB/s lower than with caching.
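The rate arithmetic can be reproduced with awk, dividing the bytes written by the real (wall-clock) time, which includes the sync:

```shell
# 104857600 bytes over 0.441 s of wall-clock time (dd + sync together)
RATE=$(awk 'BEGIN { print int(104857600 / 0.441) }')
echo "$RATE B/s"
```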
Happy benchmarking,
If you want to monitor the disk read and write speed in real-time you can use the iotop tool.
This is useful to get information about how a disk performs for a particular application or workload. The output will show you read/write speed per process, and total read/write speed for the server, similar to top.
Install iotop:
sudo apt-get install iotop
Run it:
sudo iotop
This tool is helpful to understand how a disk performs for a specific workload versus more general and theoretical tests.
Write speed
$ dd if=/dev/zero of=./largefile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s
Block size is actually quite large. You can try with smaller sizes like 64k or even 4k.
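A minimal sketch that repeats the write test at a few block sizes, writing ~64 MiB per pass to a throwaway file in /tmp (the file name is just a placeholder):

```shell
# Each pass writes 64 MiB with a different block size, so the per-syscall
# overhead of small blocks becomes visible in the reported speed.
for PASS in "1M 64" "64k 1024" "4k 16384"; do
  set -- $PASS                       # $1 = block size, $2 = count
  echo "bs=$1:"
  dd if=/dev/zero of=/tmp/ddtest bs=$1 count=$2 2>&1 | tail -1
done
SIZE=$(stat -c %s /tmp/ddtest)       # each pass should total 64 MiB
rm -f /tmp/ddtest
```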
Read speed
Run the following command to clear the memory cache
$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
Now read the file which was created in write test:
$ dd if=./largefile of=/dev/null bs=4k
165118+0 records in
165118+0 records out
676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s
bonnie++ is the ultimate benchmark utility I know of for Linux.
(I'm currently preparing a Linux live CD at work with bonnie++ on it, to test our Windows-based machines with it!)
It takes care of caching, syncing, random data, random locations on disk, small-size updates, large updates, reads, writes, etc. Comparing a USB key, a hard disk (rotational), a solid-state drive and a RAM-based filesystem can be very informative for a newbie.
I have no idea if it is included in Ubuntu, but you can compile it from source easily.
some hints on how to use bonnie++
bonnie++ -d [TEST_LOCATION] -s [TEST_SIZE] -n 0 -m [TEST_NAME] -f -b -u [TEST_USER]
bonnie++ -d /tmp -s 4G -n 0 -m TEST -f -b -u james
A bit more at: SIMPLE BONNIE++ EXAMPLE.
