21

I have a hunch that a certain intermittent bug only manifests itself when disk reads are slow. Troubleshooting is difficult because I can't reliably reproduce it.

Short of simply gobbling up I/O with a high-priority process, is there any way for me to simulate having a slow hard drive?

ændrük
  • 78,496

10 Answers

17

Use nbd, the Network Block Device, and then rate-limit access to it using, say, trickle.

sudo apt-get install nbd-client nbd-server trickle
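One possible arrangement is sketched below (untested; the paths, port, and the old one-shot nbd-server syntax are assumptions — newer nbd-server versions expect exports declared in /etc/nbd-server/config). Note that trickle shapes traffic by preloading socket wrappers into a dynamically linked program, so wrapping the server process is the more promising direction: nbd-client hands its socket off to the kernel, where trickle can't see it.

```shell
# create a 512 MiB backing file for the export
dd if=/dev/zero of=/tmp/slowdisk.img bs=1M count=512

# serve it on loopback, with trickle capping the server's socket
# throughput at roughly 256 KB/s in each direction
trickle -s -d 256 -u 256 nbd-server 127.0.0.1:10809 /tmp/slowdisk.img

# attach the export as /dev/nbd0, then use it like any block device
sudo nbd-client 127.0.0.1 10809 /dev/nbd0
sudo mkfs.ext4 /dev/nbd0
sudo mount /dev/nbd0 /mnt/slowdisk
```

Anything your program reads or writes under /mnt/slowdisk then goes through the throttled device.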
poolie
  • 9,358
4
# echo 1 > /proc/sys/vm/drop_caches

That'll slow you down :)

It'll force you to read from disk, instead of taking advantage of the page cache.
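To see the effect, you can time the same read before and after dropping the cache (a sketch; the file path is made up, and writing to drop_caches needs root, so `sudo tee` is used — `sync` first so dirty pages are flushed):

```shell
# create a test file and read it once so it lands in the page cache
dd if=/dev/urandom of=/tmp/testfile bs=1M count=256
time dd if=/tmp/testfile of=/dev/null bs=1M   # fast: served from cache

# flush dirty pages, then drop the page cache
sync
echo 1 | sudo tee /proc/sys/vm/drop_caches

time dd if=/tmp/testfile of=/dev/null bs=1M   # slower: must hit the disk again
```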

If you really wanted to get sophisticated, you could do something like faking a read error every nth time using the SCSI fault injection framework.

http://scsifaultinjtst.sourceforge.net/

ppetraki
  • 5,531
3

Have a USB 1.1 hub? Or a slow SD card? They'll get you down to under 10 Mbit/s.

Oli
  • 299,380
3

This is by no means a complete solution, but it may help in conjunction with other measures: There is an I/O scheduler much like a process scheduler, and it can be tweaked.

Most notably, you can actually choose amongst different schedulers:

~# cat /sys/block/sda/queue/scheduler 
noop anticipatory deadline [cfq] 
~# echo "deadline" > /sys/block/sda/queue/scheduler
~# cat /sys/block/sda/queue/scheduler 
noop anticipatory [deadline] cfq 
~# 

deadline may help you get more strongly reproducible results.

noop, as its name implies, is insanely dumb, and will enable you to wreak absolute havoc on I/O performance with little effort.

anticipatory and cfq both try to be smart about it, though cfq is generally the smarter of the two. (As I recall, anticipatory is actually the legacy scheduler from right before the kernel started supporting multiple schedulers.)
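Beyond picking a scheduler, the same queue directory exposes other knobs that can make a disk noticeably worse on purpose. A sketch (the device name sda and the values are assumptions; the minimum accepted nr_requests varies by kernel):

```shell
# shrink the request queue so fewer I/Os can be merged or reordered
echo 4 | sudo tee /sys/block/sda/queue/nr_requests

# disable readahead to hurt sequential read throughput
echo 0 | sudo tee /sys/block/sda/queue/read_ahead_kb
```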

3

You can use a virtual machine and throttle its disk access. See section 5.8 of the VirtualBox manual, "Limiting bandwidth for disk images": https://www.virtualbox.org/manual/ch05.html#storage-bandwidth-limit
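In VirtualBox the limit is applied through a bandwidth group via `VBoxManage` (a sketch; the VM name "TestVM", the controller name "SATA", and the disk path are assumptions):

```shell
# create a disk bandwidth group capped at 10 MB/s
VBoxManage bandwidthctl "TestVM" add SlowDisk --type disk --limit 10M

# attach the VM's disk to that group
VBoxManage storageattach "TestVM" --storagectl "SATA" --port 0 --device 0 \
    --type hdd --medium /path/to/disk.vdi --bandwidthgroup SlowDisk

# the limit can be changed while the VM is running, e.g. down to 2 MB/s
VBoxManage bandwidthctl "TestVM" set SlowDisk --limit 2M
```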

Adi Roiban
  • 2,942
1

Apart from trying to slow down the hard drive itself, you could try using filesystem benchmarking tools such as bonnie++, which can cause a great deal of disk I/O.

sudo apt-get install bonnie++
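A typical invocation looks like this (a sketch; the target directory is an assumption, and `-s` should be at least twice your RAM in MB so the page cache can't absorb the whole run):

```shell
# run the write/read/seek benchmark in the given directory, as the current user
bonnie++ -d /mnt/testdir -s 4096 -u "$USER"
```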
Zanna
  • 72,312
ajmitch
  • 18,999
1

You could try copying a large file, such as an ISO of the Ubuntu install CD, and running two copies at once. That should slow your drive down quite a bit.
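A minimal sketch of that idea, using a throwaway file in /tmp instead of a real ISO so it finishes quickly:

```shell
# generate a 64 MiB throwaway file (a real ISO would be far larger)
dd if=/dev/zero of=/tmp/big.img bs=1M count=64 2>/dev/null

# run two copies concurrently to keep the disk busy, then wait for both
cp /tmp/big.img /tmp/copy1.img &
cp /tmp/big.img /tmp/copy2.img &
wait

ls -l /tmp/big.img /tmp/copy1.img /tmp/copy2.img
```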

RolandiXor
  • 51,797
1

I have recently figured out a setup where I've

  • moved the directory to my Google Drive
  • mounted it via the super-duper-slow client google-drive-ocamlfuse
  • created a symlink from the original path to the new one
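The steps above boil down to a move plus a symlink. A sketch with throwaway paths (a real setup would use the google-drive-ocamlfuse mount point instead of /tmp/slowmount):

```shell
# stand-in directories; in practice /tmp/slowmount would be the FUSE mount
mkdir -p /tmp/project/data /tmp/slowmount

# move the directory onto the slow mount
mv /tmp/project/data /tmp/slowmount/data

# symlink so the original path keeps working, but every access is now slow
ln -s /tmp/slowmount/data /tmp/project/data
ls -l /tmp/project/data
```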

If 16 seconds of latency is not slow enough, you can just unplug your router.

For reference, here is the original use case, where I got the idea for this: https://github.com/goavki/apertium-apy/pull/76#issuecomment-355007128

0

Why not run iotop and see whether the process that you are trying to debug is causing lots of disk reads/writes?
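For example (iotop needs root, and the PID below is a placeholder):

```shell
# show only processes that are actually doing I/O, with accumulated totals
sudo iotop -o -a

# or watch a single suspect process
sudo iotop -o -p 1234
```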

Eliah Kagan
  • 119,640
-1

How about `make -j64`? In the articles describing that ~200-line kernel performance patch, `make -j64` was the task used to eat a lot of the computer's resources.

Praweł
  • 6,536