34

I want to create a large file (~10 GB) filled with zeros and random values. I have tried using:

dd if=/dev/urandom of=10Gfile bs=5G count=10

It creates a file of about 2 GB and exits with exit status 0. I fail to understand why.

I also tried creating file using:

head -c 10G </dev/urandom >myfile

It takes about 28-30 minutes to create, but I want it done faster. Does anyone have a solution?

I would also like to create multiple files with the same (pseudo-)random pattern so I can compare them. Does anyone know a way to do that?

egeek

6 Answers

28

How about using fallocate? This tool lets you preallocate space for a file (if the filesystem supports this feature). For example, to allocate 5 GB to a file called 'example', run:

fallocate -l 5G example

This is much faster than generating the data with dd, because the space is allocated without actually having to write it.
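If you want to confirm what was allocated, ls shows the apparent size and du the space used on disk (using the 'example' file from the command above):

ls -lh example   # apparent size, should show 5.0G
du -h example    # space actually allocated on disk

Unlike a sparse file, a file created with fallocate really reserves the blocks, so both numbers will be close to 5 GB.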

21

You can use dd to create a file consisting solely of zeros. Example:

dd if=/dev/zero of=zeros.img count=1 bs=1 seek=$((10 * 1024 * 1024 * 1024 - 1))

This is very fast because only one byte is actually written to the physical disk; the rest of the file is a hole, i.e. the file is sparse. However, some file systems do not support sparse files.
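As an aside (not part of this answer's approach), GNU coreutils also ships truncate, which produces the same kind of sparse zero-filled file with a simpler command line:

truncate -s 10G zeros.img   # sets the size to 10 GiB without writing any data; creates the file if it does not exist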

If you want to create a file containing pseudo-random contents, run:

dd if=/dev/urandom of=random.img count=1024 bs=10M

I suggest using 10M as the block size (bs): it is not so large that it strains memory, but still big enough to keep the number of read/write operations low. The command should be pretty fast, but the speed ultimately depends on your disk speed and processing power.
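If you want to see which block size actually performs best on your own hardware, a simple way is to time two smaller runs and compare (the filenames and the 1000 MiB test size below are just for illustration):

time dd if=/dev/urandom of=bs_test_1M.img bs=1M count=1000
time dd if=/dev/urandom of=bs_test_10M.img bs=10M count=100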

xiaodongjie
8

Using dd, this should create a 10 GB file filled with random data:

dd if=/dev/urandom of=test1 bs=1M count=10240

count is the number of blocks; since bs=1M, 10240 blocks of 1 MiB give 10 GiB.

Source: Stack Overflow - How to create a file with a given size in Linux?
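A small addition (not from the original source): recent GNU dd versions can report progress while a long write like this runs, which is handy for a 10 GB file:

dd if=/dev/urandom of=test1 bs=1M count=10240 status=progress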

Alaa Ali
3

This question was asked 5 years ago. I just stumbled across it and wanted to add my findings.

If you simply use

dd if=/dev/urandom of=random.img count=1024 bs=10M

it will already be significantly faster, as explained by xiaodongjie. But you can make it even faster by using eatmydata:

eatmydata dd if=/dev/urandom of=random.img count=1024 bs=10M

What eatmydata does is turn fsync (and related sync calls) into no-ops, which makes the disk writes faster.

You can read more about it at https://flamingspork.com/projects/libeatmydata/.
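On Ubuntu the eatmydata wrapper is normally available from the standard repositories; the package name below is the usual one, but check your release:

sudo apt install eatmydata
eatmydata dd if=/dev/urandom of=random.img count=1024 bs=10M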

1

Answering the first part of your question:

Trying to write a 5 GB buffer at a time is not a good idea, because your kernel most likely does not support a single transfer that large; that is why dd produced a much smaller file than requested yet still exited with status 0. It would not give you any performance benefit in any case; about 1M at a time is a sensible maximum block size.
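As a related note (an addition, not part of this answer): if you want dd to deliver exactly bs × count bytes from /dev/urandom even when individual reads come back short, GNU dd's iflag=fullblock retries short reads until each block is full:

dd if=/dev/urandom of=10Gfile bs=1M count=10240 iflag=fullblock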

cprn
0

Old topic - but gold topic :-)

For those looking for an easy recipe:

The fast way to create image files is to opt for sparse files:

8 GiB example:

dd if=/dev/zero of=sparse_gib.img bs=1G count=0 seek=8

8 GB example:

dd if=/dev/zero of=sparse_gb.img bs=100000 count=0 seek=$[10000*8]

Note: Using /dev/random (or /dev/urandom) as the input makes no sense in this case, as no bytes are actually written - the file size is simply set and the content is one big hole that reads back as zeros.
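You can verify that the sparse file occupies almost no disk space by comparing the apparent size with the allocated size (using the file created above):

du -h --apparent-size sparse_gib.img   # reports 8.0G
du -h sparse_gib.img                   # reports (close to) zero, since only a hole was created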

If sparse is not an option and you need all the bytes of an image file physically written to your disk, you'll have to take the slow route:

8 GiB example:

dd if=/dev/zero of=image_gib.img bs=1M count=$[1024*8]

8 GB example:

dd if=/dev/zero of=image_gb.img bs=10000 count=$[100000*8]

If you're actually writing all the bytes to disk, you can write random bytes instead of zeros: just replace /dev/zero with /dev/urandom.
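For example, the 8 GiB variant with random contents would look like this (same sizes as above, only the input device changes):

dd if=/dev/urandom of=image_gib.img bs=1M count=$[1024*8]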

Keep in mind that empty space filled from /dev/urandom (rather than with zeros) cannot be compressed effectively afterwards.

Keko