df

Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/vda1       30830588 22454332   6787120  77% /
none                   4        0         4   0% /sys/fs/cgroup
udev             1014124        4   1014120   1% /dev
tmpfs             204996      336    204660   1% /run
none                5120        0      5120   0% /run/lock
none             1024976        0   1024976   0% /run/shm
none              102400        0    102400   0% /run/user

That 77% was just 60% yesterday, and at this rate it will hit 100% in a few days.

I've been monitoring file sizes for a while now:

sudo du -sch /*


9.6M    /bin
65M     /boot
224K    /build
4.0K    /dev
6.5M    /etc
111M    /home
0       /initrd.img
0       /initrd.img.old
483M    /lib
4.0K    /lib64
16K     /lost+found
8.0K    /media
4.0K    /mnt
4.0K    /opt
du: cannot access ‘/proc/21705/task/21705/fd/4’: No such file or directory
du: cannot access ‘/proc/21705/task/21705/fdinfo/4’: No such file or directory
du: cannot access ‘/proc/21705/fd/4’: No such file or directory
du: cannot access ‘/proc/21705/fdinfo/4’: No such file or directory
0       /proc
21M     /root
336K    /run
12M     /sbin
8.0K    /srv
4.1G    /swapfile
0       /sys
4.0K    /tmp
1.1G    /usr
7.4G    /var
0       /vmlinuz
0       /vmlinuz.old
14G     total

It's been giving me (more or less) the same numbers every day. That 14G total is less than half the disk size. Where is the rest going?

My Linux knowledge does not go a lot deeper.

Is it possible for files to not show up here? Is it possible for space to be allocated in some other way?

nizzle

3 Answers


If there's an invisible growth in disk space, a likely culprit would be deleted files. In Windows, if you try to delete a file opened by something, you get an error. In Linux, the file will be marked as deleted, but the data will be retained until the application lets go. In some cases, this can be used as a neat way to clean up after yourself - application crashes won't prevent temporary files from being cleaned.

To look at deleted, still-used files:

lsof -b 2>/dev/null | grep deleted

You may have a large number of deleted files; that in itself is not a problem. A single deleted file that keeps growing is.
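To see how much space those deleted files are pinning down in total, you can sum the SIZE/OFF column of the lsof output (a sketch, assuming the default lsof column layout where field 5 is TYPE and field 7 is SIZE/OFF; a file held open by several processes will be counted once per process, so this can overcount):

```shell
# Sum the sizes of deleted-but-open regular files.
lsof -b 2>/dev/null |
  awk '/\(deleted\)/ && $5 == "REG" {sum += $7}
       END {printf "%.1f MiB held by deleted files\n", sum / 1024 / 1024}'
```

If the total here is in the gigabytes, you have found your df/du gap.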

A reboot should fix this, but if you don't want to reboot, check the applications involved (first column in the lsof output) and restart or close the reasonable-looking ones.
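If even restarting the process isn't an option (a production daemon, say), you can reclaim the space by truncating the deleted file through the process's entry in /proc, using the PID and FD columns from lsof. A self-contained sketch of the mechanism, using the current shell ($$) and fd 3 as stand-ins for a real PID and FD:

```shell
# Open fd 3 on a file, fill it, then delete it: the space stays allocated
# because the shell still holds the fd.
exec 3> /tmp/held.log
dd if=/dev/zero bs=1M count=10 >&3 2>/dev/null
rm /tmp/held.log
ls -l /proc/$$/fd/3     # the link target shows up as "... (deleted)"

# Truncate through /proc to free the space without closing the fd.
: > /proc/$$/fd/3
exec 3>&-               # finally release the fd
```

Against a real process it would be `: > /proc/<PID>/fd/<FD>`. Note that a process writing without O_APPEND may keep writing at its old offset afterwards, leaving a (harmless, sparse) hole.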

If you ever see something like:

zsh   1724   muru   txt   REG   8,17   771448   1591515  /usr/bin/zsh (deleted)

Where the application and the deleted files are the same, that probably means the application was upgraded. You can ignore those as a source of large disk usage (but you should still restart the program so that bug-fixes apply).

Files in /dev/shm are shared memory objects and don't occupy much space on disk (an inode number at most, I think). They can also be safely ignored. Files named vteXXXXXX are log files from a VTE-based terminal emulator (like GNOME Terminal, Terminator, etc.). These could be large, if you have a terminal window open with lots (and I mean lots) of stuff being output.

muru

To add to the excellent answer by muru:

  • df shows the space used on the disk,
  • and du shows the total size of the files' contents.

Maybe what you don't see with du is the appearance of many, many small files... (look at the IUsed column of df -i and see whether the number of inodes, i.e. of files, also increases a lot over time)
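For example, comparing inode usage with block usage on the root filesystem (IUsed and IUse% are the columns to watch from day to day):

```shell
# Inode usage: a climbing IUse% with flat du output means
# many tiny files are accumulating somewhere.
df -i /
# Block usage, for comparison.
df -h /
```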

If you happen to have, say, 1'000'000 (1 million) tiny 1-byte files, du will count that as 1'000'000 bytes total, i.e. roughly 1 MB (purists, please don't cringe)

But on disk, each file is made of 2 things:

  • 1 inode (pointing to the file's data), which itself takes space on disk (typically 128 or 256 bytes, depending on the filesystem),
  • And the file's data (= the file's content), which is placed in disk blocks; those blocks can't usually hold data from several files, so your 1 byte of data will occupy at least 1 whole block

Thus, a million 1-byte files will occupy 1'000'000 * size_of_a_block of space for the data, plus 1'000'000 * size_of_an_inode for the inodes... That can amount to several GB of disk usage for 1 million "1-byte" files.

If you have 1024-byte blocks and 256-byte inodes, your 1'000'000 files will be reported as roughly 1 MB by du --apparent-size (plain du already counts whole blocks), but will consume roughly 1.25 GB on disk (as seen by df)!
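You can see this effect on a small scale by comparing du's two modes over a directory of tiny files (a sketch; /tmp/inode-demo is just a scratch path, and the exact on-disk figure depends on your filesystem's block size):

```shell
# Create 1000 one-byte files, then compare the summed file contents
# with the space actually allocated on disk.
mkdir -p /tmp/inode-demo
for i in $(seq 1 1000); do printf x > "/tmp/inode-demo/f$i"; done
du -sh --apparent-size /tmp/inode-demo  # content: a few KB at most
du -sh /tmp/inode-demo                  # on disk: roughly one block per file
rm -r /tmp/inode-demo
```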


If /dev/vda1 is filling up, it might be caused by something like Jenkins or Docker writing logs, and you might have to use the lsof command to find the open log files, clean them up, and cap their size.
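To see what is actually growing, walking /var (the 7.4G entry in the question's du output, where Docker images and Jenkins workspaces commonly live) a couple of levels deep is a good start; -x keeps du from crossing into other filesystems:

```shell
# Show the ten largest second-level directories under /var,
# staying on one filesystem.
sudo du -xh --max-depth=2 /var 2>/dev/null | sort -h | tail -n 10
```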

T.Todua