
I have the following problem:

df -h shows:

ubuntu@:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/xvda1              32G  9.6G   21G  33% /
udev                   819M   12K  819M   1% /dev
tmpfs                  331M  200K  331M   1% /run
none                   5.0M     0  5.0M   0% /run/lock
none                   827M     0  827M   0% /run/shm

Here I can see that I have 21 GB available on the 32 GB / partition.

However, when I try df -i, I get:

ubuntu@:~$ df -i
Filesystem             Inodes   IUsed  IFree IUse% Mounted on
/dev/xvda1            2097152 2096964    188  100% /
udev                   209564     382 209182    1% /dev
tmpfs                  211573     274 211299    1% /run
none                   211573       4 211569    1% /run/lock
none                   211573       1 211572    1% /run/shm

Here I see the usage is 100%. I'm not sure why the usage is reported so differently by the two commands.

Secondly, the /root folder looks very strange to me:

ubuntu@:/$ ls -al
total 132240
drwxr-xr-x  24 root root      4096 Jan  8 15:04 .
drwxr-xr-x  24 root root      4096 Jan  8 15:04 ..
drwxr-xr-x   2 root root      4096 Mar 27  2013 bin
drwxr-xr-x   3 root root      4096 Mar 21  2013 boot
drwx------   5 root root 135319552 Apr  1 14:11 root
drwxr-xr-x  18 root root       640 Apr  1 14:03 run
drwxr-xr-x   2 root root      4096 Mar 27  2013 sbin

Why is the /root directory so huge? If I go into the /root directory and type ls, the terminal does not even respond. I'm confused by this.

Any suggestions will be helpful.

Thanks.


2 Answers

For your first question: df -h reports the filesystem's disk space usage in human-readable form, meaning sizes are shown in KB, MB, or GB rather than plain bytes, whereas df -i reports inode usage instead of block usage.

An inode, in layman's terms, holds the data about a file (its metadata). Some space in the filesystem is reserved for storing this information, so if that space is full you cannot create new files even though free disk space remains: there is nowhere left to record the data about the file you want to store.
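
A quick way to see both views side by side, plus the total number of inodes the filesystem was created with (this assumes an ext2/3/4 filesystem on /dev/xvda1, as your df output suggests):

ubuntu@:~$ df -h /                                      # block (disk space) usage
ubuntu@:~$ df -i /                                      # inode usage
ubuntu@:~$ sudo tune2fs -l /dev/xvda1 | grep -i inode   # total/free inode counts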

Secondly, you need superuser permissions to see what is stored in the /root folder, so I am not sure how you got into it. The reason ls fails is that it does not have permission to read the contents of /root and report them to you. To see why /root occupies so much space, you will need superuser permissions to investigate; without them, I am not sure you can read what is inside at all.

A directory's size is normally 4096 bytes because that is the minimum space required to store information about its contents. If it exceeds this size, the directory holds so many entries that recording them requires that much space.

I would recommend investigating the contents of the /root folder, since that seems to be the reason your df -i shows 100% utilization.
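
As a side note: on a directory with a huge number of entries, plain ls appears to hang because it reads and sorts every entry before printing anything. A sketch of how to inspect it anyway (needs superuser permissions; ls -f disables sorting):

ubuntu@:~$ sudo ls -f /root | head -20          # print the first few entries without sorting
ubuntu@:~$ sudo find /root -maxdepth 1 | wc -l  # count the entries instead of listing them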

jobin
  • 28,567

Alright, I figured out the problem and here is the explanation:

We have some cron jobs that run PHP scripts, and we called them with wget -q. For some reason this creates a log file in /root named after the PHP script, and each file is just 0 bytes.

Regarding the files being named *.php: by default wget saves whatever it downloads to a file named after the last part of the URL, which in this case is the PHP script's name. We still need to fix the way the cron jobs run; since I cleaned things up yesterday, a few thousand new files have already been created :/
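
For reference, a sketch of the crontab change this needs (the URL and schedule are placeholders, not our actual job): tell wget to discard the response instead of saving it to a file.

# before: each run saves a 0-byte file named after the script in the cron user's home
* * * * * wget -q http://example.com/script.php

# after: the response goes to /dev/null and no file is created
* * * * * wget -q -O /dev/null http://example.com/script.php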

The cron job ran about 10 scripts every minute, which means 10 new files every minute. Over a year that comes to 10 x 60 x 24 x 365 = over 5.2 million files, which explains the exceptionally large directory size (130 MB) of /root.
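
A quick sanity check of those numbers in the shell (the second line suggests roughly 25 bytes of directory entry per file, which is plausible for short ext filenames):

ubuntu@:~$ echo $((10 * 60 * 24 * 365))      # 5256000 files per year
ubuntu@:~$ echo $((135319552 / 5256000))     # ~25 bytes per directory entry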

This also explains why df -h showed available space while no inodes were left to store the new filenames.

Then, using find . -name '*.php*' | xargs rm -v, I deleted all the PHP log files. It took almost 30 minutes, and everything is fine now: inode usage is down to just 9%!
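
As an aside, GNU find can delete the matches itself, which avoids spawning rm and copes with odd filenames (a sketch of the same clean-up, run against /root):

ubuntu@:~$ sudo find /root -maxdepth 1 -name '*.php*' -delete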

/root is back to normal; however, the 130 MB directory size itself remains unchanged, and I am not sure it ever will shrink. It is not a major issue for now. I still have to fix the crontab bit, and we have a year to do it :)
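
That is expected on ext filesystems: a directory's own size grows as entries are added but is not shrunk when they are removed. If the 130 MB ever bothers you, e2fsck can re-pack directories, but only on an unmounted filesystem (e.g. from a rescue system), so treat this as a sketch:

ubuntu@:~$ sudo e2fsck -f -D /dev/xvda1   # -D optimizes (re-packs) directories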

Anwar
  • 77,855