
My root disk is full to the brim, I suspect because of disk space locked up by a ballooning .xsession-errors file. The ballooning is caused by running processes that keep the error file open and keep dumping data into it; the writers are several different applications, with chromium being the largest culprit. I suspect this because lsof | grep deleted returns lines like:

chromium- 27607  user  2w  REG 8,1 1809493864448  108527952 /home/user/.xsession-errors (deleted)
chromium- 27762  user  2w  REG 8,1 1809493864448  108527952 /home/user/.xsession-errors (deleted)
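
For completeness, lsof can list these directly: its +L1 option restricts the output to files whose link count is below one, i.e. files that have been unlinked but are still held open. For example:

# List open-but-deleted files without grepping lsof's full output.
lsof +L1 | grep xsession-errors

# Per-process view: each entry in /proc/<PID>/fd is a symlink to the
# file the descriptor points at, with "(deleted)" appended if unlinked.
# 27607 is one of the chromium PIDs from the output above.
ls -l /proc/27607/fd | grep deleted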

The twist here is that I have a cron job set to delete the file /home/user/.xsession-errors, as per a suggested workaround to this issue (see the sketch after the df output below for why that backfires). You can imagine how quickly this situation runs amok when chromium opens up umpteen processes! I am using a 64-bit Ubuntu 12.04 machine with the following HD (EXT4) config:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       1.8T   34G  1.7T   2% /
udev             12G  4.0K   12G   1% /dev
tmpfs           4.8G  1.2M  4.8G   1% /run
none            5.0M   16K  5.0M   1% /run/lock
none             12G  2.1M   12G   1% /run/shm
/dev/sde1       1.8T  450G  1.3T  26% /media/SEA2T
/dev/sdd1       2.7T  201M  2.6T   1% /media/BUFF3T
/dev/sdb        3.6T  118G  3.3T   4% /media/INDAR
/dev/sdc        3.6T  3.0T  469G  87% /media/ALAYA
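
To spell out why the cron job backfires: deleting (unlinking) a file that running processes still hold open never frees its space, because the inode survives until the last descriptor is closed; meanwhile a fresh .xsession-errors appears and the cycle repeats. If I understand correctly, truncating in place instead would avoid this, since the writers keep the same inode and the blocks are genuinely released. A sketch of such a crontab entry (the hourly schedule is just an example, and this only stays clean if the writers opened the file in append mode, which the Xsession redirection normally does):

# Truncate in place instead of deleting; no orphaned "(deleted)"
# copy is left behind because the inode never goes away.
0 * * * * truncate -s 0 /home/user/.xsession-errors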

What I've tried so far to resolve this, in vain:

  1. Is it possible to reclaim this space? Apparently not in my case, though others have managed to truncate the file to free up the disk.
  2. As this seems to be a sort of phantom occurrence, with no real file as the culprit, rebooting was the only option that worked for me.
  3. How to ensure this doesn't happen again? I still don't know. The current workaround is setting the ERRFILE variable in the file /etc/X11/Xsession to /tmp/$USER-xsession-errors, in order to figure out what is being dumped to this error file (sketched just below).
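
For reference, a sketch of that edit (the stock definition, if I recall the script correctly, sets ERRFILE to $HOME/.xsession-errors):

# /etc/X11/Xsession (excerpt)
# ERRFILE=$HOME/.xsession-errors       # stock definition
ERRFILE=/tmp/$USER-xsession-errors     # workaround: log to /tmp instead

Note that /tmp is typically cleared on boot, so old logs will not survive a reboot.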

I appreciate any suggestions as to how to deal with the runaway .xsession-errors file once and for all! Thanks in advance.

Kambiz

1 Answer


You might be able to locate the file through ls -l /proc/<PID>/fd/* and, once you've determined the fd number, truncate it in place with truncate /proc/<PID>/fd/<fd> --size 0 (dangerous). That's the alternative to rebooting or killing the process. However, it's hard to tell what will happen on subsequent writes to such a mutilated file.
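
Concretely, with one of the chromium PIDs from the question (27607): the "2w" in lsof's FD column already says it is descriptor 2, open for writing, so the sketch would be:

# Confirm which descriptor points at the "(deleted)" file:
ls -l /proc/27607/fd

# Truncate the still-open file through the proc symlink. The blocks
# are freed immediately, but the process is not notified -- hence the risk.
truncate --size 0 /proc/27607/fd/2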

What you really should do is find out what is writing to that file and why, and take whatever steps are necessary to stop it. Even ignoring storage issues, writing copious debug logs is expensive and hurts performance, so it is worth finding the root cause.
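
For example, assuming the log has been redirected back to a normal (non-deleted) file:

# Which processes currently hold the log open:
lsof /home/user/.xsession-errors

# Watch what is being appended, to spot the noisy application:
tail -f /home/user/.xsession-errors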