A couple of weeks ago I was working on an older development VM that was set up with a smaller hard drive, and I started getting "unexpected end of file" error messages. I couldn't find the problem in my file, so I decided to run `git diff` on the file to see what I'd changed. I started to do this and ran into a problem:
```
user@VM:/var/www/$ ls -l <tab>
bash: cannot create temp file for here-document: No space left on device
bash: cannot create temp file for here-document: No space left on device
```
Well, that explains why I couldn't find the problem...
df and du to the Rescue
There are two tools that are helpful for troubleshooting space usage at the command line in Linux. The first is `df`, which displays the mount points of a system along with the used and available space on each. This tool is useful for quickly getting a readout of how much space is left on your system. The second is `du`, which displays the amount of space used by each file and directory. This is useful for tracking down the specific file or directory that's using a lot of space.
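For reference, the basic invocations look like this (the `-h` flag, available in GNU coreutils, prints human-readable sizes):

```bash
# Used and available space for every mounted filesystem
df -h

# Size of each top-level directory; sudo is needed to read everything
sudo du -h --max-depth=1 /
```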
The first thing I ran was `df` to see which mount points were affected.
```
user@VM:/$ df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/VM-root  7.2G  6.8G  4.0K 100% /
udev                 241M  4.0K  241M   1% /dev
tmpfs                100M  1.1M   99M   2% /run
none                 5.0M     0  5.0M   0% /run/lock
none                 249M     0  249M   0% /run/shm
none                 100M     0  100M   0% /run/user
/dev/sda1            228M   28M  188M  13% /boot
```
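One aside worth knowing: "No space left on device" can also appear when `df -h` shows plenty of free space, because the filesystem has run out of inodes rather than bytes. `df -i` reports inode usage:

```bash
# Check inode usage; IUse% at 100% causes the same error
# even when df -h reports free space
df -i
```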
Okay, so it's just the root mount point. The next step is to find what file is eating so much space. To do this I'll employ `du`. The first step is to `cd` to `/` so we look at the whole drive:
```
user@VM:/$ cd /
```
Then we can run `sudo du -h --max-depth=1`, which displays the size of each directory under the current path:
```
user@VM:/$ sudo du -h --max-depth=1
4.0K    ./mnt
16K     ./lost+found
4.0K    ./nonexistent
4.0K    ./dev
1.1M    ./run
4.0K    ./opt
4.0K    ./srv
28M     ./boot
964M    ./usr
5.5G    ./var
0       ./proc
16K     ./root
164M    ./lib
600K    ./home
8.3M    ./bin
0       ./sys
11M     ./sbin
3.5M    ./build
8.0K    ./media
4.0K    ./selinux
16K     ./tmp
6.4M    ./etc
6.7G    .
```
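At this size the culprit is easy to spot, but on a noisier system it helps to sort the output. GNU `sort -h` understands the human-readable suffixes that `du -h` emits:

```bash
# Sort directories by size, largest last
sudo du -h --max-depth=1 | sort -h
```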
Quickly we can see that the problem is within the /var directory, so we `cd` into it and repeat the process:
```
user@VM:/$ cd /var
user@VM:/var$ sudo du -h --max-depth=1
4.0K    ./metrics
524M    ./mail
4.0K    ./opt
2.7G    ./log
4.0K    ./local
437M    ./lib
4.0K    ./crash
1.7M    ./backups
80K     ./spool
1.8G    ./www
4.0K    ./tmp
129M    ./cache
5.5G    .
```
Two items jump out here. One is /var/www, which holds a large website with a lot of PDFs in it, so its size isn't unexpected. The other is /var/log, which we're going to look at because it shouldn't be so large:
```
user@VM:/var$ cd log
user@VM:/var/log$ sudo du -h --max-depth=1
40K     ./ConsoleKit
8.0K    ./landscape
2.4G    ./apache2
4.0K    ./news
14M     ./installer
4.0K    ./dist-upgrade
364K    ./upstart
76K     ./apt
2.1M    ./mysql
12K     ./fsck
6.4M    ./samba
2.7G    .
```
Again, we see that one folder is taking up most of the space, so we `cd` into it and run `ls -l`:
```
user@VM:~$ cd /var/log/apache2
user@VM:/var/log/apache2$ ls -l
total 2512856
-rw-r----- 1 root adm    2334720 May 12 09:26 access.log
-rw-r----- 1 root adm    1560145 May  9 06:54 access.log.1
-rw-r----- 1 root adm 2567327744 May 12 09:22 error.log
-rw-r----- 1 root adm    1577785 May  9 06:55 error.log.1
-rw-r----- 1 root adm      80565 May 11 12:07 other_vhosts_access.log
-rw-r----- 1 root adm     238906 May 10 15:46 other_vhosts_access.log.1
-rw-r----- 1 root adm          0 May 11 06:41 ssl_access.log
-rw-r----- 1 root adm         67 May 10 11:09 ssl_access.log.1
```
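With more files in the directory, `ls` can spot the offender for us: `-S` sorts by file size (largest first) and `-h` makes the sizes human-readable:

```bash
# Largest files first, with human-readable sizes
ls -lhS
```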
Bingo! The error log is huge (it turns out I had created an infinite loop earlier in the day that spewed errors into the log until the disk filled up). It's a good thing it's an easy fix:
```
user@VM:/var/log/apache2$ sudo rm error.log
user@VM:/var/log/apache2$ df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/VM-root  7.2G  4.4G  2.4G  65% /
udev                 241M  4.0K  241M   1% /dev
tmpfs                100M  1.1M   99M   2% /run
none                 5.0M     0  5.0M   0% /run/lock
none                 249M     0  249M   0% /run/shm
none                 100M     0  100M   0% /run/user
/dev/sda1            228M   28M  188M  13% /boot
```
**Note:** If you run into this same problem, I also had to restart Apache (`sudo service apache2 restart`) for the space to be freed.
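The restart matters because on Linux, `rm` only removes the directory entry; the disk blocks aren't freed until every process holding the file open closes it, and Apache was still writing to the deleted error.log. As an alternative sketch (not what I did at the time), you can truncate the log in place so no restart is needed, and `lsof` can show deleted files that are still pinning disk space:

```bash
# Truncate the log to zero bytes; the open file descriptor stays valid,
# so Apache keeps running and the space is freed immediately
sudo truncate -s 0 /var/log/apache2/error.log

# List open-but-deleted files (link count 0) still consuming space
sudo lsof +L1
```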