Compressing VMWare images
Wow, I thought I'd posted this stuff before, but I could not find it when searching earlier today. That's OK, though, because I've done something new today versus the many previous years (the "special case" below).
Anyway, here is the quickest way to reduce the size of a VMWare image (note: I am not talking about the physical space; I mean the size when you compress the image in the host OS).
Reducing VM Image Size (Standard)
telinit 1  # Drops you to single user and shuts down most other stuff
- Delete any files you don't need.
- dd if=/dev/zero of=delme bs=102400 || rm -rf delme
  This fills the unused space on the hard drive image with zeros (dd exits with an error once the disk is full, which is what triggers the rm). If you have VMWare set to expand-on-the-fly, the image will grow to its maximum size on the host OS, which may not be what you want. Use mount to show which partitions are in use; you need to do this once per partition (e.g. /boot). This is the "meat" of the issue. Do not background this process and then try to do the other partitions in parallel: remember, they are the same physical disk on the host OS and you will thrash your hard drive like crazy (been there).
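If you want to script the zeroing step, here is a minimal sketch. The zero_free_space helper and its optional size cap are my additions, not part of the original procedure; the cap exists only so you can dry-run it without actually filling a disk.

```shell
#!/bin/sh
# Sketch: zero the free space on one mounted filesystem at a time.
# zero_free_space is a hypothetical helper; call it once per mount point
# reported by `mount`, sequentially -- never in parallel, since all the
# partitions live on the same physical disk in the host OS.
zero_free_space() {
    mnt=$1                      # mount point whose free space to fill
    cap=${2:+count=$2}          # optional block-count cap, for dry runs
    # dd exits non-zero once the disk is full, so the file is removed anyway
    dd if=/dev/zero of="$mnt/delme" bs=1M $cap 2>/dev/null || true
    rm -f "$mnt/delme"
}

# usage (one partition at a time):
#   zero_free_space /
#   zero_free_space /boot
```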
- Check where your swap space is defined; it's in /etc/fstab.
- swapoff -a  # Turns off all swap space (you don't need it right now)
- dd if=/dev/zero of=/dev/yourswappartition bs=1024
- If /etc/fstab mounted the swap by label:
  mkswap -L SWAPLabel /dev/yourswappartition
  If /etc/fstab mounted the swap by partition alone:
  mkswap /dev/yourswappartition
- You don't need to turn the swap back on; the next boot of the VM will handle it, since you ran mkswap.
shutdown -h now
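Since the swap definition lives in /etc/fstab, you can also pull the device name out mechanically instead of reading the file by eye. A small sketch (the swap_devices helper is my own, not from the original steps):

```shell
#!/bin/sh
# Sketch: print the swap device(s) defined in an fstab-format file.
# Field 3 of each fstab entry is the filesystem type; "swap" marks swap.
swap_devices() {
    awk '$1 !~ /^#/ && $3 == "swap" { print $1 }' "${1:-/etc/fstab}"
}

# usage: swap_devices             # reads /etc/fstab
#        swap_devices ./my.fstab  # or any fstab-format file
```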
Reducing VM Image Size (Special Case)
The special case is what I ran into today. I backed up my work trac/svn VM server as usual; however, I had told another customer that I would give them a server, so I needed to remove the subversion repository and trac environment. Option 1: delete them, then redo all the dd steps from above, which would be O(free space) vs. O(repository). Since "free space" >> "repository", I wanted to avoid that. Option 2: zero out just the files I don't want anymore. This has the advantage of still reclaiming the gigabytes of space without waiting for all the empty space to be rewritten. The secret was this command, run inside the directories to be removed:
find -type f | xargs shred -u -v -n 1 --random-source=/dev/zero
For those trying to understand it better: find all files (that are actually files) and pass the list to shred as parameters, along with these options: delete the file when done (-u), tell me everything you do (-v), overwrite the file only once instead of the usual 25 passes (-n 1), and use /dev/zero instead of /dev/random as the "random" data source (--random-source=/dev/zero). Note that using dd directly would have been a pain: not only would I have had to know the size of each file (there were hundreds), but dd truncates on write, meaning the zeros being written would not be guaranteed to land on the blocks that actually held the data we wanted to blank out. That defeats the purpose!
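To make the truncate-on-write point concrete, here is roughly what the dd-per-file alternative would have required (a sketch; zero_file is a hypothetical helper, not something from the post). conv=notrunc is the key: without it, dd truncates the file first, and the zeros may be written to freshly allocated blocks instead of the ones that held the old data.

```shell
#!/bin/sh
# Sketch: the per-file dd approach the post avoids. You need the exact
# size of every file, plus conv=notrunc so dd overwrites the existing
# blocks in place rather than truncating the file first.
zero_file() {
    f=$1
    sz=$(wc -c < "$f" | tr -d ' ')   # file size in bytes
    dd if=/dev/zero of="$f" bs=1 count="$sz" conv=notrunc 2>/dev/null
}

# usage: zero_file secrets.txt && rm secrets.txt
# (shred -u bundles the overwrite and the delete into one step)
```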
Making This Obsolete
I need to check out the Zerotools package as soon as I can, since it seems to do a similar thing all the time, automatically.