Migrated Again
Wow. I've had this blog since 2002. Waaay back in the day, it was some proprietary format, and I migrated it 13 years ago to trac.
At that time, it was on a dedicated Red Hat box that also acted as my firewall.
At some point since then, I migrated it to VMware - see that topic for some of the problems.
Originally that VM image ran on my CentOS server (cf. https://bugs.centos.org/view.php?id=3884 ) and at some point it was migrated to my Windows 7 desktop.
Since it was in a VM, I could always snapshot it and really beat on it. I had files as far back as 2001 and GPG signatures for my RPMs from Red Hat OS before the Fedora/RHEL split. Over the years, I managed to beat it into submission to the point I had it running Fedora 31; of course that's built in now with dnf system-upgrade. But that's not the point. Fedora 32 broke Python 2, and trac isn't there yet. (Side note - the VM has been called webmail for years, but I uninstalled SquirrelMail and moved to Google hosting many years ago.)
With the COVID-19 quarantine, I decided to migrate this blog to containers so I can just use a pre-defined trac container and go on my merry way. Hopefully less maintenance in the future.
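For the curious, that kind of setup can be sketched roughly as the compose file below; the image name, port, and volume name are my stand-ins, not necessarily what's actually running:

```yaml
# Hypothetical sketch - image, port, and volume names are assumptions
version: "3"
services:
  trac:
    image: trac-on-python2        # stand-in for whatever pre-built trac image you pick
    ports:
      - "8080:8080"               # tracd's standalone port, if the image uses tracd
    volumes:
      - trac-data:/trac-env       # the trac environment lives on the named volume...
volumes:
  trac-data:                      # ...so the container itself stays ephemeral
```

That volume is the last hop in the data path diagram below.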
So, that's where it is now. As I tested the site from my wife's iPad (on cellular) I just had to marvel at how data travels to get this post out of my house:
(you) <=> (Cloudflare) <=> OpenWrt <=> Win10 Pro <=> Hyper-V Docker VM <=> Container [Ephemeral] <=> Docker Volume
RHEL/CentOS/VMWare pissed me off
(Originally posted 25 Oct 09, lost in server mishap, found in Google's cache of this page)
I cannot believe that a point release would hose me up so badly…
- http://bugs.centos.org/view.php?id=3884
- You can see what I did to fix it listed at the bottom
VMWare Client Running on Fedora 9
What a pain! You have to get the latest open-vm-tools from SourceForge. Do a configure and make && make check. But then you cannot actually install the files or VMWare gets pissy. After the make you need to pack up the kernel files you have created and patch the real VMWare installer with them:
for i in *; do mv ${i} ${i}-only; tar -cf ${i}.tar ${i}-only; done
cp *tar /usr/lib/vmware-tools/modules/source/
Then you can run the standard vmware-tools-config.pl and it will use the source you just upgraded.
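To see what that pack-up loop actually does, here is a throwaway run on scratch directories (the module names here are made up; the real ones come out of the open-vm-tools build):

```shell
# Demo of the pack-up loop on fake module dirs (names hypothetical)
rm -rf /tmp/vmtools-demo && mkdir -p /tmp/vmtools-demo && cd /tmp/vmtools-demo
mkdir vmhgfs vmxnet
# Same loop as above: rename each dir to <name>-only, then tar it up
for i in *; do mv ${i} ${i}-only; tar -cf ${i}.tar ${i}-only; done
ls   # now holds both the renamed -only dirs and their .tar archives
```

Note the glob in `for i in *` is expanded once, before the loop body runs, so the freshly created .tar files never get picked up as loop items.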
This page was assembled from various net resources…
Today I created a yum repository
This is what happens when you are at work and you have ISOs for the 6 CDs of CentOS 5.2 but NOT the DVD, and no connection to the 'net… I couldn't use the 5.2 installer thanks to this bug (it's an embedded Celeron 650). Since I went thru all the work, I also then imported the directory as a "shared folder" under VMWare player and then did the same upgrade path on that machine (I want it to mirror the embedded machine for all versions of everything, except it also has the gcc suite, etc).
One Time Only
(This can be done on any Linux machine with the external drive connected)
- I mounted the external drive under Linux and there are the 6 ISO CDs. I mounted each and copied off what was on it:
  cd /media/ext_drive/<install dir>
  mkdir mnt
  mount -o ro,loop <CDFILE>.iso mnt
  cp -urv mnt/CentOS .
  umount mnt
  - If I were doing this again, I may mount the 6 as /mnt1 thru /mnt6 and then try to use cp -l to make links?
  - (Optionally in another window to watch progress: watch -d 'lsof -c cp -s | cut -c37- | grep rpm ')
- (Repeat for all 6 - this gives us a CentOS subdir with all the RPMs. If I had the DVD instead of the 6 CDs, this would've been easier)
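The cp -l idea in that note can be sandboxed like this; every path here is made up for the demo:

```shell
# Demo of hard-linking instead of copying (paths hypothetical)
rm -rf /tmp/cpl-demo && mkdir -p /tmp/cpl-demo/src && cd /tmp/cpl-demo
echo "fake rpm payload" > src/foo.rpm
cp -rl src dst                      # -l makes hard links instead of copying data
[ src/foo.rpm -ef dst/foo.rpm ] && echo "same inode - no extra disk space used"
```

One catch: hard links cannot cross filesystems, so this only helps once the RPMs are staged on the same disk - linking straight off a mounted ISO wouldn't work, which may be why the note ends in a question mark.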
- Now we will make this new directory into an "official" repository
cd CentOS
rpm -i createrepo*rpm (glad that was there!)
mkdir repo_cache
createrepo -v -p -d -c repo_cache --update --skip-stat .
- This step takes forever (even longer than the copying above)
- With a DVD image, this is most likely not even needed!
Every Target Machine
- We need to disable all the remote repositories:
  - Edit /etc/yum.repos.d/CentOS-Base.repo and add enabled=0 to every section
  - Edit /etc/yum.repos.d/CentOS-Media.repo and change to enabled=1
    - Depending on where the external hard drive is, baseurl will need an added path to it
      - When I did it, it was file:///media/ext_drive/LinuxInstallers/CentOS-5.2-i386-bin-1to6/CentOS/
    - There is a known bug in 5.1 - the GPG signature key should be RPM-GPG-KEY-CentOS-5 (not "beta")
- Then run the upgrade:
  yum clean all
  yum install yum-protect-packages
  yum upgrade yum
  yum clean all
  yum upgrade --exclude=kernel\* -y | tee upgrade.log
  - (Optionally in another window to watch progress: watch -n1 'lsof -c yum -s | cut -c43- | grep rpm ')
- Check for warnings: grep warn upgrade.log
  - For this, you need to diff each file with the .rpmnew file or .rpmold file and merge them together.
- Reboot!
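That merge step can be sketched as a loop; the CONF_DIR knob is my own addition for the demo, the real files live under /etc:

```shell
# Sketch: diff each leftover .rpmnew against the live config it shadows
# (CONF_DIR is a made-up parameter so this isn't hard-wired to /etc)
CONF_DIR=${CONF_DIR:-/etc}
find "$CONF_DIR" -name '*.rpmnew' 2>/dev/null | while read -r new; do
    diff -u "${new%.rpmnew}" "$new"    # shows what the upgrade wanted to change
done
```

The same loop with *.rpmold works for the files where rpm kept your version instead.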
Compressing VMWare images
Wow, I thought I've posted this stuff before but could not find it when searching earlier today.
But that's OK because I've done something new today versus the many previous years (the "special case" below).
Anyway, the quickest way to reduce the size of a VMWare image (note: I am not talking about the physical space, I mean the size when you compress the image in the host OS, for example with tar and bzip2):
Reducing VM Image Size (Standard)
- telinit 1 # Drops you to single user and shuts down most other stuff
- Delete any files you don't need. This includes most of /tmp/
- dd if=/dev/zero of=delme bs=102400 || rm -rf delme # This will fill the unused space on the hard drive image with zeros. If you have VMWare set to expand-on-the-fly, it will maximize the size on the host OS, which may not be what you want. Use mount to show which partitions are being used - you need to do this for each partition (e.g. /boot). This is the "meat" of the issue. Do not background this process and then try to do the other partitions in parallel - remember, they are the same physical disk on the host OS and you will thrash your hard drive like crazy (been there).
- Check where your swap space is defined - it's in /etc/fstab
- swapoff -a # Turns off all swap space (you don't need it right now)
- dd if=/dev/zero of=/dev/yourswappartition bs=1024
  - If /etc/fstab mounted the swap by label: mkswap -L SWAPLabel /dev/yourswappartition
  - If /etc/fstab mounted by partition alone: mkswap /dev/yourswappartition
  - You don't need to turn the swap back on; on the next boot of the VM it will be handled since you ran mkswap.
- shutdown -h now
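The reason for the || rm -rf delme above is that dd is expected to die with "disk full" once the free space is consumed. Linux's /dev/full device fakes exactly that error, so the failure path can be demonstrated without filling a real disk:

```shell
# /dev/full always returns ENOSPC on write, standing in for a filled-up disk
dd if=/dev/zero of=/dev/full bs=1024 count=1 2>/dev/null || echo "dd hit disk-full; now the zero file gets deleted"
```

In the real command, dd failing is the success condition: the disk is now packed with zeros and the giant delme file can go.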
Reducing VM Image Size (Special Case)
The special case is what I ran into today. I backed up my work trac/svn VM server as usual. However, I told another customer that I would give them a server. So I need to remove the subversion repository and trac environment. Option 1: Delete them, and then redo all the dd stuff from above, which would be O(free space) vs O(repository). Since "free space" >> "repository", I was trying to avoid that. Option 2: Zero out the files that I don't want anymore. This has the advantage of still reclaiming the gigabytes of space while not waiting for all empty space to be removed. The secret was using the shred command:
find -type f | xargs shred -u -v -n 1 --random-source=/dev/zero
For those trying to understand it better, that is "find all files (that are files) and then pass the list to shred as parameters along with: delete the file when done (-u), tell me everything you do (-v), overwrite the file only once instead of the usual 25 (-n 1), and instead of using /dev/random for your "random" data, just use /dev/zero (--random-source=/dev/zero)". Note that using dd directly would have been a pain because I would have to know the size of each file (hundreds) but also it would truncate-on-write, meaning that the data being written is not guaranteed to be the actual data we wanted to blank out. That defeats the purpose!
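To convince yourself what that shred invocation writes, run it on a scratch file, dropping -u so the result is still there to inspect (the file and path here are made up):

```shell
# Throwaway demo: same flags as above minus -u, so the file survives inspection
rm -rf /tmp/shred-demo && mkdir /tmp/shred-demo
echo "secret repository data" > /tmp/shred-demo/f
shred -v -n 1 --random-source=/dev/zero /tmp/shred-demo/f
# Every byte is now NUL - stripping NULs leaves nothing:
[ "$(tr -d '\0' < /tmp/shred-demo/f | wc -c)" -eq 0 ] && echo "all zeros"
```

So the on-disk blocks really do end up zeroed in place, which is what lets the later image compression win back the space.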
Making This Obsolete
I need to check out this Zerotools package as soon as I can since it seems to do a similar thing all the time.
Obnoxious VMWare console beeps
The VMWare machines on my work laptop would chime whenever I did things like filename completion. The problem was, it was a BIOS-level super obnoxious beep that was likely pissing off my cube-neighbors (no volume control nor mute worked). I had turned off all sounds in the Windows host OS, which seemed to be the problem. So if your console beeps are out of hand, set Windows' "Default Beep" to something. That will have Windows re-intercepting the beep, and then the system volume control / mute will work again.
Shutting down VMWare clients
In /etc/services on CLIENT:
# Local services
shutdown        6666/tcp
In /etc/inetd.conf on CLIENT:
shutdown stream tcp nowait root /sbin/shutnow
In /sbin/shutnow of CLIENT: (you can prolly get rid of this and move this all into inetd.conf above, but I used to do other things too…)
#!/bin/bash
/sbin/shutdown -h now
On the CLIENT's iptables rules, I have:
0 0 DROP tcp -- eth1 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6666
So nobody can reach that port from eth1 (internet). The host will be hitting the port on eth2 which is the host-only virtual network.
Then on the HOST in /etc/init.d/kill_vms (new file):
#!/bin/sh
#
# chkconfig: 4 90 08
# description: Kills VMs on shutdown/reboot
/usr/bin/telnet 192.168.90.201 6666 < /dev/zero
PIDS=fake
while [ "foo$PIDS" != "foo" ]
do {
  echo "Delaying shutdown... VMWare still on $PIDS"
  sleep 10
  PIDS=`pidof vmware-vmx`
}; done
So then on the server you install the "kill_vms" with chkconfig (fix the IP from 192.168.90.201 to your virtual client IP of course!).
It won't work the first time you reboot, sorry. If you 'touch' the file /var/lock/subsys/kill_vms (at least on my ancient RH based system) then it should. Also, it will hang forever if you don't have the virtual machine set to 'Close on shutdown' and I think maybe another option in VMWare about closing if all clients close.