Temporary File Descriptors (FDs) in Bash
Found this useful the other day when I needed an FD and not a file name… in my example, I was testing some python code where C++ was doing the heavy lifting and was going to pass an open FD to python.
exec {tempo}<> scratchfile
echo ${tempo}
ls -halF /proc/self/fd
command --fd=${tempo}
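When the test is done, the same exec syntax closes the descriptor - this is stock bash, nothing exotic:

exec {tempo}>&-   # Close the FD allocated above
rm -f scratchfile # The backing file is separate; remove it if it's no longer needed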
Migrated Again
Wow. I've had this blog since 2002. Waaay back in the day, it was some proprietary format, and I migrated it 13 years ago to trac.
At that time, it was on a dedicated Red Hat box that also acted as my firewall.
At some point since then, I migrated it to vmware - see that topic for some of the problems.
Originally that VM image ran on my CentOS server (cf. https://bugs.centos.org/view.php?id=3884 ) and at some point it was migrated to my Windows 7 desktop.
Since it was in a VM, I could always snapshot and really beat on it. I had files as far back as 2001 and GPG signatures for my RPMs from Red Hat OS before the Fedora/RHEL split. Over the years, I've managed to beat it into submission to the point I had it running Fedora 31; of course upgrading is built-in now with dnf system-upgrade. But that's not the point. Fedora 32 broke Python2, and trac isn't there yet. (Side note - the VM has been called webmail for years, but I uninstalled SquirrelMail and moved to Google-hosting many years ago.)
With the COVID-19 quarantine, I decided to migrate this blog to containers so I can just use a pre-defined trac container and go on my merry way. Hopefully less maintenance in the future.
So, that's where it is now. As I tested the site from my wife's iPad (on cellular) I just had to marvel at how data travels to get this post out of my house:
(you) <=> (Cloudflare) <=> OpenWrt <=> Win10 Pro <=> Hyper-V Docker VM <=> Container [Ephemeral] <=> Docker Volume
WiFi Checker for OpenWrt
It's been a while since I dumped anything here… hope I can post…
I have OpenWrt on my home router, and it's using a secondary chipset to run a guest-only network that sometimes randomly drops out. I've been told they no longer support it in that manner, which explains a lot. Anyway, in case my config gets nuked, here's what I did:
# cat /etc/crontabs/root
* * * * * /etc/custom_scripts/guest_check.sh

# cat /etc/custom_scripts/guest_check.sh
#!/bin/sh
if iwinfo | grep -iq ragnarok_guest; then
  rm -f /tmp/guest_down
  exit 0
fi
if [ -e /tmp/guest_down ]; then
  echo "$(date) -- REBOOTING" > /var/log/guest_check
  reboot
fi
touch /tmp/guest_down
echo "$(date) -- DOWN" > /var/log/guest_check
#service network stop
#service network start
wifi down
wifi up
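One gotcha worth noting (from memory, so verify on your build): the crontab only fires if cron itself is enabled and running, which isn't always the case out of the box:

/etc/init.d/cron enable
/etc/init.d/cron start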
Chrome on CentOS 7
So my Google Chrome on my CentOS 7 box updated, and SELinux doesn't like it.
There's an official bug for it - https://bugzilla.redhat.com/show_bug.cgi?id=1251996 - but I don't know when that will propagate down.
Until then, here's what I did, with some plaintext showing what was happening:
$ sudo grep chrome /var/log/audit/audit.log | grep setcap | audit2allow

#============= chrome_sandbox_t ==============
#!!!! This avc is allowed in the current policy
allow chrome_sandbox_t self:process setcap;

$ sudo grep chrome /var/log/audit/audit.log | grep setcap | audit2allow -M chrome.pol
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i chrome.pol.pp

$ cat chrome.pol.te
module chrome.pol 1.0;

require {
        type chrome_sandbox_t;
        class process setcap;
}

#============= chrome_sandbox_t ==============
#!!!! This avc is allowed in the current policy
allow chrome_sandbox_t self:process setcap;

$ sudo semodule -i chrome.pol.pp
Upgrading to Fedora 21
These are mostly my personal note-to-self, but in case it helps somebody else…
fedup - I've used this a few times, and man does it make upgrades easy. I had some key problems but those were easy enough to fix.
My web server was "down" and I was looking at iptables and saw all this stuff about zones, etc. I checked /etc/sysconfig/iptables and it looked good, so when I ran system-config-firewall-tui it told me that "FirewallD is active, please use firewall-cmd" - of course, now I see that in the FAQ (I used nonproduct).
It looks like they have a new Firewall Daemon. In the end, all I had to do was:
firewall-cmd --add-service=http --zone=public --permanent
firewall-cmd --reload
There are other commands I used in between like --get-services to see what was predefined and --list-services to ensure http was added after the reload.
Since it's in a VM, I do have a screenshot of the happy hot dog that mysteriously isn't in my /var/log/fedup.log file.
Fixing sudo timeouts
So at work, a script needs to download a set of large RPMs and then install them. This is in a Makefile, so if sudo returns a failure, the build dies and you need to find the temporary directory, or re-run. sudo can be told to change the timeout, but that seems to only be possible by modifying /etc/sudoers, not via a commandline option. So if the user walks away during the download and doesn't come back within five minutes (by default) after the download is complete, no dice.
Here's the applicable section of the Makefile:
# We are passed the RPM_BASE_NAME - we will pull down the entire matching directory
ifdef RPM_BASE_NAME
TMP_DIR:=$(shell mktemp -d)
endif

rpminstall:
	echo Fetching $(RPM_BASE_NAME) RPMs...
	# -r=recursive, -nv=non-verbose (but not quiet), -nd=make no directories, -nH=make no host names
	# -P=move to path first, -Arpm=accept only RPM files
	wget -r -nv -nd -nH -P $(TMP_DIR) -Arpm -np $(DLSITE)/$(RPM_BASE_NAME)/
	# If you walk away and come back, your download was wasted after sudo's five minute timeout!
	sudo -n ls /tmp > /dev/null 2>/dev/null || read -n1 -sp "sudo credentials have expired. Press any key when you are ready to continue." dontcare
	echo " "
	sudo -p "Please enter %u's password to install required RPMs: " rpm -Uvh $(TMP_DIR)/*rpm
	-rm -rf $(TMP_DIR)
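If editing /etc/sudoers is off the table, another trick is to keep the sudo timestamp warm while the download runs. This is just a sketch of the idea, not what the Makefile above does - sudo -v refreshes the cached credentials, and -n makes it fail instead of prompting:

sudo -v                                          # Prompt once, up front
( while true; do sudo -n -v; sleep 60; done ) &  # Refresh the timestamp every minute
KEEPALIVE=$!
wget -r -nv -nd -nH -P "$TMP_DIR" -Arpm -np "$DLSITE/$RPM_BASE_NAME/"
kill "$KEEPALIVE"                                # Stop the refresher before installing
sudo rpm -Uvh "$TMP_DIR"/*rpm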
Raspberry Pi and a BT Keyboard
I bought a Favi Bluetooth keyboard to use with the Raspberry Pi.
I wish I documented better how I got it running. I followed somebody else's page, but don't have the details…
Some of the root user's history:

update-rc.d -f dbus defaults
apt-get install bluetooth bluez-utils blueman
hcitool scan
hcitool dev
hcitool lescan
hcitool inq
hciconfig -a
bluez-simple-agent hci0 54:46:6B:xx:xx:xx
bluez-test-device trusted 54:46:6B:xx:xx:xx yes
bluez-test-input connect 54:46:6B:xx:xx:xx

I added /etc/init.d/bluetooth restart to /etc/rc.local
I possibly added blacklist hci_usb to /etc/modprobe.d/raspi-blacklist.conf
I can't get it to work again, so maybe some day…
Scripting konsole and tabs
At work I want to launch two programs in separate tabs in konsole from a script, so I whipped this one up:
#!/bin/bash

checkfile() {
  if [ ! -f $1 ]; then
    echo "could not find $1"
    exit 99
  else
    echo "OK"
  fi
}

# Check for App1 XML
echo -n "Checking for App 1 XML... "
XMLA=/domain/DM.xml
checkfile ${DEVROOT}/${XMLA}

# Check for App2 XML
echo -n "Checking for App 2 XML... "
hostname=$(hostname)
XMLB=/domain/DM_${hostname}.xml
checkfile ${DEVROOT}/${XMLB}

# Launch Konsole
echo -n "Launching konsole... "
K=$(dcopstart konsole-script)
[ -z "${K}" ] && exit 98

# Create second tab and resize
SDA=$(dcop $K konsole currentSession)
SDB=$(dcop $K konsole newSession)
dcop $K $SDA setSize 121x25

# Let bash login, etc.
sleep 1

# Rename the tabs
dcop $K $SDA renameSession "App 1"
dcop $K $SDB renameSession "App 2"

# Start services, letting user watch
echo -n "starting app1... "
dcop $K konsole activateSession $SDA
dcop $K $SDA sendSession "echo -ne '\033]0;DEV (${hostname})\007' && clear && starter $XMLA"
sleep 2
echo -n "starting app2... "
dcop $K konsole activateSession $SDB
dcop $K $SDB sendSession "echo -ne '\033]0;DEV (${hostname})\007' && clear && starter $XMLB"
echo done.
The funky echo commands will set the application title to "DEV (hostname)" while the tab title is set with renameSession.
sudo: sorry, you must have a tty to run sudo
Ran into this the other day at work on RHEL5. Unfortunately, net searches come up with a not-so-great answer - "just comment out Defaults requiretty." Don't you think it's there for a reason?
The reason is that without having TTY control characters interpreted, unless you are using the "NOPASSWD" option, sudo cannot mask out your password and it will be printed to your screen for all to see!
The simplest (and most proper IMHO) work-around is to simply use "ssh -t" instead of "ssh" in your command line that is calling sudo on the remote machine.
Screen
Screen is a useful tool when remoting into a Unix box. It's been around forever, and I'm just going to document the .screenrc that I use here:
altscreen
autodetach on
hardstatus alwayslastline "%{= bY}%3n %t%? @%u%?%? [%h]%?%=%c"
vbell on
startup_message off
pow_detach_msg "Screen session of $LOGNAME $:cr:$:nl:ended."
defscrollback 10000
nethack on
zmodem off
caption always '%{= bY}%-Lw%{= bm}%50>%n%f %t%{-}%+Lw%<'
dinfo
fit
defmonitor on
verbose on
Using Kompare
So I was using Python to do some pretty diffs (will post that some time soon) and a coworker pointed out the program Kompare on Linux. I don't normally use Linux as a desktop, only servers. Anyway, it gives me an awesome interface, almost as good as the TortoiseSVN diff viewer I am used to. It can take the output of svn diff and then will find the original files and show you graphically all the changes.
The command I have been using:
svn diff | kompare -o -
RHEL/CentOS/VMWare pissed me off
(Originally posted 25 Oct 09, lost in server mishap, found in Google's cache of this page)
I cannot believe that a point release would hose me up so badly…
- http://bugs.centos.org/view.php?id=3884
- You can see what I did to fix it listed at the bottom
More than 4 serial ports under Linux
(Originally posted 24 Oct 09, lost in server mishap, found in Google's cache of this page)
So at work I am using a PCI104 octal serial port board. It's pretty cool that Linux now supports those OOB, but I had problems; I only saw the first two ports!
After doing a bunch of research; I finally found the problem. I had assumed it was something with the chipset itself. However, it is a problem with the default kernel build from RHEL/CentOS. They only allow up to four by default! To get more (up to 32 with the RHEL/CentOS kernel), you have to add to the command line in GRUB:
8250.nr_uarts=12
Again, that can be up to 32. I chose 12 because "traditionally" the mobo could have up to four. That made the two on the mobo ttyS0 and ttyS1, so the octal card has ttyS4 to ttyS11. So ttyS2 and ttyS3 are unused. A quick check with dmesg | grep ttyS will show them being allocated.
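For reference, on the GRUB-legacy setups those RHEL/CentOS versions shipped, the option lands on the kernel line of /boot/grub/grub.conf - something like this (the kernel version and root device here are made up):

title CentOS (2.6.18-8.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 8250.nr_uarts=12
        initrd /initrd-2.6.18-8.el5.img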
Side note: You can check what the default is by doing grep CONFIG_SERIAL_8250 /boot/config-`uname -r` and looking for CONFIG_SERIAL_8250_RUNTIME_UARTS. CONFIG_SERIAL_8250_NR_UARTS is the maximum you can have without rebuilding the kernel.
Maybe I'll get inspired and blog sometime about the cool stuff I did with udev so that I can configure a box and then the software would just "know" which serial port the (external device) was on by calling /dev/projectname/extdevice.
RPMBuild stupidity
OK, this just pisses me off. It's plain dumb - rpmbuild expands macros before stripping comments. So what happened to me today (and what I wasted an hour on) was that a multi-line macro was being inserted into a comment!
Based on a quick search of bugzilla at Red Hat, I'm not the only one - see 74949 and 99100. They say WONTFIX but that's just dumb in my not-so-humble opinion.
So now you know. If you are making a spec file for an RPM:
%files
# %build
will screw you over like it did me. You need to:
%files
# %%build
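In other words, inside a spec file comment a single % is still expanded; doubling it is what makes it literal:

# %build   <- BAD: the macro is expanded first, and only its first line stays commented
# %%build  <- OK: %% escapes the macro, leaving a literal "%build" in the comment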
VMWare Client Running on Fedora 9
What a pain! You have to get the latest open-vm-tools from SourceForge. Do a configure and make && make check. But then you cannot actually install the files or VMWare gets pissy.
After the make you need to pack up the kernel files you have created and patch the real VMWare installer with them:
for i in *; do mv ${i} ${i}-only; tar -cf ${i}.tar ${i}-only; done
cp *tar /usr/lib/vmware-tools/modules/source/
Then you can run the standard vmware-tools-config.pl and it will use the source you just upgraded.
This page was assembled from various net resources…
Today I created a yum repository
This is what happens when you are at work and you have ISOs for the 6 CDs of CentOS 5.2 but NOT the DVD, and no connection to the 'net… I couldn't use the 5.2 installer thanks to this bug (it's an embedded Celeron 650). Since I went thru all the work, I also then imported the directory as a "shared folder" under VMWare player and then did the same upgrade path on that machine (I want it to mirror the embedded machine for all versions of everything, except it also has the gcc suite, etc).
One Time Only
(This can be done on any Linux machine with the external drive connected)
- I mounted the external drive under Linux and there are the 6 ISO CDs. I mounted each and then upgraded what was on it that we already had installed.
  - cd /media/ext_drive/<install dir>
  - mkdir mnt
  - mount -o ro,loop <CDFILE>.iso mnt
  - cp -urv mnt/CentOS .
    - If I were doing this again, I may mount the 6 as /mnt1 thru /mnt6 and then try to use cp -l to make links?
    - (Optionally in another window to watch progress: watch -d 'lsof -c cp -s | cut -c37- | grep rpm ')
  - umount mnt
  - (Repeat for all 6 - this gives us a CentOS subdir with all the RPMs. If I had the DVD instead of the 6 CDs, this would've been easier)
- Now we will make this new directory into an "official" repository
  - cd CentOS
  - rpm -i createrepo*rpm (glad that was there!)
  - mkdir repo_cache
  - createrepo -v -p -d -c repo_cache --update --skip-stat .
    - This step takes forever (even longer than the copying above)
    - With a DVD image, this is most likely not even needed!
Every Target Machine
- We need to disable all the remote repositories:
  - Edit /etc/yum.repos.d/CentOS-Base.repo and add enabled=0 to every section
  - Edit /etc/yum.repos.d/CentOS-Media.repo and change to enabled=1
    - Depending on where the external hard drive is, baseurl will need an added path to it
      - When I did it, it was file:///media/ext_drive/LinuxInstallers/CentOS-5.2-i386-bin-1to6/CentOS/
    - There is a known bug in 5.1 - the GPG signature key should be RPM-GPG-KEY-CentOS-5 (not "beta")
- yum clean all
- yum install yum-protect-packages
- yum upgrade yum
- yum clean all
- yum upgrade --exclude=kernel\* -y | tee upgrade.log
  - (Optionally in another window to watch progress: watch -n1 'lsof -c yum -s | cut -c43- | grep rpm ')
  - grep warn upgrade.log
    - For this, you need to diff each file with the .rpmnew file or .rpmold file and merge them together.
- Reboot!
Random Unix/Linux Tips
Yeah, that's pretty much the subtitle of this blog, but I found another that has similar stuff:
http://users.softlab.ece.ntua.gr/~ttsiod/tricks.html
My favs (I'm copying in case the site goes down):
Convert a static lib (.a) into a dynamic one (.so)
gcc -shared -o libXxf86vm.so.1.0 \
    -Wl,-soname,libXxf86vm.so.1 \
    -Wl,--whole-archive,libXxf86vm.a,--no-whole-archive
Create PNGs from a pdf presentation
gs -dSAFER -dBATCH -dNOPAUSE -dTextAlphaBits=4 \
    -dGraphicsAlphaBits=4 \
    -r85 -q -sDEVICE=png16m -sOutputFile=icfp-pg%02d.png \
    PhDPresentation.pdf
Read a damaged CD/DVD's valid parts and get the rest with rsync
As is often the case, when I bring some burned CD/DVD from work, I find out that it's bad at some offset. I came up with this Perl script:

#!/usr/bin/perl -w
use strict;
my $i=0;
select((select(STDOUT), $| = 1)[0]);
unlink("data");
system("dd if=/dev/zero of=zero bs=2K count=1");
my $step = 1;
print "Sector: ";
while(1) {
    system("dd if=/cdrom/BadSector of=sector bs=2K skip=$i ".
           "count=1 >/dev/null 2>&1");
    if ($? == 0) {
        print sprintf("\b\b\b\b\b\b\b\b%08d", $i);
        system("cat sector >> data");
        $step = 1;
        $i += $step;
    } else {
        system("cat zero >> data");
        $step += $step;
        $i += $step;
        print "\nJumped over $step\nSector: ";
    }
}

With the CD/DVD mounted on /cdrom/, it will slowly but effectively copy sector by sector of the file mentioned in the 'dd' line into the file called 'data'. Reading sector-by-sector proves to be enough to correct a multitude of read-errors on my DVD reader, but even if that isn't enough, it will quickly jump over the problem areas in the DVD, writing blank 'sectors' to mark the jumps. After that, rsync can save the day:

rsync -vvv -B 131072 -e ssh \
    [email protected]:/path/todata data
Compressing Subversion checkouts
I'll let the file intro explain itself:
# "Disk space is cheap!" - SVN Authors # "Except when backing it up!" - RevRagnarok # This script will drastically reduce the size of an svn checkout so you can back up the folder. # It will include a script to re-expand the directory with NO interaction with the server. # It will also (optionally) write a script /tmp/svn_uncompress.sh that will restore all compressed # folders.
Something to check out in the future: http://scord.sourceforge.net/
Update 3 Apr 08: Updated the script to touch a file saying it ran so it won't run again. Also have it dump a tiny "readme" file to let somebody else know what is going on.
Update 4 Apr 08: Fixed bug with deep paths.
Update 22 Aug 08: Huge changes, now using "proper" native calls.
Compressing VMWare images
Wow, I thought I've posted this stuff before but could not find it when searching earlier today.
But that's OK because I've done something new today versus the many previous years (the "special case" below).
Anyway, the quickest way to reduce the size of a VMWare image (note: I am not talking about the physical space, I mean the size when you compress the image in the host OS, for example with tar with bzip2):
Reducing VM Image Size (Standard)
- telinit 1 # Drops you to single user and shuts down most other stuff
- Delete any files you don't need. This includes most of /tmp/
- dd if=/dev/zero of=delme bs=102400 || rm -rf delme # This will fill the unused space on the hard drive image with zeros. If you have VMWare set to expand-on-the-fly, it will maximize the size on the host OS, which may not be what you want. Use mount to show which partitions are being used - you need to do this for each partition (e.g. /boot). This is the "meat" of the issue. Do not background this process and then try to do the other partitions in parallel - remember, they are the same physical disk on the host OS and you will thrash your hard drive like crazy (been there).
- Check where your swap space is defined - it's in /etc/fstab
- swapoff -a # Turns off all swap space (you don't need it right now)
- dd if=/dev/zero of=/dev/yourswappartition bs=1024
- If /etc/fstab mounted the swap by label: mkswap -L SWAPLabel /dev/yourswappartition
- If /etc/fstab mounted by partition alone: mkswap /dev/yourswappartition
- You don't need to turn the swap back on; on the next boot of the VM it will be handled since you ran mkswap.
- shutdown -h now
Reducing VM Image Size (Special Case)
The special case is what I ran into today. I backed up my work trac/svn VM server as usual. However, I told another customer that I would give them a server. So I need to remove the subversion repository and trac environment. Option 1: Delete them, and then redo all the dd stuff from above, which would be O(free space) vs O(repository). Since "free space" >> "repository", I was trying to avoid that. Option 2: Zero out the files that I don't want anymore. This has the advantage of still reclaiming the gigabytes of space while not waiting for all empty space to be removed. The secret was using the shred command:
find -type f | xargs shred -u -v -n 1 --random-source=/dev/zero
For those trying to understand it better, that is "find all files (that are files) and then pass the list to shred as parameters" along with: delete the file when done (-u), tell me everything you do (-v), overwrite the file only once instead of the usual 25 (-n 1), and instead of using /dev/random for your "random" data, just use /dev/zero (--random-source=/dev/zero). Note that using dd directly would have been a pain because I would have to know the size of each file (hundreds) but also it would truncate-on-write, meaning that the data being written is not guaranteed to be the actual data we wanted to blank out. That defeats the purpose!
Making This Obsolete
I need to check out this Zerotools package as soon as I can since it seems to do a similar thing all the time.
How to restore a Windows restore point from Linux
Of course, the Ubuntu chest-thumping is kinda annoying and irrelevant. In fact, I would try to find any Live CD with NTFS support (Fedora maybe?) - that's all you really need.
It would've been easier for him to just check the GRUB config file.
Pretty "Diff" in Linux
OK, this is a sad day. I like my Windows utility more than my Linux one. Hopefully somebody out there can point me in the right direction.
I want a graphical diff that looks good. I am used to the beautiful ones in trac and TortoiseMerge (part of TortoiseSVN). I am hoping for something that is available in a default Fedora install that I just haven't seen, or one I can yum install from a virgin install (rules out xxdiff and fldiff). colordiff shows the lines in color but doesn't do colors within the lines.
This is the closest I have found:
pdiff file1 file2 -o - | kghostview -
If anybody has a better one that is easy to use from the command line (so not something like from within emacs or eclipse) then please let me know.
VGA Console modes for laptops
Finally found a table explaining the vga "modes" that the Linux kernel handles. For some reason, my work laptop (Dell M1710) only gets 0x31B, but 0x31F doesn't work.
I wanna rsync all night long...
I use rsync a lot on my PC boxes (Cygwin) and Linux servers. I keep forgetting what I do where, so usually I have a file called "command" so I can just ". command" with bash.
Anyway, here are a few so I remember what I did. Maybe you will find them helpful too:
Backup music drive (M:, ext3) to external USB drive (X:\!Music, NTFS)
cd /cygdrive/x/\!Music
rsync -av --size-only --exclude command --exclude '!Books on MP3' --exclude '!FTP' /cygdrive/m/ ./
Backup external USB drive (X:, NTFS) to external FireWire drive (I:, FAT32)
(Yes, I backup my backup, long story…)
cd /cygdrive/i
rsync -av --size-only --exclude command --exclude *iso --exclude '/System Volume Information' /cygdrive/x/ ./
Keep my Cygwin mirror up to date on Mjolnir (Linux server)
cd /share/shared/Cygwin_mirror
rsync -va --stats --delete --exclude command --exclude /mail-archives --progress rsync://mirrors.xmission.com/cygwin/ ./ && touch date
wget http://www.cygwin.com/setup.exe
mv -f setup.exe ../
Getting Tcl/Tk stuff to work on Fedora 8
I have a third party app that has an installer that is written in Tcl/Tk. Apparently, my Fedora 8 install has a busted install of wish, the shell it uses. Here's the ass-backwards way I got it to run.
The error was "Application init failed … This probably means that Tcl wasn't installed properly."
First, I ran tclsh manually to figure out where it was looking - puts $auto_path shows me. It was looking in /build/xfndry8/J.36/env/TDS/Tcl/dist/export/lib instead of, oh, I dunno, maybe /usr/share/tcl8.4/tcl8.4. However, Tk looks at /usr/lib/tk8.4 instead of /usr/share/tk8.4…
mkdir -p /build/xfndry8/J.36/env/TDS/Tcl/dist/export/lib
cd /build/xfndry8/J.36/env/TDS/Tcl/dist/export/lib
ln -s /usr/share/tcl8.4/tcl8.4 /build/xfndry8/J.36/env/TDS/Tcl/dist/export/lib
cd /usr/lib
mv tk8.4 tk8.4_orig
ln -s /usr/share/tk8.4 tk8.4
cp tk8.4_orig/pkgIndex.tcl tk8.4
Note: This is all scratched on the back of a Dilbert calendar sheet so there may be a typo or two.
Follow-up
Apparently, one of my yum updates broke it. I needed to re-run the mv and ln commands. For the record, I'm now at:
tcl-8.4.17-1.fc8
tk-8.4.17-2.fc8
I decided to file a bug…
Follow-up 2
I should've known - when all else, fails, blame Xilinx. If I had paid attention, I would've realized that Xilinx tools version 8 would be xfndry8 and J.36 is their internal build number (IIRC 9.2SP4 is J.40).
Xilinx BIT files and the Linux/Unix/BSD "file" command
The attached file will ID a Xilinx BIT file and tell you when it was compiled, the original NCD file name, and most importantly the chip it is for. It doesn't give a speed grade, but it gives all the other stuff.
All credit goes to the FPGA FAQ Question 26.
To install on a machine that already has file installed (yours probably does) you need to find your magic file. I will present what I did on a Cygwin box as an example, season to taste:
cd /usr/share/file/
rm -rf magic.mgc
cat /tmp/xilinx-magic >> magic
file -C
The last command "compiles" the magic
file into magic.mgc
. To make sure it all worked, you can grep -i xilinx magic*
and see a few spots.
Example output:
admaras@brisingamen ~/projects/ss/trunk/vhdl
$ file */*bit
BenADDA/benadda.bit:                Xilinx BIT file - from BenADDA.ncd - for 2v6000ff1152 - built 2007/ 6/27(13:19:26) - data length 0x23d014ff
BenADDAV4/benadda.bit:              Xilinx BIT file - from BenADDA.ncd - for 4vsx55ff1148 - built 2008/01/07(15:37:49) - data length 0x1f3928ff
BenADDAV4_Clock/mybenaddaclock.bit: Xilinx BIT file - from MyBenADDAClock.ncd - for 2v80cs144 - built 2008/01/11(14:18:37) - data length 0x1652cff
BenDATAV4/bendatadd.bit:            Xilinx BIT file - from BenDATADD.ncd - for 4vlx160ff1148 - built 2008/01/11(17:53:27) - data length 0x4cf4b0ff
BenNUEY/bennuey.bit:                Xilinx BIT file - from BenNUEY.ncd - for 2vp50ff1152 - built 2008/01/10(17:14:41) - data length 0x2447c4ff
This file has been submitted to the maintainer of the file command, so some day it may come with a default build.
Makefile dependencies
So I inherited a project that apparently had Makefiles written by people who had no idea how Makefiles are supposed to work.
When I edited one of the header files and reran make, nothing happened. So I read some stuff online about making gcc auto-create dependency files. However, they seemed to be outdated pages that referred to using sed and stuff when gcc seems to automagically dump the proper format already. And this is a RHEL3 box, so not exactly cutting edge.
Anyway, I went with a hybrid approach of a few different pages I found, so figured I would share it here.
These rely on the "main" Makefiles defining OBJS as the target files (.o, e.g. SocketHandler.o). In the main Makefile, I redefined their CLEAN_CMD to add $(OBJS:.o=.d) to the $(RM) command. Then, after OBJS is defined, I include Makefile.dep.
Also note that this is a little convoluted because the project actually combines a bunch of .o files into .a library files. For non-library compilations, the targets would just be %o and the gcc command line would be slightly tweaked. I just wanted to show the newer-than-any-webpage-I-saw -MF option. No more sed or redirects needed.
Makefile.dep:
# This Makefile will generate .d files for every source file found.
# This allows make to handle header files properly.
# This file must be included AFTER $OBJS has been defined, not in
# the global Makefile

# Include existing .d if they are present:
-include $(OBJS:.o=.d)

# Override default cpp and c compiling instructions:
$%(%.o) : %.c
	$(CC) -MM -MP $(CPPFLAGS) $(CFLAGS) $*.c -MT "$@($*.o)" -MF $*.d
	$(CC) -c $(CPPFLAGS) $(CFLAGS) $*.c -o $*.o
	$(AR) cr $@ $*.o

$%(%.o) : %.cpp
	$(CXX) -MM -MP $(CPPFLAGS) $(CXXFLAGS) $*.cpp -MT "$@($*.o)" -MF $*.d
	$(CXX) -c $(CPPFLAGS) $(CXXFLAGS) $*.cpp -o $*.o
	$(AR) cr $@ $*.o
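For context, the hookup in the main Makefile looks roughly like this (OBJS and CLEAN_CMD are the project's, per the description above; the object file names and TARGET are invented for the sketch):

OBJS = SocketHandler.o SomeOtherThing.o
CLEAN_CMD = $(RM) $(OBJS) $(OBJS:.o=.d) $(TARGET)

include Makefile.dep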
Obnoxious VMWare console beeps
The VMWare machines on my work laptop would chime whenever I did things like filename completion. The problem was, it was a BIOS-level super obnoxious beep that was likely pissing off my cube-neighbors (no volume control nor mute worked). I had turned off all sounds in the Windows host OS, which seemed to be the problem. So if your console beeps are out of hand, set the Windows "Default Beep" to something. That will have Windows re-intercepting the beep, and then the system volume control / mute will work again.
Mail relaying fun
For the second time, my damned ISP has blocked port 25. No real reason, I have secure Linux boxes as the only things sending email; I would get 5-6 server reports a day. That's all. But they blocked me again, and I am sick of arguing with the scripted Indian morons.
Main machine
Anyway, on the main Linux box, ragnarokjr (named so because it is a VM box), I was able to finally get qmail to send all mail thru TLS:
- Unpacked netqmail
- Applied the full netqmail TLS patch to get base64.h and base64.c
- Erased it all and re-unzipped
- Copied in the base64.h and base64.c
- Applied the remote auth patch only
- "make qmail-remote"
- Copied qmail-remote over /var/qmail/bin/qmail-remote
- Edited /var/qmail/control/smtproutes to include name and password: :smtp.isp.net:587 username password
- Made smtproutes owned by "qmailr" and chmod og-r so it's kinda secured
So now qmail-remote will use TLS on port 587 as needed to get around the stupid block…
Other machines
One of my other machines runs CentOS, which uses exim instead of qmail, and it took me a while to find this FAQ.
/etc/exim/exim.conf, in the "routers" section:

send_to_gateway:
  driver = manualroute
  transport = remote_smtp
  route_list = * ragnarokjr.revragnarok.com
  no_more
And of course, /etc/init.d/exim restart
Hopefully, this can help somebody else. I was searching all kinds of terms like "exim upstream" (lots of Debian stuff saying what changed from standard) and it took a bit…
Very odd subversion issue
So, I had a file that refused to be checked in. Seems the file had previously been checked in, but then deleted with the TortoiseSVN checkbox "Keep Locks" checked. So it wouldn't go away. Anyway, the solution was:

svnadmin rmlocks /data/svn_repos/vhdl_bn /bn.bit
http://paste.lisp.org/display/49343
http://colabti.de/irclogger/irclogger_log/svn?date=2007-10-17,Wed;raw=on
(not really "trac" but I don't have a "svn" tag…)
FAT32 perl utilities
As noted before, my work laptop dual boots into WinXP and Fedora Core 7. They share a large FAT32 partition. Yesterday I finally got a 500GB external drive at work to back up my stuff. It's also FAT32. So I whipped up this quick script that splits a large data stream (using redirection or cat would make files work) and dumps it in 1GB slices. The second has some modifications to instead fill up the hard drive with zeroes, which is needed to make a backup of it more compressible. On a Linux box, I normally just do dd if=/dev/zero of=delme bs=102400 || rm -rf delme but that would exceed the file size limitations of FAT32. The first iteration of the filler was simply cat /dev/zero | perl splitter.pl fill but then I realized that there was a lot of actual reading going on, instead of just dumping zeros, so I changed some stuff.
In filler, I tried to pre-allocate the 2GB slice file and then fill it with zero to try to avoid even more fragmentation and FAT table manipulations. However, when I re-opened the file and then seeked to zero it would change the size back down - I didn't have time to research it further; if anyone has a good solution please let me know.
I've also run filler under Cygwin to fill another partition.
splitter.pl:
#!/usr/bin/perl -w
# This program splits incoming data into ~1GB chunks (for dumping a file
# on the fly to FAT32 partitions for example).
# Data is STDIN, and first argument is prefix of output (optional).
#
# To recombine the output, simply:
# cat FILE_* > /path/to/better/fs/OriginalFile

BEGIN {
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/");
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/i386-linux-thread-multi/");
}

use strict;
use Fcntl; # import sysread flags

binmode(STDIN);

use constant FULL_SIZE => (2*1024*1024*1024); # 2 GB

my $chunk_byte_count = FULL_SIZE+1; # Force an open on first output byte
my $chunk_file_count = 0; # Start at file 0
my ($read_count, $buffer);
my $blksize = 1024; # This might get overwritten later
my $prefix = $ARGV[0] || "FILE";

# The framework of this is from camel page 231
while ($read_count = sysread STDIN, $buffer, $blksize) {
    if (!defined $read_count) {
        next if $! =~ /^Interrupted/;
        die "System read error: $!\n";
    }
    # Decide if we need another file
    if ($chunk_byte_count >= FULL_SIZE) { # Need a new file
        close OUTFILE if $chunk_file_count;
        sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
            O_WRONLY | O_TRUNC | O_CREAT | O_BINARY
            or die "Could not open output file for write!\n";
        $blksize = (stat OUTFILE)[11] || 16384; # Get preferred block size
        # print STDERR "(New output file from $0 (blksize $blksize))\n";
        $chunk_byte_count = 0;
    } # New file
    my $wr_ptr = 0; # Pointer within buffer
    while ($read_count) { # This handles partial writes
        my $written = syswrite OUTFILE, $buffer, $read_count, $wr_ptr;
        die "System write error: $!\n" unless defined $written;
        $read_count -= $written;
        $wr_ptr += $written;
    } # Writing a chunk
    $chunk_byte_count += $wr_ptr;
    #print "(\$wr_ptr = $wr_ptr), (\$chunk_byte_count = $chunk_byte_count), (\$chunk_file_count = $chunk_file_count)\n";
} # Main read loop

# Report on it
print "Wrote out $chunk_file_count chunk files.\n";
filler.pl:
#!/usr/bin/perl -w
# This program fills a hard drive with 2GB files all NULL.
# (This makes compressed images of the hard drive smaller.)
# First argument is prefix of output (optional).
#
BEGIN {
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/");
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/i386-linux-thread-multi/");
}

use strict;
use Fcntl qw(:DEFAULT :seek); # import sysread flags

use constant FULL_SIZE => 2*(1024*1024*1024); # 2 GB

my $chunk_byte_count = FULL_SIZE+1; # Force an open on first output byte
my $chunk_file_count = 0; # Start at file 0
my ($read_count, $buffer);
my $blksize = 16384; # This might get overwritten later
my $prefix = $ARGV[0] || "FILL";
my $last_show = -1;

$| = 1; # always flush

# The framework of this is from camel page 231
$buffer = "\0" x $blksize;

# Without pre-alloc:
#real    1m20.860s
#user    0m10.155s
#sys     0m32.531s
# With pre-alloc:
#real    8m56.391s
#user    0m16.359s
#sys     1m11.921s
# Which makes NO sense, but hey, that's Cygwin... maybe because FAT32?
# Note: It was O_RDWR but switching to O_WRONLY didn't seem to help.
# However, maybe if Norton is disabled?

while (1) {
    # Decide if we need another file
    if ($chunk_byte_count >= FULL_SIZE) { # Need a new file
        close OUTFILE if $chunk_file_count;
        print STDERR "\rNew fill output file ($prefix)... \n";
        sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
            O_WRONLY | O_TRUNC | O_CREAT | O_BINARY | O_EXCL
            or die "Could not open output file for write!\n";
        # Pre-allocate the file
        # print STDERR "New fill output file ($prefix) pre-allocating, expect freeze... \n";
        # sysseek OUTFILE, FULL_SIZE-1, SEEK_SET;
        # syswrite OUTFILE, $buffer, 1, 0;
        # close OUTFILE;
        # print STDERR "\tdone, now blanking out the file.\n";
        # sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
        #     O_WRONLY | O_BINARY or die "Could not re-open output file for write!\n";
        # sysseek OUTFILE, 0, SEEK_SET; # This might just be ignored?
        # Done pre-allocating
        my $blk = $blksize;
        $blksize = (stat OUTFILE)[11] || 16384; # Get preferred block size
        if ($blksize != $blk) { # new block size, should only happen once
            $buffer = "\0" x $blksize;
        }
        $chunk_byte_count = 0;
        $last_show = -1;
    } # New file
    $read_count = $blksize;
    while ($read_count) { # This handles partial writes
        my $written = syswrite OUTFILE, $buffer, $read_count, 0;
        die "System write error: $!\n" unless defined $written;
        $read_count -= $written;
        $chunk_byte_count += $written;
    } # Writing a chunk
    # End of a chunk
    my $new_show = int ($chunk_byte_count/(1024*1024));
    if ($new_show > $last_show) {
        print STDERR "\r${new_show}MB";
        $last_show = $new_show;
    }
    # print "(\$chunk_byte_count = $chunk_byte_count), (\$chunk_file_count = $chunk_file_count)\n";
} # Main while loop

# Report on it [think it always crashes before this ;)]
print "\rWrote out $chunk_file_count chunk files.\n";
Offline Wikipedia
As seen on a million and one websites (/. et al), a smart geek put together some offline Wikipedia stuff. I had some problems on my work laptop (Fedora Core 7) because of the OS (PHP executable wrongly named) and because of the way I have it partitioned (on a FAT32 partition). Anyway, here's my email to the original poster (wiki-fied):
- Thanks, you're a geek hero.
- I had a few problems with relative paths, I had to edit the places that pointed at quickstart* executables.
- Fedora Core 7's "php5" executable is actually only named "php" - no big deal with a "ln -s /usr/bin/php /usr/bin/php5"
- My machine is dual-boot, and the only partition big enough was FAT32. Had some problems with too many split files. I threw together a perl script (I had done the split by hand before downloading the Makefile). It's pasted below.
Anyway, thanks again. Feel free to add any of this stuff to your page (like the FC7 notes). If you do, please don't include my email, just credit to RevRagnarok is fine.
- RevRagnarok
#!/usr/bin/perl -w
# This was written by RevRagnarok (I'm on Wikipedia)
# I was having problems with all the split files on a FAT32 partition. I assume
# it is because there were so many plus two entries for each (LFNs).
# This simply combines all the rec* files again into large chunks of N where
# I used 5, but you can set below with $combine.
# Verification info below.
# Lastly, I needed to modify the Makefile and remove the "split" from the
# "wikipedia" target.

use strict;

# Using: rec13778enwiki-20070802-pages-articles.xml.bz2
my $last = 13778;
my $lastd = 5; # How many digits in above (yes, I can compute this, but why?)
my $date = 20070802;
my $suffix = "enwiki-${date}-pages-articles.xml.bz2";
my $combine = 5; # This will combine every 5 into a group
# (If this number makes > 4 digit results, it will not sort nicely)
my $outputdir = '/data/wikipedia/'; # Don't make it the same place...

my $joinstr = '';
my $fcount = 0;

for (1 .. $last) {
    my $num = sprintf "%0${lastd}d", $_;
    $joinstr .= "rec${num}${suffix} ";
    if (($_ % $combine) == 0) {
        &catthem($joinstr, $fcount++);
        $joinstr = '';
    }
}
&catthem($joinstr, $fcount++) if ($joinstr ne '');
print "All done!\n";

sub catthem ($$) {
    my $ofile = sprintf "rec%04d.bz2", $_[1];
    `/bin/cat $_[0] >${outputdir}${ofile}`; # Lazy again, there are more Perl-ish ways.
    print ".";
}

__DATA__
To make sure they were all taken in, you can do this:
bash$ bzip2 -tvv *bz2 2>&1 | grep -v ok | grep -v bz2 | wc -l
13778
...which is equal to the number of start blocks, so I know nothing is missing now.
Download files on a very crappy connection (work PCs)
wget -Sc -T 10 (URL)

ex:
wget -Sc -T 10 ftp://ftp.symantec.com/public/english_us_canada/antivirus_definitions/symantec_antivirus_corp/20051107-019-x86.exe
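What the flags buy you, plus one more worth adding on a truly awful link (standard GNU wget, season to taste):

# -S     show server responses (so you can see why it keeps failing)
# -c     continue a partially-downloaded file instead of restarting
# -T 10  give up on a stalled connection after 10 seconds
# -t 0   retry forever instead of the default 20 tries
wget -Sc -T 10 -t 0 ftp://ftp.symantec.com/public/english_us_canada/antivirus_definitions/symantec_antivirus_corp/20051107-019-x86.exe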
The opposite of chmod
Well, I've been using Unix since 1993, and I just realized I don't know what the opposite of chmod is - how do I just show the permissions, not set them? lsmod is the Linux kernel module listing. So I used stat, which is likely overkill. But sadly that took me a while to think of.
Rsync over ssh for backing up some files
- Make a file with the ssh user and password called /etc/backupcredentials_ssh
- Make a directory on the backup server called /mnt/backup/this_hostname/
  - Make sure it is owned by backup_user

rsync -av --delete -e "ssh -l backup_user -i /etc/backupcredentials_ssh" /mnt/backup backup_server:/mnt/backup/this_hostname/
This way I was able to avoid using a samba mount between the two.
Some interesting httpd rewriting with perl
<VirtualHost *>
    ServerName svn.whatever
    ServerAlias svn
    <Perl>
#!/usr/bin/perl
my $svn_path = "/var/svn";
my $svn_location = "";
my $trac_path = "/var/trac";

opendir(SVN_ROOT, $svn_path) or die "Cannot open $svn_path";
while (my $name = readdir(SVN_ROOT)) {
    if ($name =~ /^[[:alnum:]]+$/) {
        $Location{"$svn_location/$name"} = {
            DAV => "svn",
            SVNPath => "$svn_path/$name",
            AuthType => "Basic",
            AuthName => "\"Subversion login\"",
            AuthUserFile => "$trac_path/access.user",
            AuthGroupFile => "$trac_path/access.group",
            Require => "group $name",
        };
    }
}
closedir(SVN_ROOT);
__END__
    </Perl>
</VirtualHost>
ntpd troubles
Moving server, having trouble. Found this. Seems to fix the Windows client, not the Linux one.
http://mail-index.netbsd.org/current-users/2004/01/16/0023.html
Subject: ntpd default change
To: None
From: Christos Zoulas
List: current-users
Date: 01/16/2004 17:56:40
Hello,
People who use windows ntp clients [and other unauthenticated clients] served by netbsd ntp servers will notice after upgrading ntpd to current their clients cannot sync anymore.
So that others don't spend time debugging this:
- Authentication in the new version of ntp is turned on by default; you'll have to turn it off. The option to do this has also changed. If you had "authenticate no" in your config file, you should change it to "disable auth".
- "restrict notrust" means don't trust unauthenticated packets, so remove notrust from your restrict line. This seemed to work fine before with "authenticate no".
Of course, you should only do this if you really need to. If your clients can authenticate, you should keep authentication on.
I hope this is useful,
christos
Using qmail in an RPM (or YUM) world...
I was trying to run 'yum update' on my webmail server, which runs the highly recommended qmailrocks.org distro. Well, to get it to stop trying to install exim / sendmail / postfix I was able to create a "fake_mta.spec" based on one I found online (but had to transcribe into a VM box):
# Use rpmbuild -bb to install this
Buildarch: noarch
Conflicts: sendmail, exim, postfix, qmail
Group: Productivity/Networking/Email/Servers
License: GPL
Name: fake_mta
Provides: smtp_daemon
Release: 1adm
Summary: Fake package to protect qmail's sendmail files.
Version: 1

%description
A fake package so that qmail won't get hosed, even tho RPM/YUM don't know about it.

%changelog
* Sun Sep 25 2005 Aaron D. Marasco
- Shamelessly stolen from SuSE security list (Chris Mahmood)

%files
/usr/sbin/sendmail
GreaseMonkey Rocks! :)
Finally playing with it (instead of sleeping)
Anyway, I like the auto-linkifier, but I changed one line to make it actually tell me it did the autolinkify.
Original: http://mozdev.sweetooth.org/greasemonkey/linkify.user.js
Changed line 39:
a.appendChild(document.createTextNode(match[0] + " (autolinked)"));
Mjolnir, Ghost, Backup, etc
NOTE TO EVERYBODY WHO IS NOT ME: You can ignore this post. It's just random notes that I need somewhere safe. However, as of Dec 2006, it no longer applies, I use a modified version of SaraB.
Ghost really doesn't like my combo of SATA, IDE, LVM, and whatnot. To get the IDE (hdb) backup copied to the SATA (sda) I had to:

1. GHOST /FDSZ /IAL image /boot from hdb (Drive 4) to an image.
2. Put a partition on sdb (good thing it was empty)
3. Re-run ghost, copy image to sdb (not sda2 where it should be!)
4. Boot from CentOS install DVD with kernel command line: linux rescue hdb=noprobe noexec=off noexec32=off (took ~2 hours to find out the last two stop grub 0.95 from segfaulting on an install!)
5. mount up /dev/sda2 (destination) somewhere. Go into that directory and wipe it clean.
6. From that directory: dump -0 -f - /dev/sdb1 | restore -rf -
7. Relabel sdb1 so we don't have two /boot partitions!: tune2fs -Ljunk /dev/sdb1
8. unmount /dev/sda2
9. chroot /mnt/sysimage
10. grub --no-floppy
    root (hd0,1)
    setup (hd0)  <== kept segfaulting here on SATA drive until noexec flags from above. [Don't use grub-install, that bombed too!]
OK, that's all the scraps of paper I can find. I really need to get a backup system running on this server before I migrate to it!
How to edit RAM disk images
I really want to read more about how this nash/lvm stuff works on my server…
Editing Ramdisks (initrds)
Following are the steps that enable one to edit a ramdisk for any changes:
gunzip -c /boot/initrd-<version>.img > initrd.img
mkdir tmpDir
mount -o loop initrd.img tmpDir/
cd tmpDir
- Make all necessary changes (copy over modules, edit linuxrc etc)
umount tmpDir
gzip -9c initrd.img > /boot/initrd-<version>.img
(stolen from http://openssi.org/cgi-bin/view?page=docs2/1.2/README.edit-ramdisk )
Follow-up
At some point, they stopped making them loopback-mountable images and now they are just a compressed cpio archive:
mkdir initrd
cd initrd/
gzip -dc /boot/initrd-2.6.23-0.104.rc3.fc8.img | cpio -id
The cpio is in the "new" format, so when recompressing, you need to use --format='newc'.
(stolen from http://fedoraproject.org/wiki/KernelCommonProblems )
My Kingdom for a Command Line!
I need to disable a drive in Linux (the BIOS hides it, but the kernel is 'too smart'). I searched high and low for "linux kernel command line" etc, etc.
Anyway, what I was looking for:
hdb=noprobe
From: http://linux.about.com/library/cmd/blcmdl7_bootparam.htm
One of my 3 Linux boxes has that man page. The other 2 DON'T. That, and it seems ancient but helpful nonetheless.
Ghost no likee LVM
Yet more problems with Ghost… LVM crashes it with some nasty error about sectors that were in the negative billions. Anyway, tell it to just blindly copy Linux partition sectors (which sucks):
ghost /FDSZ /IAL
Shutting down VMWare clients
In /etc/services on CLIENT:
# Local services
shutdown        6666/tcp
In /etc/inetd.conf on CLIENT:
shutdown stream tcp nowait root /sbin/shutnow
In /sbin/shutnow of CLIENT: (you can prolly get rid of this and move this all into inetd.conf above, but I used to do other things too…)
#!/bin/bash
/sbin/shutdown -h now
On the CLIENT's iptables rules, I have:
0 0 DROP tcp -- eth1 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6666
So nobody can reach that port from eth1 (internet). The host will be hitting the port on eth2 which is the host-only virtual network.
Then on the HOST in /etc/init.d/kill_vms (new file):
#!/bin/sh
#
# chkconfig: 4 90 08
# description: Kills VMs on shutdown/reboot

/usr/bin/telnet 192.168.90.201 6666 < /dev/zero
PIDS=fake
while [ "foo$PIDS" != "foo" ]
do {
	echo "Delaying shutdown... VMWare still on $PIDS"
	sleep 10
	PIDS=`pidof vmware-vmx`
}; done
So then on the server you install the "kill_vms" with chkconfig (fix the IP from 192.168.90.201 to your virtual client IP of course!).
It won't work the first time you reboot, sorry. If you 'touch' the file /var/lock/subsys/kill_vms (at least on my ancient RH based system) then it should. Also, it will hang forever if you don't have the virtual machine set to 'Close on shutdown' and I think maybe another option in VMWare about closing if all clients close.
GPart - Guess partitions
Friend has his partition table hosed. He's starting with this, and it seems to be working very well. Looks awesome, figured I would virtually jot it down…
http://www.stud.uni-hannover.de/user/76201/gpart/
http://www.stud.uni-hannover.de/user/76201/gpart/gpart-man.html
Follow-up: He was able to get back his data. He actually ended up using this: http://www.cgsecurity.org/index.html?testdisk.html
mydiff - INI style diff
Well, needed to compare two 300MB directories at work yesterday. Unfortunately, 'regular' diff just wasn't cutting it. A file would be declared different even if the only change was an INI-style section that had moved… Example:
File 1:
[a]
Setting1=a
Setting2=b
[b]
Setting3=c
Setting4=d

File 2:
[b]
Setting3=c
Setting4=d
[a]
Setting1=a
Setting2=b
Obviously, these two files are EFFECTIVELY the same, but diff will show the first as having the entire [a] section only, then [b] common, then file 2 only having… the same exact [a] section. So I whipped up a perl script to tell me that those two files are the same. This script may have problems and might not do what you want (it was quick and dirty) but it may help others (and me later, which is what this blog is more for anyway)… Looking at it this morning I can see a handful of places to easily condense it, but oh well… and if you care, these were Quartus project files and associated files (CSF, PSF, etc). Note: It fails when there is a < > or | in the text file. But it usually dumps so little you can eyeball it and decide if it is OK.
#!/usr/bin/perl -w
use Data::Dumper;

my $textdump;
my %lhash;
my %rhash;
my $debug = 0;

my $file = $ARGV[0];
# Some filenames have () in them that we need to escape:
$file =~ s/\(/\\(/g;
$file =~ s/\)/\\)/g;

open (INPUT, "diff -iEbwBrsty --suppress-common-lines Projects/$file Folder\\ for\\ Experimenting/Projects/$file|");

while (<INPUT>) {
    if ($_ =~ /Files .*differ$/) { # Binary files
        print "Binary file comparison - they differ.\n";
        exit;
    }
    if ($_ =~ /Files .*identical$/) {
        print "No diff!\n";
        exit;
    }
    my $a = 0;
    # For some reason chomp was giving me problems (cygwin, win2k)
    s/\n//g;
    s/\r//g;
    $_ =~ /^(.*)([<>\|])(.*)$/;
    my $left = $1;
    my $dir = $2;
    my $right = $3;
    $left =~ /^\s*(.*?)\s*$/;
    $left = $1;
    $right =~ /^\s*(.*?)\s*$/;
    $right = $1;
    # print "1: '$left'\n2: '$dir'\n3: '$right'\n";
    # OK, now we have all we wanted...
    if ($dir eq '<') { $lhash{$left}++; $a++; };
    if ($dir eq '>') { $rhash{$right}++; $a++; }
    if ($dir eq '|') { $lhash{$left}++; $rhash{$right}++; $a++; }
    print "Missed this: $left $dir $right\n" unless $a;
} # while
close(INPUT);

foreach (sort keys %lhash) {
    if (not exists $rhash{$_}) { # No Match...
        print "Only in left: '$_'\n";
    } else {
        if ($lhash{$_} != $rhash{$_}) {
            print "Left count not equal to Right, $_\n";
        }
    }
}
foreach (sort keys %rhash) {
    if (not exists $lhash{$_}) { # No Match...
        print "Only in right: '$_'\n";
    } else {
        if ($lhash{$_} != $rhash{$_}) {
            print "Left count not equal to Right, $_\n";
        }
    }
}

print Dumper(\%rhash) if $debug;
print Dumper(\%lhash) if $debug;
Print everything after the last occurrence
This may be long and convoluted but it is the first thing that came to mind and it worked.
Had a log file that would delimit with "As Of nn/nn/nnnn" which could be multimegabytes. Didn't feel like doing a perl solution that day, so:
grep -n 'As Of' sourcefile | tail -1 | awk -F":" '{print $1}' | xargs -r -iX awk 'FNR>=X' sourcefile > outfile
Again, likely an easier solution, but this was Q&D.
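For next time, a shorter pipeline that does the same thing (GNU tac reverses the file, sed quits after the first match, tac flips it back):

tac sourcefile | sed '/As Of/q' | tac > outfile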
More cpio tricks
Cleaning out my desk and came across these notes…
find /mnt/old_root -depth -print | cpio -odv | gzip -c -v -1 > /opt/bad_disk/old_root.cpio.gz

find -depth -print | cpio -odv > tempo.cpio
cpio -idvm < tempo.cpio
Neat trick:
tar cf - . | (cd /usr/local ; tar xvf - )
Peeking on a process
We have a problem at work with an ssh failing, but I don't have access to the source of the program running ssh. So I replaced ssh with this script, renaming the old ssh to ssh_orig. The extra flags on ssh were just me seeing if I can get the thing to work, you can ignore them:
#!/bin/bash
echo $* >> /tmp/ssh-$$-in
ssh_orig -C -2 -x $* | tee /tmp/ssh-$$-out
This creates a bunch of files in /tmp/ with the input command line string given to the script in the -in files and the output in the -out files.
Note, this will probably cause a security audit problem since you messed with ssh.
Do it until it works!
#!/bin/bash
# From LinuxGazette.com (Ben Okopnik)
# Rerun command line until successful
until $*; do sleep 1; done
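Drop it in a script (call it until.sh, say) and prepend it to whatever keeps failing:

./until.sh scp bigfile user@host:/tmp/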
cpio Cheat Sheet
http://www.intencorp.com/karmilow/share/howto-cpio.html
Bernie's abbreviated Solaris/Linux cpio How-To
1. Backing up files to a cpio file
cd to the directory you want to archive, and issue the command
solaris-$ find . -depth -print | cpio -ocBdum > filename.cpio
-or-
linux-$ find . -depth -print | cpio -o -H newc > filename.cpio
2. Restoring files from a cpio file
cd to the directory you want the archived files written to, and issue the command
solaris-$ cpio -icBdum < filename.cpio
-or-
linux-$ cpio -idum -H newc < filename.cpio
3. Backing up files to a cpio tape
cd to the directory you want to archive, and issue the command
solaris-$ find . -depth -print | cpio -ocBdum > /dev/rmt/0
-or-
linux-$ find . -depth -print | cpio -o -H newc > /dev/rmt0
4. Restoring files from a cpio tape
cd to the directory you want the archived files written to, and issue the command
solaris-$ cpio -icBdum < /dev/rmt/0
-or-
linux-$ cpio -idum -H newc < /dev/rmt0
5. Restoring a particular file from a cpio tape
cd to the directory you want the archived file (/etc/hosts in this example) written to, and issue the command
solaris-$ cpio -icBdum < /dev/rmt/0 "/etc/hosts"
-or-
linux-$ cpio -idum -H newc < /dev/rmt0 "/etc/hosts"
6. Some other local (Linux) examples
local out:
find etc -depth -print | cpio -o -H newc > cpios/etc.cpio
find include -depth -print | cpio -o -H newc > cpios/include.cpio
local in:
cpio -idum -H newc < /mnt/home/cpios/etc.cpio
cpio -idum -H newc < /mnt/home/cpios/include.cpio
7. Some network (Linux) examples
net out:
pull: remote cpio -> local archive
rsh -n remote_host "cd /remote_dir ; find remote_file -depth -print | cpio -o -H newc" > local_archive
push: local cpio -> remote archive
find local_file -depth -print | cpio -o -H newc -F remote_host:/remote_dir/remote_archive
net in:
pull: remote archive -> local cpio
cpio -idum -H newc -F remote_host:/remote_dir/remote_archive
rsh -n remote_host dd if=/remote_dir/remote_archive | cpio -idum -H newc
push: local archive -> remote cpio
dd if=/local_dir/local_archive | rsh -n remote_host "cd /remote_dir ; cpio -idum -H newc"
Makefile notes
Checking tabs:
  cat -v -t -e makefile

Macro substitution:
  SRCS = defs.c redraw.c calc.c ...
  ls ${SRCS:.c=.o}
  result: calc.o defs.o redraw.o
  Second string can be nothing too to truncate

Suffix Rule: default behavior for a suffix:
  .SUFFIXES : .o .c .s
  .c.o :
          $(CC) $(CFLAGS) -c $<
  .s.o :
          $(AS) $(ASFLAGS) -o $@ $<
  $< is what triggered (only valid in suffixes)

Forcing rebuilds:
  all :
          make enter testex "CFLAGS=${CFLAGS}" "FRC=${FRC}"
  enter : ${FRC}
          make ${ENTER_OBJS} "CFLAGS=${CFLAGS}" "FRC=${FRC}"
          ${CC} -o $@ ${ENTER_OBJS} ${LIBRARIES}
  testex : ${FRC}
          make ${TESTEX_OBJS} "CFLAGS=${CFLAGS}" "FRC=${FRC}"
          ${CC} -o $@ ${TESTEX_OBJS} ${LIBRARIES}
  force_rebuild:
          [nothing here]
  Then normal "make all" does normal. "make all FRC=force_rebuild" will do all

Debugging make files: Try "make -d"

Misc notes:
  A line starting with a hyphen ignores errors resulting from execution of that command

Macros:
  $? = List of prereqs that have changed
  $@ = Name of current target, except for libraries, which it is the lib name
  $$@ = Name of current target if used AFTER colon in dependency lines
  $< = Name of current prereq only in suffix rules.
  $* = The name (no suffix) of the current prereq that is newer. Only for suffixes.
  $% = The name of the corresponding .o file when the current target is a library

Macro Mods: (not all makes support)
  D = directory of any internal macro, ex: ${@D}
  F = File portion of any internal except $?

Special Targets:
  .DEFAULT : Executed if make cannot find any descriptions or suffix rules to build.
  .IGNORE : Ignore error codes, same as -i option.
  .PRECIOUS : Files for this target are NOT removed if make is aborted.
  .SILENT : Execute commands but do not echo, same as -s option.
  .SUFFIXES : See above.
smtproutes.pl
Many (stupid) mail servers are now assuming all cable modem users are SPAMmers, so more and more are refusing to accept my mail. Here's a script that I run to regenerate QMail's 'smtproutes' whenever I need to add new ISPs… Start:
#!/usr/bin/perl
open OUTFILE, ">smtproutes";
$s = ":smtp.comcast.net\n"; # Replace with your ISP's outbound server
foreach (<DATA>) {
    chomp;
    next if /^\w*$/;
    next if /#/;
    print OUTFILE "$_$s.$_$s";
}
__DATA__
aol.com
pipeline.com
earthlink.net
comcast.net
ix.netcom.com
netcom.com
hut.fi
t-3.cc
earthengineering.com
usa.com
#CS is old compuserv, now AOL
cs.com
stanfordalumni.org
erasableinc.org
sbcglobal.net
hp.com
abs.net
juno.com
sourcenw.com
yahoogroups.com
msn.com
Using bBlog on your own server.
The installation program asks for your MySQL database name and password. I couldn't get it to work by myself, because I run my own server, so no admin just handed me the info. If you're in the same boat, here's all you need to do:
/usr/local/mysql/bin/mysql -p
Enter password: mypassword
mysql> CREATE database blog;
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER,INDEX
    -> ON blog.*
    -> TO blog@localhost
    -> IDENTIFIED BY 'myblogpassword';
Obviously, change the database name and password ('blog' and 'myblogpassword' above) to your own. Then in bBlog's setup use the user and database name of 'blog' with the password set above.
ipchains to iptables quickie
Here's my mini write-up for people to lazy too read the HOWTOs. ;)
See also: http://netfilter.kernelnotes.org/unreliable-guides/index.html
The attached file goes in your startup stuff.
In-line comments for ya to understand this stuff, I hope... the annotations after "<--" aren't in the real file. If you have any Q's lemme know. LEARN BY DOING. ;)
<lotsa snips in here>
Do a "iptables --help" to see the quick commands of P, F, N, I, A, X, and D...
# Set policies
./iptables -P INPUT ACCEPT
    (I allow anything not specifically blocked thru the firewall... oh well)
./iptables -P FORWARD ACCEPT
./iptables -P OUTPUT ACCEPT
    (Jeff claims to block everything he doesn't accept, and then accepts 1024 and above.... 6 of one, 1/2 dozen the other ;)
# User defined tables
# Shared by INPUT and FORWARD
./iptables -N Protect
    ("N" means new chain creation. But in case we run this multiple times....)
./iptables -F Protect
    ("F" flushes the chain if it already existed)
#
# Now set up the INPUT chain
    (There are three default chains: INPUT, FORWARD, OUTPUT.)
*** UNLIKE 2.2/IPCHAINS, the INPUT chain only sees packets destined for the firewall itself (and OUTPUT only packets the firewall generates). FORWARD means the packet is gonna be FORWARDed (duh). ***
#
./iptables -A INPUT -j Protect # Spoofs, etc
    (Everything coming IN goes thru Protect)
./iptables -A INPUT -p tcp --dport 20:21 -j In_FTP # FTP in
    (TCP w/ destination ports 20-21 goes thru In_FTP)
./iptables -A INPUT -p tcp -j In_Mail # Mail in (port can be 25 or 110)
    (ANY TCP packet goes to In_Mail)
./iptables -A INPUT -p udp --dport 123 -j In_TimeSrv # Time Servers
    (UDP with destination port 123 goes thru In_TimeSrv)
./iptables -A INPUT -j In_New # Any new extIF connections not specified above are blocked (telnet, ssh, etc)
    (Everything else gets checked by In_New)
#
# The FORWARD chain
#
./iptables -A FORWARD -j Protect # Spoofs, etc
    (Everything FORWARDed goes thru Protect also - this is why Protect is separate from the others)
#
# The Protect chains
#
./iptables -A Protect -j Protect_Hackers
    (All go here...)
./iptables -A Protect -i $extIF -p udp --sport 53 -j Protect_DNSu
    (UDP source port 53 coming IN ppp+ (any ppp) - Bill would put eth2 or whatever his cable modem is set to)
*** UNLIKE 2.2/ipchains *** -i means INPUT interface, NOT 'INTERFACE'. -o means OUTPUT interface now. -i can only match in the INPUT and FORWARD chains; -o can only match in the FORWARD and OUTPUT chains...
./iptables -A Protect -p icmp -j Protect_ICMP
    (ICMP packets go to Protect_ICMP)
#
These next ones get complicated. "-d" is the DESTINATION IP. "-m limit" limits the number of matches of a rule (check the HOWTO for more info); here it stops things at one log entry per second. The "--log-prefix" is required for fireparse 2.0. The "Hackers" part tells me what chain matched, and the ":1" says what rule number matched. **NOTE** that you need TWO rules to LOG and then do something!!! (I am not happy with that.) Oh yeah, the a= part is for fireparse too... it tells what the action was.
./iptables -A Protect_Hackers -d 204.116.1.232 -m limit --limit 1/s -j LOG --log-prefix "fp=Hackers:1 a=DROP "
./iptables -A Protect_Hackers -d 204.116.1.232 -j DROP
    (DROP the packet, vs. ACCEPT, REJECT, LOG, RETURN)
    [RETURN = fall off the end of the chain. New to 2.4/IPTables. YAY!!!]
./iptables -A Protect_Hackers -s 204.116.1.232 -j DROP
    (-s is the source IP)
These next lines are just a little combo adding the input interface:
./iptables -A Protect_Spoofs -s 192.168.0.0/255.255.0.0 -i $extIF -m limit --limit 1/s -j LOG --log-prefix "fp=Spoofs:3 a=DROP "
./iptables -A Protect_Spoofs -s 192.168.0.0/255.255.0.0 -i $extIF -j DROP
NOTE this next line! The new system combines NAT and packet filtering - by the time the filter sees the packet, it HAS ALREADY BEEN MASQ'D BACK - meaning the destination can EASILY be the internal address of your other machines!!!
# Destination of 192.168.x.x is NOT a spoof because packet filter sees MASQ answers coming back with that!
Just showing that you can do subnetting on the matches (some above too):
./iptables -A Protect_DNSu -s 151.196.0.38/255.255.255.254 -j ACCEPT
This line logs DNS that came thru but didn't come from my "normal" DNS sources. Note there is no terminating action, so the packet falls off the end of the chain and goes back to where it started (the INPUT or FORWARD chain):
./iptables -A Protect_DNSu -m limit --limit 1/s -j LOG --log-prefix "fp=DNS:1 a=ACCEPT "
Just like TCP/UDP have ports, ICMP has types.... numeric or words:
./iptables -A Protect_ICMP -p icmp --icmp-type 5 -i $extIF -m limit --limit 1/s -j LOG --log-prefix "fp=ICMP:1 a=DROP "
./iptables -A Protect_ICMP -p icmp --icmp-type 5 -i $extIF -j DROP
./iptables -A Protect_ICMP -p icmp --icmp-type echo-request -m limit --limit 2/s -j ACCEPT # Stop ping floods
./iptables -A Protect_ICMP -p icmp --icmp-type echo-request -m limit --limit 1/s -j LOG --log-prefix "fp=ICMP:2 a=DROP "
./iptables -A Protect_ICMP -p icmp --icmp-type echo-request -j DROP
These are for future use (I may open FTP some day)... states can be NEW, INVALID, RELATED, ESTABLISHED. This stops any NEW or invalid connections (note I don't waste processor time checking the protocol or port, since that was already done to get here!!!). Note that FTPs from my internal network will be let thru:
./iptables -A In_FTP -i $extIF -m state --state NEW,INVALID -m limit --limit 1/s -j LOG --log-prefix "fp=In_FTP:1 a=DROP "
./iptables -A In_FTP -i $extIF -m state --state NEW,INVALID -j DROP
Some day I may do POP3 (port 110) so I have my 'mail' rule handle 25 and 110:
./iptables -A In_Mail -p tcp --dport 25 -i $extIF -j ACCEPT
./iptables -A In_Mail -p tcp --dport 110 -i $extIF -m limit --limit 1/s -j LOG --log-prefix "fp=In_Mail:1 a=DROP "
./iptables -A In_Mail -p tcp --dport 110 -i $extIF -j DROP
This stops any NEW connections from ppp+ to ports 0 to 1023 (the classical Unix "reserved" ports) - combo of state, limit, LOG:
./iptables -A In_New -i $extIF -p tcp --dport 0:1023 -m state --state NEW,INVALID -m limit --limit 1/s -j LOG --log-prefix "fp=In_New:1 a=DROP "
./iptables -A In_New -i $extIF -p tcp --dport 0:1023 -m state --state NEW,INVALID -j DROP
Now comes Part II - NAT:
# Just masq everything outbound
IPTables is extensible. One extension is NAT - "-t nat" says to load the NAT table. It must be FIRST on the line. For NAT, there are a few internal chains, the most important being PREROUTING and POSTROUTING (entering and leaving the machine). MASQUERADE means SNAT - Source Network Address Translation - what we want to do to hide a network behind a single IP for outbound data. Note the use of "-o" vs. the "-i" above. iptables actually has primitive load balancing for both SNAT and DNAT...
./iptables -t nat -A POSTROUTING -o $extIF -j MASQUERADE
# Set some hooks for the port forwarding scripts
./iptables -t nat -N PortFW
Seems odd, but I made a chain called PortFW. That way my firewall setup scripts can just wipe it without worrying about other things that may be in the PREROUTING chain.
./iptables -t nat -A PREROUTING -i $extIF -j PortFW
The "PortFW" chain is "DNAT" - Destination NAT - we hide from the internet the DESTINATION of the packet. AKA "Port Forwarding" in its simplest form. Again, this also allows load balancing if we wanted to run web server clusters. I will give you those other scripts some other time.
echo 1 > /proc/sys/net/ipv4/ip_forward
    (Turns on kernel packet forwarding)
echo "."
# Make sure we get called to stop later
touch /var/lock/subsys/packetfilter
    (The shutdown script "/etc/rc.d/rc" sees this file and tells us to "stop")
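For completeness: the port forwarding scripts just append DNAT rules to that PortFW hook. A minimal sketch of one (the internal web server IP here is made up for illustration):

# wipe just my hook chain, leaving the rest of PREROUTING alone
./iptables -t nat -F PortFW
# send inbound HTTP on the external interface to an internal box
./iptables -t nat -A PortFW -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:80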
Unix commands you shouldn't do at 3 AM....
…well, ever.
rm -rf .*
rpm -qa | xargs rpm --erase
I've done them both and felt the pain. I did the first on a 2TB machine. In 1996, when 2TB was a lot more impressive.
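If you're wondering why the first one hurts so much: on many shells, .* also matches .. - so the rm happily walks up and out of the directory you thought you were cleaning. Easy to verify without destroying anything:

mkdir -p /tmp/dotglob_demo/.hidden && cd /tmp/dotglob_demo
echo .*     # typically prints: . .. .hidden - note the ".." in there!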
Strange Compression Comparisons
Well, if you're desperate to do bzip2 under Windows, or pretty much any other cool GNU thing (find, grep, less, wget, etc), you can download them at http://gnuwin32.sourceforge.net/packages.html
C:\Documents and Settings\me>bzip2 --version
bzip2, a block-sorting file compressor.  Version 1.0.1, 23-June-2000.
- adm
Aaron D. Marasco wrote:
> OK, a quick test. I just got a PowerPoint presentation. I am not going to mess with dictionary sizes or anything, leaving those default.
>
> PPT: 1,440,768 bytes (Original file)
> ZIP: 1,311,093 (Dunno what did it, I received it this way)
> RAR: 1,303,276 (RAR 3.20 beta 4, which does the 'new' RAR compression, default setting)
> RAR: 1,303,241 (Same version, told it MAX compress "m5" command line)
> ACE: 1,305,286 (2.0 compression, normal)
> ACE: 1,309,770 (1.0 compression, normal)
> ACE: 1,305,274 (2.0 compression, max)
> GZ: 1,311,109 (Created by WinACE 2.5 max compression)
> LZH: 1,440,901 (Created by WinACE 2.5 max compression) <-- this is BIGGER. This surprises me and tells me that PPT may already be compressed?
> .TAR.GZ: 1,311,614 (Created by WinACE 2.5 max compression)
> CAB: 1,304,092 (Created by WinACE 2.5 max compression)
> ZIP: 1,310,299 (Created by WinACE 2.5 max compression)
> JAR: 1,310,299 (Created by WinACE 2.5 max compression -- I think .JAR are just renamed .ZIP anyway)
> BZ2: 1,337,976 (bzip2 Version 1.0.2 - I couldn't see a command line to change compression)
> GZ: 1,310,209 (gzip -9, gzip 1.3 [1999-12-21]) <-- I've never seen GZIP be smaller than BZ2?!?!?
>
> And now sorted:
> [root@neuvreidaghey shared]# sort -t' ' +1 tempo
> RAR: 1,303,241 (Same version, told it MAX compress "m5" command line)
> RAR: 1,303,276 (RAR 3.20 beta 4, which does the 'new' RAR compression, default setting)
> CAB: 1,304,092 (Created by WinACE 2.5 max compression)
> ACE: 1,305,274 (2.0 compression, max)
> ACE: 1,305,286 (2.0 compression, normal)
> ACE: 1,309,770 (1.0 compression, normal)
> GZ: 1,310,209 (gzip -9, gzip 1.3 [1999-12-21]) <-- I've never seen GZIP be smaller than BZ2?!?!?
> ZIP: 1,310,299 (Created by WinACE 2.5 max compression)
> JAR: 1,310,299 (Created by WinACE 2.5 max compression -- I think .JAR are just renamed .ZIP anyway)
> ZIP: 1,311,093 (Dunno what did it, I received it this way)
> GZ: 1,311,109 (Created by WinACE 2.5 max compression)
> .TAR.GZ: 1,311,614 (Created by WinACE 2.5 max compression)
> BZ2: 1,337,976 (bzip2 Version 1.0.2 - I couldn't see a command line to change compression)
> PPT: 1,440,768 bytes (Original file)
> LZH: 1,440,901 (Created by WinACE 2.5 max compression) <-- this is BIGGER. This surprises me and tells me that PPT may already be compressed?
I think these are slightly skewed, but RAR just edged out ACE. Again, I think this is recompression of already-compressed data - I would doubt that MS-CAB would normally beat ACE; this is not a directory of plaintext. You can even see that ACE can make GZip-compatible archives, but it was slightly larger than GZip itself. And ACE also made a smaller ZIP file than what I assume was WinZip.
And since I already bought WinACE, it's good enough.
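If you want to rerun this kind of shoot-out on your own file with just the GNU tools (the file name is a placeholder - and for the record, bzip2 does take -1 thru -9; it sets the block size):

f=presentation.ppt                          # substitute your own file
gzip  -9 -c "$f" > "$f.gz"
bzip2 -9 -c "$f" > "$f.bz2"
ls -l "$f" "$f.gz" "$f.bz2" | sort -n -k5   # sorted by size, smallest first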
binfmt_misc - Teach the Linux kernel to do neat stuff
Stolen from http://www.tat.physik.uni-tuebingen.de/~rguenth/linux/binfmt_misc.html
This is binfmt_misc - the generic 'wrapper'-binary handler!
WARNING: if you use recent kernel versions from Alan Cox (2.4.2acXX and later) or versions 2.4.13 and up you need to mount binfmt_misc using
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc
Please don't ask me for reasons - ask Alexander Viro, who did this during the stable series.
Abstract
Binfmt_misc provides the ability to register additional binary formats with the kernel without compiling an extra module or kernel. To do that, binfmt_misc needs to know a magic number at the beginning of the file or the filename extension of the binary.
You can get a patch to include binfmt_misc in your kernel here. There is a patch for 2.0.30, a patch for 2.0.33/34 and a patch for 2.0.35. Binfmt_misc has been integrated since kernel 2.1.43, so there is no need for a patch there; just upgrade to the latest (stable) 2.1.xx kernel.
Read Documentation/binfmt_misc.txt and Documentation/java.txt for more information on how to use binfmt_misc (or continue reading this page).
The 'magic' behind binfmt_misc
binfmt_misc works as follows:
- it maintains a linked list of structs that contain a description of a binary format, including the magic with size (or the filename extension), offset and mask, and the interpreter name.
- on request it invokes the given interpreter with the original program as argument, as binfmt_java and binfmt_em86 and binfmt_mz do.
- binfmt_misc does not define any default binary-formats, you have to register an additional binary-format via the /proc interface (see below).
The /proc interface of binfmt_misc
You can find the following binfmt_misc related files/directories below /proc/sys/fs/binfmt_misc:
- register
To register a new binary format, do an echo :name:type:offset:magic:mask:interpreter: > register with an appropriate name (the name for the /proc-dir entry), offset (defaults to 0 if omitted), magic and mask (the mask can be omitted and defaults to all 0xff), and last but not least the interpreter that is to be invoked (for example and testing, '/bin/echo'). Type can be 'M' for usual magic matching or 'E' for filename extension matching (give the extension in place of the magic). A quick throwaway test is sketched after this list.
- status
If you do a cat status you will get the current status (enabled/disabled) of binfmt_misc. Change the status by echoing 0 (disables), 1 (enables) or -1 (caution: this clears all previously registered binary formats) to status, e.g. echo 0 > status to disable binfmt_misc (temporarily).
- name (where name is the name you gave to register)
This file does exactly the same thing as status, except its scope is limited to this one binary format. By cat'ing this file you also receive information about the interpreter/magic, etc. of the binfmt.
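Here's the throwaway test mentioned above, using /bin/echo as the interpreter and a made-up 'blahtest' format name:

cd /proc/sys/fs/binfmt_misc
echo ':blahtest:E::blah::/bin/echo:' > register
touch /tmp/foo.blah && chmod +x /tmp/foo.blah
/tmp/foo.blah          # the kernel runs /bin/echo /tmp/foo.blah, printing the path
cat blahtest           # shows enabled state, interpreter, and extension
echo -1 > blahtest     # remove the test entry again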
Example usage of binfmt_misc (emulate binfmt_java):
cd /proc/sys/fs/binfmt_misc
echo ':Java:M::\xca\xfe\xba\xbe::/usr/local/java/bin/javawrapper:' > register
echo ':HTML:E::html::/usr/local/java/bin/appletviewer:' > register
echo ':Applet:M::<!--applet::/usr/local/java/bin/appletviewer:' > register
echo ':DEXE:M::\x0eDEX::/usr/bin/dosexec:' > register
These lines add support for Java executables, Java applets, and DOS executables (like binfmt_java, additionally recognising the .html extension, with no need to put '<!--applet' in every applet file). You have to install the JDK and the shell script /usr/local/java/bin/javawrapper, too. It works around the brokenness of the Java filename handling. To add a Java binary, just make a link to the .class file somewhere in the path.
For full-featured wrapping of deeply nested class files you will have to use the wrapper script created by Colin Watson (cjw44@…), /usr/local/java/bin/javawrapper, and the additionally needed little C program javaclassname.c (just compile it and stick it in /usr/local/java/bin/). This C/script combination handles nested classes properly by looking up the fully-qualified classname from the class file.
Configuration of binfmt_misc ideally takes place in one of your init scripts (see the init manual to find out where they reside, usually /etc/rc.d/). This is my personal binfmt_misc configuration script; it gets called from /etc/rc/boot.local.
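A minimal version of such a script might look like this (using the mount line and register strings from above; nothing here is specific to my box except the Java paths):

#!/bin/sh
# binfmt_misc setup - called from a boot script like boot.local
mount -t binfmt_misc none /proc/sys/fs/binfmt_misc 2>/dev/null
cd /proc/sys/fs/binfmt_misc || exit 1
echo ':Java:M::\xca\xfe\xba\xbe::/usr/local/java/bin/javawrapper:' > register
echo ':HTML:E::html::/usr/local/java/bin/appletviewer:' > register
echo ':Applet:M::<!--applet::/usr/local/java/bin/appletviewer:' > register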
Wireless Performance Stats
I cannot get this thing to connect with encryption on using the SMC software, so I cannot turn this 'Nitro' thing on or off…
http://www.stanford.edu/~preese/netspeed/ was used to test. This program ROCKS!!! Server: Linux; client: WinXP Pro. 30 seconds per test, 8KB window (default). You can see there is a push and a pull test.
Server: iperf --format K --port 999 -s
Client: iperf -c neuvreidaghey -r -t 30 -p 999 --format K

802.11g, with encryption on.
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[1892] local 192.168.1.169 port 2212 connected with 192.168.1.251 port 999
[1892]  0.0-30.0 sec  56408 KBytes  1880 KBytes/sec
[1868] local 192.168.1.169 port 999 connected with 192.168.1.251 port 32785
[1868]  0.0-30.0 sec  60832 KBytes  2026 KBytes/sec

802.11g, with encryption off.
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[1888] local 192.168.1.169 port 2318 connected with 192.168.1.251 port 999
[1888]  0.0-30.0 sec  70120 KBytes  2337 KBytes/sec
[1868] local 192.168.1.169 port 999 connected with 192.168.1.251 port 32787
[1868]  0.0-30.0 sec  81504 KBytes  2716 KBytes/sec
So I am getting 15-21 Mbps of 54 theoretical - THAT SUCKS!!! [27-38% efficiency]
802.11b, encryption off.
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[1888] local 192.168.1.169 port 2353 connected with 192.168.1.251 port 999
[1888]  0.0-30.0 sec  14176 KBytes   472 KBytes/sec
[1868] local 192.168.1.169 port 999 connected with 192.168.1.251 port 32788
[1868]  0.0-30.0 sec  12640 KBytes   421 KBytes/sec
So that is 3.3-3.7Mbps of 11 theoretical - guess that G ain't so bad - it IS about 5x faster! [30-34%]
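(The conversion is just KBytes/sec times 8, divided by 1024; e.g. for the best G run:)

echo '2716 * 8 / 1024' | bc -l     # ~21.2 Mbps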
Just for shits, I switched the G router into B/G mixed mode… 802.11g (with b compat) [no encryption] gave results about the same as G-only. I tried putting both NICs in the laptop at once, and things just got ugly. Guess with only one laptop I cannot get the B to interfere enough with the G… I will try that Nitro stuff again… maybe tomorrow, maybe this weekend.
Don't put a WD drive in a RAID system
If it's between 40GB and 120GB… http://wdc.custhelp.com/cgi-bin/wdc.cfg/php/enduser/std_adp.php?p_admin=1&p_faqid=913&p_created=1047068027
Ghost, ext2 and ext3, convert ext3 -> ext2
- Ghost 7 doesn't do ext3. I had a hint of this moving from 30G to 40G but it is now verified. It makes fsck.ext3 core dump. Not a pretty sight. I was very afraid that I lost my firewall forever.
- You can fool 'mount' but not 'fsck'. You can do 'mount -t ext2' to mount ext3 as ext2. If you do 'fsck -t ext2' on corrupted ext3, it will still crash knowing it is ext3.
- 'tune2fs -O ^has_journal /dev/hda7' will convert an ext3 partition (eg hda7) to ext2. Since Ghost copies ext3 as ext2 (which corrupts the special journal) this is required.
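Going the other way is just as quick, should you want the journal back after Ghost is done - this is plain tune2fs, nothing Ghost-specific:

tune2fs -j /dev/hda7     # add a journal: the ext2 partition becomes ext3 again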
Ghost and ext3
The bad news: I am about to try to resize an ext3 partition, and Ghost hosed that job up so badly that fsck actually SegFault'd!
http://www.gnu.org/software/parted/parted.html
http://www.gnu.org/manual/parted-1.6.1/html_mono/parted.html
Stupid Linux trick - virtual partition
Stupid Linux trick number ###? - Making a virtual partition. Kinda the opposite of what I want to do.
mkdir /home/foo
dd if=/dev/zero of=/home/foo/image bs=1024 count=10240
mkfs.ext3 -F /home/foo/image
mount -t ext3 /home/foo/image /home/foo -o loop
chown foo /home/foo
This creates a 10MB home 'directory' for the user named foo. No way for them to get bigger (well, /tmp but that is beyond the scope of this example).
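To make it survive a reboot, a normal fstab entry with the loop option does the trick (same paths as above):

echo '/home/foo/image /home/foo ext3 loop 0 0' >> /etc/fstab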
Samba WinXP Pro Mapped Drive Problem
You can see my original ordeal here.
Problem:
I am sure everyone is sick of hearing Samba questions, but this one is driving me nuts. I have had a samba share set up for almost two years, and then I upgraded my desktop to WinXP Pro…
Some background:
Running Samba 2.2.7
I am not doing anything fancy on samba - no PDC or anything. Plain ol' Workgroup. Example section:
[shared]
    comment = Shared Files
    path = /share/shared
    public = yes
    writable = yes
    write list = me, acekitty
    printable = no
    force user = me
    force group = me
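(For the record, the share itself checks out fine from the Linux side - testparm and smbclient are the stock Samba tools:)

testparm -s /etc/samba/smb.conf       # syntax-check the config
smbclient //localhost/shared -U me    # try the share locally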
I have this share mapped to my drive S on the WinXP machine. For some reason, the first time I double-click the drive in Explorer (after 'n' minutes of not accessing it), Explorer just hangs for like 2 minutes. Then it comes back like nothing was wrong! No errors on the server side. I have done the "Scheduled Tasks" possible-problem fix I have seen many places:
When you connect to another computer, Windows checks for any Scheduled tasks on that computer. This can take up to 30 seconds. Open regedit and browse to key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Explorer\RemoteComputer\NameSpace
Delete the {D6277990-4C6A-11CF-8D87-00AA0060F5BF} sub-key and reboot.
So, are you ready for the weirdest part of this problem? I am also running WinXP Pro on my laptop and it has NEVER had this problem from day 1!?!?
Absolutely any help would be greatly appreciated.
- RR
Solution:
After tons and tons of research I learned that Samba was NOT the problem!
It was just plain ol' Windows being stupid. First, I found this page: here. I am linking it because if anyone else ever stumbles upon this post it may fix their problem, and it shows some control panel that there seems to be no other way to reach!?!?
Next I found (part of) my solution here.
It seems that the service "WebClient" is what is screwing me over. However… combining these two solutions STILL didn't quite work for me… every time I booted up I had to re-enter my share passwords!
Which brings me to why the problem is so original (IMO). I have only one user on this machine, with no password. So WinXP was automagically booting into that account. To get the mounted share to work, I had to do the following fun:
- Run 'control userpasswords2' and wipe all remembered passwords.
- Add a password to my WinXP account (whose name has always matched the samba server account).
- Use TweakUI to automatically log the user in on bootup.
- Disable the 'WebClient' service.
- Reboot
I tried various permutations, and this was the only one that worked, so all the steps seemed to be required. This does not explain why my laptop has worked fine from day one. They both have the same NetBIOS settings, host files, mapped drives, etc, etc…
Thanx for trying! I now have my M, P, S, and W drives back!
- RR
How to fix /dev/null if it is broken
mknod /dev/null c 1 3
chmod a+w /dev/null
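(c 1 3 = character device, major 1, minor 3. A quick check that it's sane again:)

ls -l /dev/null                  # should show: crw-rw-rw- ... 1, 3
echo test > /dev/null && echo ok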
Move MySQL database from one machine to another
I just moved my ENTIRE MySQL database from one machine to another using netcat!
On the original firewall:
./mysqldump -A -a --opt -p | bzip2 | nc -w 3 192.168.1.55 2000
And then on the new virtual one:
nc -l -p 2000 | bunzip2 | /usr/local/mysql/bin/mysql -p
So, on-the-fly compression and everything!
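To double-check everything landed, list the databases on the new box (same install path as above):

/usr/local/mysql/bin/mysqlshow -p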