Don't end filenames or directories with spaces
Yeah… my Ruby script that I posted before made a bunch of directories with trailing spaces. Then, once I fixed the bug, I had directories both with and without the trailing spaces. Bad things happen… for example:
Cannot delete file: Cannot read from the source file or disk.
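If you end up in the same boat, something like this bash sketch can find and rename the offenders (a rough sketch, not the exact cleanup I ran; preview it with echo before trusting it):

find . -depth -type d -name '* ' | while IFS= read -r dir; do
    # Strip trailing spaces; note mv will merge INTO an existing trimmed twin
    mv "$dir" "$(printf '%s' "$dir" | sed 's/ *$//')"
done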
PAR2 file mover
This is my first Ruby script… please feel free to contact me with feedback.
It reads a PAR2 parity file and moves all the files listed in it into a subdirectory. The original thing I asked for:
I know you can currently download into subdirectories based on ID3 tags. I would love to be able to have downloads go into subdirectories based on the PAR2 file that "covers" them.
Example: all files downloaded overnight into a huge directory:
AAA.PAR2
AAA 01-16.mp3
AAA 02-16.mp3
...
AAA 16-16.mp3
AAA.nfo
AnnoyinglyNamedNothingLikeAAAButInThePARFileAnyway.nfo
BBB.PAR2
BBB 01-92.mp3
BBB 02-92.mp3
...
BBB 92-92.mp3
BBB.nfo
...
XXY.PAR2
XXY.mp3
XXY.txt
So I would want them moved into the subdirectories "AAA", "BBB", "XXY", etc. It wouldn't be perfect but it would be a great start for what I do to process my inbound stuff.
If not, how about dumping some log file I can parse in perl or ruby that gives me "AAA.LOG" which lists the files that went with AAA.PAR2 ?
Of course, being a borderline-OCD engineer, I didn't just solve the problem; I also put together a full object-oriented infrastructure to parse and process PAR2 files. I've only handled one of the packet types defined in the PAR2 specification, but I put in place a Par2BasePacket with an interface that all the other packet types can be derived from.
Without further delay, see the attached code. (Sorry, my install doesn't support syntax highlighting of Ruby and I don't have time now, but you can paste the code into this online highlighter.)
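If you want to poke at the format by hand before reading the code: per the PAR2 spec, every packet starts with a fixed 64-byte header. A rough peek with xxd (the file name is taken from the example above):

# Bytes 0-7: magic "PAR2\0PKT"; 8-15: packet length; 16-31: packet MD5;
# 32-47: recovery set ID; 48-63: packet type (e.g. "PAR 2.0\0FileDesc")
xxd -l 8 AAA.PAR2
xxd -s 48 -l 16 AAA.PAR2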
Extra Secure Remote Desktop
I have been using Microsoft's Remote Desktop for a few years now (pretty much since I switched to XP). It's decent, and unlike TightVNC, which is what I had used before, it retains your session between connections. By using an SSH tunnel, I feel confident that my connection is secure, and I am willing to visit my banks, etc. from my desktop machine via my laptop on a wireless network. Here's what I did:
On the server (in this case my desktop):
- Installed the Cygwin build of sshd as a service. Many guides are available; just search Google for keywords like "cygwin sshd". But in a nutshell:
  - Install Cygwin (duh) - include openssh (under Net) and cygrunsrv (under Admin)
  - Set up sshd: "ssh-host-config -y"
  - Start it as a service: "cygrunsrv -S sshd"
- Allow Remote Desktop usage:
- Right click on "My Computer" and choose "Properties"
- Go to the "Remote" tab
- Make sure that "Allow users to connect remotely to this computer." is checked
On the client (in this case my laptop):
- You need a way to set up an SSH tunnel. You can use Cygwin's ssh, PuTTY, or my favorite, MyEnTunnel. I had previously selected MyEnTunnel for some work-related stuff, so I was able to simply add a "Profile".
- The key is that you want a tunnel from your machine's port 3390 (or any other free port on your local machine of your choosing) to the remote (server) machine's port 3389. With MyEnTunnel, it is that simple.
- Once MyEnTunnel has established a connection (a green padlock), you can connect: Start → Run → "mstsc", and tell it to connect to "localhost:3390".
- As you can see, mstsc has no idea what machine you are connecting to. As far as it knows, you are connecting to localhost, which is your own machine; MyEnTunnel then encrypts the traffic (using PuTTY's plink) and sends it to the remote machine's sshd daemon, which forwards it to Windows XP listening for the Remote Desktop connection:

(Client: outgoing port) -> (Client:3390) -> [encrypted SSH link] -> (Server:22) -> (Server:3389)
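For the curious, here is roughly the tunnel MyEnTunnel builds under the hood, done by hand (the user and host names are placeholders):

# PuTTY's plink: -N = no remote shell, just forward ports;
# local port 3390 forwards to the server's RDP port 3389
plink -N -L 3390:localhost:3389 user@my-desktop

# Same thing with Cygwin's OpenSSH client
ssh -N -L 3390:localhost:3389 user@my-desktop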
Compressing Subversion checkouts
I'll let the file intro explain itself:
# "Disk space is cheap!" - SVN Authors # "Except when backing it up!" - RevRagnarok # This script will drastically reduce the size of an svn checkout so you can back up the folder. # It will include a script to re-expand the directory with NO interaction with the server. # It will also (optionally) write a script /tmp/svn_uncompress.sh that will restore all compressed # folders.
Something to check out in the future: http://scord.sourceforge.net/
Update 3 Apr 08: Updated the script to touch a file saying it ran so it won't run again. Also have it dump a tiny "readme" file to let somebody else know what is going on.
Update 4 Apr 08: Fixed bug with deep paths.
Update 22 Aug 08: Huge changes, now using "proper" native calls.
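The attached script isn't reproduced here, but the naive version of the idea is easy to sketch in bash (this is just the concept, not the "proper" native-call version the script now uses; the checkout path is hypothetical):

# Archive each .svn administrative directory, remove the original,
# and append a matching restore line to the uncompress script.
cd /path/to/checkout    # working copy root
find . -depth -type d -name .svn | while IFS= read -r d; do
    tar czf "$d.tar.gz" -C "$(dirname "$d")" .svn && rm -rf "$d"
    echo "tar xzf '$d.tar.gz' -C '$(dirname "$d")' && rm '$d.tar.gz'" >> /tmp/svn_uncompress.sh
done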
Mounting ISO images with Windows
I have been trying to do a whole "mount -o loop" thing under Windows for years. I didn't know that Microsoft actually offers a Virtual CD driver for free! No need for Alcohol 120% or anything. Here's the link.
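For reference, the Linux incantation I kept wishing Windows had (run as root):

mkdir -p /mnt/iso
mount -o loop image.iso /mnt/iso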
I wanna rsync all night long...
I use rsync a lot on my PC boxes (Cygwin) and Linux servers. I keep forgetting what I do where, so usually I keep a file called "command" in the directory so I can just ". command" in bash.
Anyway, here are a few so I remember what I did. Maybe you will find them helpful too:
Backup music drive (M:, ext3) to external USB drive (X:\!Music, NTFS)
cd /cygdrive/x/\!Music
rsync -av --size-only --exclude command --exclude '!Books on MP3' --exclude '!FTP' /cygdrive/m/ ./
Backup external USB drive (X:, NTFS) to external FireWire drive (I:, FAT32)
(Yes, I backup my backup, long story…)
cd /cygdrive/i
rsync -av --size-only --exclude command --exclude '*iso' --exclude '/System Volume Information' /cygdrive/x/ ./
Keep my Cygwin mirror up to date on Mjolnir (Linux server)
cd /share/shared/Cygwin_mirror
rsync -va --stats --delete --exclude command --exclude /mail-archives --progress rsync://mirrors.xmission.com/cygwin/ ./ && touch date
wget http://www.cygwin.com/setup.exe
mv -f setup.exe ../
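One habit worth noting with any of these: add -n first for a dry run, so rsync reports what it would transfer or delete without touching anything. For example, previewing the mirror sync:

rsync -avn --stats --delete --exclude command --exclude /mail-archives rsync://mirrors.xmission.com/cygwin/ ./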
Xilinx BIT files and the Linux/Unix/BSD "file" command
The attached file will ID a Xilinx BIT file and tell you when it was compiled, the original NCD file name, and most importantly the chip it is for. It doesn't give a speed grade, but it gives all the other stuff.
All credit goes to the FPGA FAQ Question 26.
To install on a machine that already has file installed (yours probably does), you need to find your magic file. I will present what I did on a Cygwin box as an example; season to taste:
cd /usr/share/file/
rm -rf magic.mgc
cat /tmp/xilinx-magic >> magic
file -C
The last command "compiles" the magic
file into magic.mgc
. To make sure it all worked, you can grep -i xilinx magic*
and see a few spots.
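If you'd rather test the entry before touching the system magic file, file can be pointed at an alternate magic with -m (assuming the snippet is still sitting in /tmp):

file -m /tmp/xilinx-magic BenADDA/benadda.bit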
Example output:
admaras@brisingamen ~/projects/ss/trunk/vhdl
$ file */*bit
BenADDA/benadda.bit:                Xilinx BIT file - from BenADDA.ncd - for 2v6000ff1152 - built 2007/ 6/27(13:19:26) - data length 0x23d014ff
BenADDAV4/benadda.bit:              Xilinx BIT file - from BenADDA.ncd - for 4vsx55ff1148 - built 2008/01/07(15:37:49) - data length 0x1f3928ff
BenADDAV4_Clock/mybenaddaclock.bit: Xilinx BIT file - from MyBenADDAClock.ncd - for 2v80cs144 - built 2008/01/11(14:18:37) - data length 0x1652cff
BenDATAV4/bendatadd.bit:            Xilinx BIT file - from BenDATADD.ncd - for 4vlx160ff1148 - built 2008/01/11(17:53:27) - data length 0x4cf4b0ff
BenNUEY/bennuey.bit:                Xilinx BIT file - from BenNUEY.ncd - for 2vp50ff1152 - built 2008/01/10(17:14:41) - data length 0x2447c4ff
This file has been submitted to the maintainer of the file command, so some day it may come with a default build.
Using SVK for Roaming SVN Users
I have a Trac/SVN server on my work laptop (in a VMware box). Others need access to the files more and more, so I needed a way to do two-way merging. Of course, others have had this problem already, and svk was the result. However, there are certain aspects of svk that I'm not too fond of; mainly, I didn't want to lose my TortoiseSVN capabilities or all my subversion know-how. So I'm going to exploit the fact that an svk "depot" is, under the hood, a svn repository.
Here's what I did:
- I needed to get svk running under Cygwin. That was a real PITA, but luckily, somebody was nice enough to put all the instructions on this wiki page.
- Now I need to get a local copy of the entire svn repository under svk:

svk mkdir svnrepo
svk mirror http://svnserver/svn/path/to/repo //svnrepo/reponame
svk sync -a     (this took FOREVER)
svk mkdir local
svk cp //svnrepo/reponame //local/reponame
OK, so now we have a local svk "depot" which has /svnrepo/ and /local/ in it, but it is all kept in a single svn repository on your hard drive. Now, the magic: we check out from that file using TortoiseSVN to create a subversion working copy. Using TortoiseSVN, I checked out "from" file:///E:/cygwin/home/user/.svk/local/local/reponame - you'll note that the first part is my cygwin home directory (username of 'user'), and the double "local" is not a typo: the first is a "real" directory on my E: drive, the second is at the root level of the repository (that we made above).
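The same checkout works from the command line too, if you don't want TortoiseSVN in the loop (the target directory name here is arbitrary):

svn checkout file:///E:/cygwin/home/user/.svk/local/local/reponame reponame-wc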
Now, when I'm offline, I can just use my local working copy, and am able to check in as much as I want without any worries. Another use for this I read about was if you want to check in a lot more than your coworkers do and want to keep the "master" repository "clean."
To perform the actual sync with the master server:
svk pull //local/reponame     (this makes sure the local svk depot is in sync)
svk push --verbatim //local/reponame

The --verbatim flag prevents svk from inserting its own header, which was causing problems with trac by pointing to revision numbers in the future, which just made no sense.
Drawbacks
- One of the files I tried to push was locked on the master repository, but that information doesn't seem to be propagated properly, so the push failed until I unlocked the file manually on the master server.
- Need to do the push and pull manually.
- svn's keyword substitution now replaces info with local information, like the revision number of the file in the local svk depot, not the master repository (which means printouts aren't going to match).
- It seems that all svn properties may be iffy.
FAT32 perl utilities
As noted before, my work laptop dual-boots into WinXP and Fedora Core 7. They share a large FAT32 partition. Yesterday I finally got a 500GB external drive at work to back up my stuff. It's also FAT32. So I whipped up this quick script that splits a large data stream (using redirection, or cat to make files work) and dumps it in 1GB slices. The second script has some modifications to instead fill up the hard drive with zeroes, which is needed to make a backup of it more compressible. On a Linux box, I normally just do "dd if=/dev/zero of=delme bs=102400 || rm -rf delme", but that would exceed the file size limitations of FAT32. The first iteration of the filler was simply "cat /dev/zero | perl splitter.pl fill", but then I realized that there was a lot of actual reading going on, instead of just dumping zeros, so I changed some stuff.
In filler, I tried to pre-allocate the 2GB slice file and then fill it with zeroes, to try to avoid even more fragmentation and FAT table manipulation. However, when I re-opened the file and seeked back to zero, the size would change back down. I didn't have time to research it further; if anyone has a good solution please let me know.
I've also run filler under Cygwin to fill another partition.
splitter.pl:
#!/usr/bin/perl -w
# This program splits incoming data into ~1GB chunks (for dumping a file
# on the fly to FAT32 partitions for example).
# Data is STDIN, and first argument is prefix of output (optional).
#
# To recombine the output, simply:
# cat FILE_* > /path/to/better/fs/OriginalFile

BEGIN {
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/");
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/i386-linux-thread-multi/");
}

use strict;
use Fcntl; # import sysread flags

binmode(STDIN);

use constant FULL_SIZE => (2*1024*1024*1024); # 2 GB

my $chunk_byte_count = FULL_SIZE+1; # Force an open on first output byte
my $chunk_file_count = 0;           # Start at file 0
my ($read_count, $buffer);
my $blksize = 1024;                 # This might get overwritten later
my $prefix = $ARGV[0] || "FILE";

# The framework of this is from camel page 231
while ($read_count = sysread STDIN, $buffer, $blksize) {
    if (!defined $read_count) {
        next if $! =~ /^Interrupted/;
        die "System read error: $!\n";
    }
    # Decide if we need another file
    if ($chunk_byte_count >= FULL_SIZE) {
        # Need a new file
        close OUTFILE if $chunk_file_count;
        sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
            O_WRONLY | O_TRUNC | O_CREAT | O_BINARY
            or die "Could not open output file for write!\n";
        $blksize = (stat OUTFILE)[11] || 16384; # Get preferred block size
        # print STDERR "(New output file from $0 (blksize $blksize))\n";
        $chunk_byte_count = 0;
    } # New file
    my $wr_ptr = 0; # Pointer within buffer
    while ($read_count) { # This handles partial writes
        my $written = syswrite OUTFILE, $buffer, $read_count, $wr_ptr;
        die "System write error: $!\n" unless defined $written;
        $read_count -= $written;
        $wr_ptr += $written;
    } # Writing a chunk
    $chunk_byte_count += $wr_ptr;
    #print "(\$wr_ptr = $wr_ptr), (\$chunk_byte_count = $chunk_byte_count), (\$chunk_file_count = $chunk_file_count)\n";
} # Main read loop

# Report on it
print "Wrote out $chunk_file_count chunk files.\n";
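A typical run, assuming you are dumping a big tar stream straight onto the FAT32 drive (the BACKUP prefix and paths are arbitrary):

# Slice on the fly; writes BACKUP_00, BACKUP_01, ... in the current directory
tar cf - /path/to/data | perl splitter.pl BACKUP

# Later, on a real filesystem, reassemble per the header comment
cat BACKUP_* > /path/to/better/fs/data.tar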
filler.pl:
#!/usr/bin/perl -w
# This program fills a hard drive with 2GB files all NULL.
# (This makes compressed images of the hard drive smaller.)
# First argument is prefix of output (optional).
#

BEGIN {
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/");
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/i386-linux-thread-multi/");
}

use strict;
use Fcntl qw(:DEFAULT :seek); # import sysread flags

use constant FULL_SIZE => 2*(1024*1024*1024); # 2 GB

my $chunk_byte_count = FULL_SIZE+1; # Force an open on first output byte
my $chunk_file_count = 0;           # Start at file 0
my ($read_count, $buffer);
my $blksize = 16384;                # This might get overwritten later
my $prefix = $ARGV[0] || "FILL";
my $last_show = -1;
$| = 1; # always flush

# The framework of this is from camel page 231
$buffer = "\0" x $blksize;

# Without pre-alloc:
#   real 1m20.860s
#   user 0m10.155s
#   sys  0m32.531s
# With pre-alloc:
#   real 8m56.391s
#   user 0m16.359s
#   sys  1m11.921s
# Which makes NO sense, but hey, that's Cygwin... maybe because FAT32?
# Note: It was O_RDWR but switching to O_WRONLY didn't seem to help.
# However, maybe if Norton is disabled?

while (1) {
    # Decide if we need another file
    if ($chunk_byte_count >= FULL_SIZE) {
        # Need a new file
        close OUTFILE if $chunk_file_count;
        print STDERR "\rNew fill output file ($prefix)... \n";
        sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
            O_WRONLY | O_TRUNC | O_CREAT | O_BINARY | O_EXCL
            or die "Could not open output file for write!\n";
        # Pre-allocate the file
        # print STDERR "New fill output file ($prefix) pre-allocating, expect freeze... \n";
        # sysseek OUTFILE, FULL_SIZE-1, SEEK_SET;
        # syswrite OUTFILE, $buffer, 1, 0;
        # close OUTFILE;
        # print STDERR "\tdone, now blanking out the file.\n";
        # sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
        #     O_WRONLY | O_BINARY or die "Could not re-open output file for write!\n";
        # sysseek OUTFILE, 0, SEEK_SET; # This might just be ignored?
        # Done pre-allocating
        my $blk = $blksize;
        $blksize = (stat OUTFILE)[11] || 16384; # Get preferred block size
        if ($blksize != $blk) {
            # new block size, should only happen once
            $buffer = "\0" x $blksize;
        }
        $chunk_byte_count = 0;
        $last_show = -1;
    } # New file
    $read_count = $blksize;
    while ($read_count) { # This handles partial writes
        my $written = syswrite OUTFILE, $buffer, $read_count, 0;
        die "System write error: $!\n" unless defined $written;
        $read_count -= $written;
        $chunk_byte_count += $written;
    } # Writing a chunk
    # End of a chunk
    my $new_show = int ($chunk_byte_count/(1024*1024));
    if ($new_show > $last_show) {
        print STDERR "\r${new_show}MB";
        $last_show = $new_show;
    }
    # print "(\$chunk_byte_count = $chunk_byte_count), (\$chunk_file_count = $chunk_file_count)\n";
} # Main while loop

# Report on it [think it always crashes before this ;)]
print "\rWrote out $chunk_file_count chunk files.\n";
Download files on a very crappy connection (work PCs)
wget -Sc -T 10 (URL)

ex:

wget -Sc -T 10 ftp://ftp.symantec.com/public/english_us_canada/antivirus_definitions/symantec_antivirus_corp/20051107-019-x86.exe
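On a truly terrible connection I would wrap it in the retry loop from the "Do it until it works!" entry below (URL is a placeholder):

until wget -Sc -T 10 "$URL"; do sleep 1; done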
A Better "Bash Here"
There are a lot of ways to do a "bash window here" in WinXP, but this is my favorite by far. It gives you the proper unix-style mouse buttons, since it is a real terminal window and not just cmd. After installing it, right-click on any directory:
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\0cmd-rxvt]
@="Bash Window Here"

[HKEY_CLASSES_ROOT\Directory\shell\0cmd-rxvt\command]
@="C:\\cygwin\\bin\\rxvt.exe -bg black -fg white -sr -sl 1000 -fn \"Fixedsys\" -ls -e /usr/bin/bash --login -c \"cd \\\"`cygpath -u '%1'`\\\"; exec bash\""
You may need to change the beginning of the last line if you installed to a different path, e.g. e:\cygwin\.
Updated March 2009
My buddy Bill told me there is now a Cygwin command xhere and a setup command chere that will do all the Registry insertions for you. So launch Cygwin in the default crappy shell (with admin privs) and you can type:
chere -i -af -t rxvt -o "-bg black -fg white -sr -sl 1000 -fn \"FixedSys\" -ls" -s bash -e "Bash prompt here"
You can change -af to -cf for current-user-only if you don't have admin on the machine.
Because he was kind enough to give it to me, I will give you his command which seriously hurts my eyes. I also prefer the default size and expand it if needed.
chere -i -af -t rxvt -o "-ls -sr -sl 1000 -bg grey70 -fg black -geometry 120x65+300+15 -fn 10x16 -title Bash" -s bash -e "Bash prompt here"
Roll your own subversion
Under Cygwin…
Waiting for an official subversion 1.4; until then, I had lost my command-line svn because TortoiseSVN (awesome, BTW) updated my working copy to the new format.
Abbreviated setup follows (I'm not telling you how to use tar, etc.). It's also a good idea to disable your virus scanner for a few minutes; spawning processes under Cygwin is painful to start with…
- Download both subversion and its deps. Currently:

http://subversion.tigris.org/downloads/subversion-deps-1.4.0.tar.bz2
http://subversion.tigris.org/downloads/subversion-1.4.0.tar.bz2
These are from http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=260&expandFolder=74
- Extract them both
$ ./configure --prefix=/usr/local/svn14 --enable-all-static --disable-mod-activation
make all
make check
(You can usually just do "make check" but there seems to be a bug where it won't properly build the dependencies)
make install
- add to .profile:
alias svn14="/usr/local/svn14/bin/svn"
I have done this on two machines now. On both, some symlink stuff failed under one of the python checks. Oh well…
mydiff - INI style diff
Well, I needed to compare two 300MB directories at work yesterday. Unfortunately, 'regular' diff just wasn't cutting it: a file would be declared different even if the only change was an INI-style section that had moved. Example:
File 1:

[a]
Setting1=a
Setting2=b
[b]
Setting3=c
Setting4=d

File 2:

[b]
Setting3=c
Setting4=d
[a]
Setting1=a
Setting2=b
Obviously, these two files are EFFECTIVELY the same, but diff will show the first as having the entire [a] section only, then [b] common, then file 2 only having… the same exact [a] section. So I whipped up a perl script to tell me that those two files are the same. This script may have problems and might not do what you want (it was quick and dirty), but it may help others (and me later, which is what this blog is really for anyway)… Looking at it this morning, I can see a handful of places to easily condense it, but oh well… and if you care, these were Quartus project files and associated files (CSF, PSF, etc). Note: it fails when there is a < > or | in the text file, but it usually dumps so little you can eyeball the output and decide if it is OK.
#!/usr/bin/perl -w
use Data::Dumper;

my $textdump;
my %lhash;
my %rhash;
my $debug = 0;

my $file = $ARGV[0];
# Some filenames have () in them that we need to escape:
$file =~ s/\(/\\(/g;
$file =~ s/\)/\\)/g;

open (INPUT, "diff -iEbwBrsty --suppress-common-lines Projects/$file Folder\\ for\\ Experimenting/Projects/$file|");

while (<INPUT>) {
    if ($_ =~ /Files .*differ$/) {
        # Binary files
        print "Binary file comparison - they differ.\n";
        exit;
    }
    if ($_ =~ /Files .*identical$/) {
        print "No diff!\n";
        exit;
    }
    my $a = 0;
    # For some reason chomp was giving me problems (cygwin, win2k)
    s/\n//g;
    s/\r//g;
    $_ =~ /^(.*)([<>\|])(.*)$/;
    my $left = $1;
    my $dir = $2;
    my $right = $3;
    $left =~ /^\s*(.*?)\s*$/;
    $left = $1;
    $right =~ /^\s*(.*?)\s*$/;
    $right = $1;
    # print "1: '$left'\n2: '$dir'\n3: '$right'\n";
    # OK, now we have all we wanted...
    if ($dir eq '<') { $lhash{$left}++; $a++; }
    if ($dir eq '>') { $rhash{$right}++; $a++; }
    if ($dir eq '|') { $lhash{$left}++; $rhash{$right}++; $a++; }
    print "Missed this: $left $dir $right\n" unless $a;
} # while
close(INPUT);

foreach (sort keys %lhash) {
    if (not exists $rhash{$_}) {
        # No Match...
        print "Only in left: '$_'\n";
    } else {
        if ($lhash{$_} != $rhash{$_}) {
            print "Left count not equal to Right, $_\n";
        }
    }
}
foreach (sort keys %rhash) {
    if (not exists $lhash{$_}) {
        # No Match...
        print "Only in right: '$_'\n";
    } else {
        if ($lhash{$_} != $rhash{$_}) {
            print "Left count not equal to Right, $_\n";
        }
    }
}

print Dumper(\%rhash) if $debug;
print Dumper(\%lhash) if $debug;
Print everything after the last occurrence
This may be long and convoluted, but it is the first thing that came to mind, and it worked.
I had a log file that would delimit sections with "As Of nn/nn/nnnn" and could be multiple megabytes. I didn't feel like doing a perl solution that day, so:
grep -n 'As Of' sourcefile | tail -1 | awk -F":" '{print $1}' | xargs -r -iX awk 'FNR>=X' sourcefile > outfile
Again, likely an easier solution, but this was Q&D.
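For the record, here is a shorter pipeline that I believe does the same thing (everything from the last "As Of" line, inclusive, through the end of the file):

# Reverse the file, keep lines up to the first match, reverse back
tac sourcefile | sed '/As Of/q' | tac > outfile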
More cpio tricks
Cleaning out my desk and came across these notes…
find /mnt/old_root -depth -print | cpio -odv | gzip -c -v -1 > /opt/bad_disk/old_root.cpio.gz

find -depth -print | cpio -odv > tempo.cpio
cpio -idvm < tempo.cpio
Neat trick:
tar cf - . | (cd /usr/local ; tar xvf - )
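cpio can do the same tree copy in its pass-through mode, which fits this post's theme better:

# -p = pass-through; -d make dirs, -u overwrite, -m preserve mtimes, -v verbose
find . -depth -print | cpio -pdumv /usr/local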
Do it until it works!
#!/bin/bash
# From LinuxGazette.com (Ben Okopnik)
# Rerun command line until successful
until $*; do sleep 1; done
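Saved as, say, tryhard (name is mine) and made executable, it turns any flaky command into a stubborn one. Note that $* loses quoting, so arguments containing spaces won't survive; "$@" would be the safer choice:

chmod +x tryhard
./tryhard wget -Sc -T 10 ftp://ftp.example.com/big.iso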
cpio Cheat Sheet
http://www.intencorp.com/karmilow/share/howto-cpio.html
Bernie's abbreviated Solaris/Linux cpio How-To
1. Backing up files to a cpio file
cd to the directory you want to archive, and issue the command
solaris-$ find . -depth -print | cpio -ocBdum > filename.cpio
-or-
linux-$ find . -depth -print | cpio -o -H newc > filename.cpio
2. Restoring files from a cpio file
cd to the directory you want the archived files written to, and issue the command
solaris-$ cpio -icBdum < filename.cpio
-or-
linux-$ cpio -idum -H newc < filename.cpio
3. Backing up files to a cpio tape
cd to the directory you want to archive, and issue the command
solaris-$ find . -depth -print | cpio -ocBdum > /dev/rmt/0
-or-
linux-$ find . -depth -print | cpio -o -H newc > /dev/rmt0
4. Restoring files from a cpio tape
cd to the directory you want the archived files written to, and issue the command
solaris-$ cpio -icBdum < /dev/rmt/0
-or-
linux-$ cpio -idum -H newc < /dev/rmt0
5. Restoring a particular file from a cpio tape
cd to the directory you want the archived file (/etc/hosts in this example) written to, and issue the command
solaris-$ cpio -icBdum < /dev/rmt/0 "/etc/hosts"
-or-
linux-$ cpio -idum -H newc < /dev/rmt0 "/etc/hosts"
6. Some other local (Linux) examples
local out:
find etc -depth -print | cpio -o -H newc > cpios/etc.cpio
find include -depth -print | cpio -o -H newc > cpios/include.cpio
local in:
cpio -idum -H newc < /mnt/home/cpios/etc.cpio
cpio -idum -H newc < /mnt/home/cpios/include.cpio
7. Some network (Linux) examples
net out:
pull: remote cpio -> local archive
rsh -n remote_host "cd /remote_dir ; find remote_file -depth -print | cpio -o -H newc" > local_archive
push: local cpio -> remote archive
find local_file -depth -print | cpio -o -H newc -F remote_host:/remote_dir/remote_archive
net in:
pull: remote archive -> local cpio
cpio -idum -H newc -F remote_host:/remote_dir/remote_archive
rsh -n remote_host dd if=/remote_dir/remote_archive | cpio -idum -H newc
push: local archive -> remote cpio
dd if=/local_dir/local_archive | rsh -n remote_host "cd /remote_dir ; cpio -idum -H newc"
Makefile notes
Checking tabs:

cat -v -t -e makefile

Macro substitution:

SRCS = defs.c redraw.c calc.c ...
ls ${SRCS:.c=.o}
result: calc.o defs.o redraw.o

The second string can also be nothing, to truncate.

Suffix rules (default behavior for a suffix):

.SUFFIXES : .o .c .s
.c.o :
        $(CC) $(CFLAGS) -c $<
.s.o :
        $(AS) $(ASFLAGS) -o $@ $<

$< is what triggered the rule (only valid in suffix rules).

Forcing rebuilds:

all :
        make enter testex "CFLAGS=${CFLAGS}" "FRC=${FRC}"
enter : ${FRC}
        make ${ENTER_OBJS} "CFLAGS=${CFLAGS}" "FRC=${FRC}"
        ${CC} -o $@ ${ENTER_OBJS} ${LIBRARIES}
testex : ${FRC}
        make ${TESTEX_OBJS} "CFLAGS=${CFLAGS}" "FRC=${FRC}"
        ${CC} -o $@ ${TESTEX_OBJS} ${LIBRARIES}
force_rebuild:

(force_rebuild has nothing under it.) Then a normal "make all" behaves normally, while "make all FRC=force_rebuild" rebuilds everything.

Debugging makefiles: try "make -d".

Misc notes: a line starting with a hyphen ignores errors resulting from execution of that command.

Macros:

$? = List of prereqs that have changed
$@ = Name of current target, except for libraries, where it is the lib name
$$@ = Name of current target if used AFTER the colon in dependency lines
$< = Name of current prereq; only in suffix rules
$* = The name (no suffix) of the current prereq that is newer; only for suffix rules
$% = The name of the corresponding .o file when the current target is a library

Macro modifiers (not all makes support these):

D = directory of any internal macro, ex: ${@D}
F = file portion of any internal macro except $?

Special targets:

.DEFAULT : Executed if make cannot find any descriptions or suffix rules to build.
.IGNORE : Ignore error codes, same as -i option.
.PRECIOUS : Files for this target are NOT removed if make is aborted.
.SILENT : Execute commands but do not echo, same as -s option.
.SUFFIXES : See above.