Problems with Perl 5's switch
I was trying to use perl's __DATA__ section and was receiving the following warnings/errors:
readline() on unopened filehandle DATA
main::DATA used only once
Well, after fighting with it for like 20 minutes, I did a Google search. After wandering around different mailing list archives, etc, I finally found a hint of a solution.
Apparently, using "Switch" (via "use Switch;
") does some kind of behind-the-scenes magic that breaks __DATA__
section stuff.
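A stripped-down version of what bit me - reconstructed from memory as a sketch, not the original script:

#!/usr/bin/perl -w
use strict;
use Switch;   # comment this line out and the DATA loop below works fine

# With Switch loaded, this is where I got:
#   readline() on unopened filehandle DATA
while (my $line = <DATA>) {
    print $line;
}

__DATA__
stuff I actually wanted to read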
I didn't find anything in perldoc Switch that indicated this was a known bug. However, this is perl v5.8.5, so fairly old (2004, from RHEL 4.5), and the Switch documentation says it is from 2003.
Annoying, though.
Subversion from Perl
So I spent about three hours trying to get my perl svn_compress script running, and I kept running into the damned thing segfaulting!
svn_path_basename: Assertion `is_canonical (path, len)' failed.
I finally found the answer. Apparently, libsvn
wants '' as the current directory, not '.' like every other program since the beginning of time (Jan 1, 1970, right?).
Three hours.
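For the record, a minimal sketch of the working call, reusing the print_names callback from the thread quoted below:

#!/usr/bin/perl -w
use strict;
use SVN::Client;

sub print_names { print "<$_[0]>\n"; }

my $ctx = new SVN::Client;
# Pass '' for the current directory; '.' trips the is_canonical()
# assertion in libsvn_subr/path.c and takes the interpreter down with it.
$ctx->status('', 'BASE', \&print_names, 1, 1, 0, 1);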
Where I read the answer. Pasted here:
Peter Samuelson wrote:
> [Julian Gilbey]
>> burnside:~/debian/tex/tetex-bin $ perl -MSVN::Client -e \
>> 'sub print_names { print "<$_[0]>\n"; } $ctx=new SVN::Client;
>> $ctx->status("", "BASE", \&print_names, 1, 1, 0, 1);' | head -5
>> <>
>> <.pc>
>> <.pc/.version>
>> <configure>
>> <INSTALL.generic>
>
> I reproduced your bugs with subversion 1.3.0-5, so I can properly
> discuss it with upstream. Unfortunately, I don't know much about the
> Perl bindings. But comparing "svn status" with your command, it does
> seem to correspond to the "." entry. I wonder if that is even
> considered a bug. I mean, if you prefix each entry with $(pwd)/, it is
> fine.
>
>> perl: /tmp/buildd/subversion-1.2.3dfsg1/subversion/libsvn_subr/path.c:377:
>> svn_path_basename: Assertion `is_canonical (path, len)' failed.
>
> Right, that's definitely a bug. Even if this isn't something the perl
> bindings can fix on their own, they should carp or something.

Hi. This is an issue that gets kicked around on dev at svn from time to time, and usually ends up with the thread fizzling out sooner or later, with no concrete action being taken, due to a lack of an obvious right way to proceed. I'll sum up the situation...

There exist a number of path manipulation functions (svn_path_*) which do not conform to the usual style returning an error status as the return value, and output data via output pointer parameters. Instead they return data directly as the return value, and are supposed to only ever be able to experience errors that merit assert() or abort(). Subversion defines a 'canonical path' format, and most of the functions (apart from the canonicalizer itself, obviously!) assert that the input path looks like a canonical path.

Various groups of C programmers will conduct heated debates on whether this is good programming practice, or an annoyance, but that is irrelevant where the bindings are concerned, since assert()-ing out of a scripting language interpreter is clearly bad.

There is a fairly obvious, though non-trivial, solution: Make the bindings test all input paths (presumably using a path-specific SWIG "in" typemap) using the same logic as is_canonical, and canonicalize if necessary. The problem, though, is that discussions of this nature tend to get intertwined with the parallel issue of whether the C code is being generally unhelpful in this situation, and should be changed too.

OK, now you know the background. Feel free to prod dev at svn to raise awareness of this problem which has sadly lain dormant for far too long.

Max.

Aside: The canonical form (IIRC) is mostly common sense stuff:
* no repeated slashes
* no internal /../ sequences
* no trailing slash
BUT it has one weird rule:
* the canonical form of "." is ""
Reminder email and wiki page for trac
I wrote this a long time ago, never had a chance to update it, and somebody asked on the trac
mailing list today…
You can get ADMMailHandler
in the ADMLogger package (sorry, too lazy today).
You need to configure a few places too. Like I implied, not quite ready for public consumption, but at my rate never will be. Please feel free to "upstream" me any changes.
Also, if there is a trac project that matches the user name (that's how my company does trac sandboxes) then it will also create a page MyTickets with a dump of links.
#!/usr/bin/perl -w
use strict;
# Script to send email reminders in a cron job.
# Email trac at revragnarok <.> com
# I should've made it DBI compliant - but I'm not getting paid for this ;)

BEGIN {
    push @INC, "/usr/src/my_scripts/mail_reminder"; # Where ADMMailHandler is hiding
}

use Data::Dumper;
use ADMMailHandler;
use DirHandle;

use constant URLbeforeproj => 'http://webmail.revragnarok.com/';
use constant URLafterproj  => '/ticket/';
use constant Nodomain      => '@yourdomain.com';
use constant Subject       => 'Outstanding tickets';

chdir("/var/www/trac");

my (%proj, %user);
my @projs;

# Look in directory for project names
my $dh = DirHandle->new('.') or die "Can't open trac directory! : $!\n";
while (defined ($_ = $dh->read)) {
    # Does it have a trac DB?
    if (-e "$_/db/trac.db") {
        push @projs, $_;
    } # -e
}

foreach my $projname (@projs) {
    # Open each one
    open(SQL, "/usr/bin/sqlite3 $projname/db/trac.db \"select id,status,owner,summary from ticket where status != 'closed'\"|");
    while (<SQL>) {
        chomp;
        my ($id, $status, $owner, $summary) = split /\|/;
        #print "Proj: $projname, ID: '$id', status: '$status', owner: '$owner', summary: '$summary'\n";
        next if ($owner eq '');
        $user{$owner}{"$projname-$id"} = $summary;
    }
    close(SQL);
}

foreach my $username (sort keys %user) {
    # Verify it is valid
    my $fullname = $username;
    $fullname .= Nodomain if ($fullname !~ /@/);
    # Put together the email
    my $mh = ADMMailHandler::new($fullname, 1, Subject);
    $mh->append(qq{<TABLE WIDTH=95% BORDER=1><TR>
        <TD><B><U>Project</U></B></TD>
        <TD><B><U>Ticket Summary</U></B></TD></TR>});
    foreach my $ticket (sort keys %{$user{$username}}) {
        my ($proj, $num) = split(/-/, $ticket);
        # print "Processing (proj = $proj) (num = $num) (summary = $user{$username}{$ticket})\n";
        $mh->append(qq{<TR><TD>$proj #$num</TD><TD><A HREF="}.URLbeforeproj.$proj.URLafterproj.$num.qq{">}.$user{$username}{$ticket}."</A></TD></TR>\n");
    }
    $mh->append("</TABLE>");
    $mh->sendit();
    # Now we'll make a special 'MyTickets' wiki page if the person has their own sandbox
    my ($tryname, undef) = split(/@/, $fullname);
    if (scalar grep(/^${tryname}$/, @projs)) {
        # There is a match
        open(WIKI, ">/tmp/$$");
        print WIKI "== My Tickets ==\n||'''__Project__'''||'''__Ticket Summary__'''||\n";
        foreach my $ticket (sort keys %{$user{$username}}) {
            my ($proj, $num) = split(/-/, $ticket);
            print WIKI "||$proj !#$num||[$proj:ticket:$num $user{$username}{$ticket}]||\n";
        }
        close WIKI;
        `/usr/bin/trac-admin $tryname wiki remove MyTickets > /dev/null 2>/dev/null`;
        `/usr/bin/trac-admin $tryname wiki import MyTickets /tmp/$$`;
        unlink "/tmp/$$";
    } # Sandbox or not
} # Each username
Using SVK for Roaming SVN Users
I have a Trac/SVN server on my work laptop (in a VMWare box). Others are needing access to the files more, so I needed a way to have two-way merging. Of course, others have had this problem already, and svk was the result. There are certain aspects of svk that I'm not too fond of, though; mainly, I didn't want to lose my TortoiseSVN capabilities or all my subversion know-how. So I'm going to exploit the fact that an svk "depot" is, under the hood, an svn repository.
Here's what I did:
- I needed to get svk running under Cygwin. That was a real PITA, but luckily, somebody was nice enough to put all the instructions on this wiki page.
- Now I need to get a local copy of the entire svn repository under svk:
svk mkdir svnrepo
svk mirror http://svnserver/svn/path/to/repo //svnrepo/reponame
svk sync -a
(This took FOREVER)
svk mkdir local
svk cp //svnrepo/reponame //local/reponame
OK, so now we have a local svk "depot" which has /svnrepo/ and /local/ in it, but it is all kept in a single svn repository on your hard drive. Now, the magic: we check out from that repository using TortoiseSVN to create a subversion working copy. Using TortoiseSVN, I checked out "from" file:///E:/cygwin/home/user/.svk/local/local/reponame - you'll note that the first part is my cygwin home directory (username of 'user'), and the double "local" is not a typo: the first is a "real" directory on my E: drive, the second is at the root level of the repository (that we made above).
Now, when I'm offline, I can just use my local working copy, and am able to check in as much as I want without any worries. Another use for this I read about was if you want to check in a lot more than your coworkers do and want to keep the "master" repository "clean."
To perform the actual sync with the master server:
svk pull //local/reponame
(This makes sure the local svk depot is in sync)
svk push --verbatim //local/reponame
- The --verbatim flag prevents svk from inserting its own header, which was causing problems with trac by pointing to revision numbers in the future, which just made no sense.
Drawbacks
- One of the files I tried to push was locked on the master repository, but that information doesn't seem to be propagated properly, so the push failed until I unlocked the file manually on the master server.
- Need to do the push and pull manually.
- svn's keyword substitution now replaces info with local information, like the revision number of the file in the local svk depot, not the master repository (which means printouts aren't going to match).
- It seems that all svn properties may be iffy.
FAT32 perl utilities
As noted before, my work laptop dual boots into WinXP and Fedora Core 7. They share a large FAT32 partition. Yesterday I finally got a 500GB external drive at work to back up my stuff. It's also FAT32. So I whipped up this quick script that splits a large data stream (redirection or cat makes it work on regular files too) and dumps it in 1GB slices. The second has some modifications to instead fill up the hard drive with zeroes, which is needed to make a backup of it more compressible. On a Linux box, I normally just do dd if=/dev/zero of=delme bs=102400 || rm -rf delme but that would exceed the file size limitations of FAT32. The first iteration of the filler was simply cat /dev/zero | perl splitter.pl fill but then I realized that there was a lot of actual reading going on, instead of just dumping zeros, so I changed some stuff.
In filler
, I tried to pre-allocate the 2GB slice file and then fill it with zero to try to avoid even more fragmentation and FAT table manipulations. However, when I re-opened the file and then seeked to zero it would change the size back down - I didn't have time to research it further; if anyone has a good solution please let me know.
I've also run filler
under Cygwin to fill another partition.
splitter.pl
:
#!/usr/bin/perl -w
# This program splits incoming data into ~1GB chunks (for dumping a file
# on the fly to FAT32 partitions for example).
# Data is STDIN, and first argument is prefix of output (optional).
#
# To recombine the output, simply:
# cat FILE_* > /path/to/better/fs/OriginalFile

BEGIN {
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/");
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/i386-linux-thread-multi/");
}

use strict;
use Fcntl; # import sysread flags

binmode(STDIN);

use constant FULL_SIZE => (2*1024*1024*1024); # 2 GB

my $chunk_byte_count = FULL_SIZE+1; # Force an open on first output byte
my $chunk_file_count = 0;           # Start at file 0
my ($read_count, $buffer);
my $blksize = 1024;                 # This might get overwritten later
my $prefix = $ARGV[0] || "FILE";

# The framework of this is from camel page 231
while ($read_count = sysread STDIN, $buffer, $blksize) {
    if (!defined $read_count) {
        next if $! =~ /^Interrupted/;
        die "System read error: $!\n";
    }
    # Decide if we need another file
    if ($chunk_byte_count >= FULL_SIZE) {
        # Need a new file
        close OUTFILE if $chunk_file_count;
        sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
            O_WRONLY | O_TRUNC | O_CREAT | O_BINARY
            or die "Could not open output file for write!\n";
        $blksize = (stat OUTFILE)[11] || 16384; # Get preferred block size
        # print STDERR "(New output file from $0 (blksize $blksize))\n";
        $chunk_byte_count = 0;
    } # New file
    my $wr_ptr = 0; # Pointer within buffer
    while ($read_count) { # This handles partial writes
        my $written = syswrite OUTFILE, $buffer, $read_count, $wr_ptr;
        die "System write error: $!\n" unless defined $written;
        $read_count -= $written;
        $wr_ptr += $written;
    } # Writing a chunk
    $chunk_byte_count += $wr_ptr;
    #print "(\$wr_ptr = $wr_ptr), (\$chunk_byte_count = $chunk_byte_count), (\$chunk_file_count = $chunk_file_count)\n";
} # Main read loop

# Report on it
print "Wrote out $chunk_file_count chunk files.\n";
filler.pl
:
#!/usr/bin/perl -w
# This program fills a hard drive with 2GB files all NULL.
# (This makes compressed images of the hard drive smaller.)
# First argument is prefix of output (optional).
#

BEGIN {
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/");
    push(@INC, "/mnt/hd/usr/lib/perl5/5.8.8/i386-linux-thread-multi/");
}

use strict;
use Fcntl qw(:DEFAULT :seek); # import sysread flags

use constant FULL_SIZE => 2*(1024*1024*1024); # 2 GB

my $chunk_byte_count = FULL_SIZE+1; # Force an open on first output byte
my $chunk_file_count = 0;           # Start at file 0
my ($read_count, $buffer);
my $blksize = 16384;                # This might get overwritten later
my $prefix = $ARGV[0] || "FILL";
my $last_show = -1;
$| = 1; # always flush

# The framework of this is from camel page 231
$buffer = "\0" x $blksize;

# Without pre-alloc:
#real 1m20.860s
#user 0m10.155s
#sys  0m32.531s
# With pre-alloc:
#real 8m56.391s
#user 0m16.359s
#sys  1m11.921s
# Which makes NO sense, but hey, that's Cygwin... maybe because FAT32?
# Note: It was O_RDWR but switching to O_WRONLY didn't seem to help.
# However, maybe if Norton is disabled?

while (1) {
    # Decide if we need another file
    if ($chunk_byte_count >= FULL_SIZE) {
        # Need a new file
        close OUTFILE if $chunk_file_count;
        print STDERR "\rNew fill output file ($prefix)... \n";
        sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
            O_WRONLY | O_TRUNC | O_CREAT | O_BINARY | O_EXCL
            or die "Could not open output file for write!\n";
        # Pre-allocate the file
        # print STDERR "New fill output file ($prefix) pre-allocating, expect freeze... \n";
        # sysseek OUTFILE, FULL_SIZE-1, SEEK_SET;
        # syswrite OUTFILE, $buffer, 1, 0;
        # close OUTFILE;
        # print STDERR "\tdone, now blanking out the file.\n";
        # sysopen OUTFILE, (sprintf "${prefix}_%02d", $chunk_file_count++),
        #     O_WRONLY | O_BINARY or die "Could not re-open output file for write!\n";
        # sysseek OUTFILE, 0, SEEK_SET; # This might just be ignored?
        # Done pre-allocating
        my $blk = $blksize;
        $blksize = (stat OUTFILE)[11] || 16384; # Get preferred block size
        if ($blksize != $blk) {
            # new block size, should only happen once
            $buffer = "\0" x $blksize;
        }
        $chunk_byte_count = 0;
        $last_show = -1;
    } # New file
    $read_count = $blksize;
    while ($read_count) { # This handles partial writes
        my $written = syswrite OUTFILE, $buffer, $read_count, 0;
        die "System write error: $!\n" unless defined $written;
        $read_count -= $written;
        $chunk_byte_count += $written;
    } # Writing a chunk
    # End of a chunk
    my $new_show = int ($chunk_byte_count/(1024*1024));
    if ($new_show > $last_show) {
        print STDERR "\r${new_show}MB";
        $last_show = $new_show;
    }
    # print "(\$chunk_byte_count = $chunk_byte_count), (\$chunk_file_count = $chunk_file_count)\n";
} # Main while loop

# Report on it [think it always crashes before this ;)]
print "\rWrote out $chunk_file_count chunk files.\n";
Offline Wikipedia
As seen on a million and one websites (/. et al.), a smart geek put together some offline Wikipedia stuff. I had some problems on my work laptop (Fedora Core 7) because of the OS (PHP executable wrongly named) and because of the way I have it partitioned (on a FAT32 partition). Anyway, here's my email to the original poster (wiki-fied):
- Thanks, you're a geek hero.
- I had a few problems with relative paths; I had to edit the places that pointed at the quickstart* executables.
- Fedora Core 7's "
php5
" executable is actually only named "php
" - no big deal with a "ln -s /usr/bin/php /usr/bin/php5
" - My machine is dual-boot, and the only partition big enough was FAT32. Had some problems with the too many split files. I threw together a perl script (I had done the split by hand before downloading the Makefile ). It's pasted below.
Anyway, thanks again. Feel free to add any of this stuff to your page (like the FC7 notes). If you do, please don't include my email, just credit to RevRagnarok is fine.
- RevRagnarok
#!/usr/bin/perl -w
# This was written by RevRagnarok (I'm on Wikipedia)
# I was having problems with all the split files on a FAT32 partition. I assume
# it is because there were so many plus two entries for each (LFNs).
# This simply combines all the rec* files again into large chunks of N where
# I used 5, but you can set below with $combine.
# Verification info below.
# Lastly, I needed to modify the Makefile and remove the "split" from the
# "wikipedia" target.

use strict;

# Using: rec13778enwiki-20070802-pages-articles.xml.bz2
my $last = 13778;
my $lastd = 5;        # How many digits in above (yes, I can compute this, but why?)
my $date = 20070802;
my $suffix = "enwiki-${date}-pages-articles.xml.bz2";
my $combine = 5;      # This will combine every 5 into a group
# (If this number makes > 4 digit results, it will not sort nicely)
my $outputdir = '/data/wikipedia/'; # Don't make it the same place...

my $joinstr = '';
my $fcount = 0;

for (1 .. $last) {
    my $num = sprintf "%0${lastd}d", $_;
    $joinstr .= "rec${num}${suffix} ";
    if (($_ % $combine) == 0) {
        &catthem($joinstr, $fcount++);
        $joinstr = '';
    }
}
&catthem($joinstr, $fcount++) if ($joinstr ne '');
print "All done!\n";

sub catthem ($$) {
    my $ofile = sprintf "rec%04d.bz2", $_[1];
    `/bin/cat $_[0] >${outputdir}${ofile}`; # Lazy again, there are more Perl-ish ways.
    print ".";
}

__DATA__
To make sure they were all taken in, you can do this:
bash$ bzip2 -tvv *bz2 2>&1 | grep -v ok | grep -v bz2 | wc -l
13778
...which is equal to the number of start blocks, so I know nothing is missing now.
"Monte Carlo" Perl
This is pretty cool - had not seen it before.
http://math.fullerton.edu/mathews/n2003/MonteCarloPiMod.html
http://en.wikipedia.org/wiki/Monte_Carlo_method
The idea: the fraction of random points in the unit square that land inside the quarter circle approaches pi/4, so 4*HIT/i converges on pi. Those of you with cygwin or unix and perl can try it:
#!/usr/bin/perl
my $COUNT = 8; # Orders of magnitude
my $HIT = 0;
for (my $i = 1; $i < 10**$COUNT; ++$i) {
    $HIT++ if rand()**2 + rand()**2 <= 1;
    printf "PI ~ %1.8f after %0${COUNT}i points\n", 4*$HIT/$i, $i if ($i % 1_000) == 0;
}
I got PI ~ 3.14155046 after 99999000 points, and a second run gave PI ~ 3.14160204 after 999990000 points.
The new blog is ready!
So I have now completed moving all my stuff from the bblog engine to trac.
To inaugurate the new blog, my notes on how I did it:
- By hand, slowly copied and tagged every entry. Made sure the page name was the proper date (ignored timestamp) from the old system.
- Wrote a script to fool trac into thinking the pages were actually written on that date (see below).
- In the directory with the trac.db database:
apachectl stop
sqlite3 trac.db '.dump wiki' > wiki_in.txt
./trac_blog_datefix.pl < wiki_in.txt > wiki_out.txt
sqlite3 trac.db 'drop table wiki'
sqlite3 trac.db < wiki_out.txt
apachectl start
The script:
#!/usr/bin/perl -w
use Date::Parse;

while (<>) {
    if ($_ !~ /insert into/i) {
        print;
        next;
    }
    if (m|(\d{4}/\d{1,2}/\d{1,2})/|ios) {
        print STDERR "Found date of $1: ";
        my $time1 = str2time($1);
        print STDERR $time1, "\n";
        # Now replace the current timestamp with a new one.
        my $str = $_;
        $str =~ s/\d{10}\.?\d{0,2}/$time1/;
        print $str;
    } else {
        # Standard wiki page (non-blog)
        print;
    }
}
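As a quick sanity check of what the regex/str2time pair is doing (the page name here is hypothetical):

#!/usr/bin/perl -w
use strict;
use Date::Parse;

# A blog page named like 2007/08/15/Some-Post matches (\d{4}/\d{1,2}/\d{1,2})/
my $time1 = str2time("2007/08/15");
print "$time1\n"; # epoch seconds that replace the page's original timestamp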
Some interesting httpd rewriting with perl
This is a mod_perl <Perl> section in the Apache config: it scans /var/svn when the config is parsed and builds a <Location> entry (DAV svn plus the auth settings) for every repository directory it finds, so new repositories get picked up without editing the config by hand:
<VirtualHost *>
ServerName svn.whatever
ServerAlias svn
<Perl>
#!/usr/bin/perl
my $svn_path = "/var/svn";
my $svn_location = "";
my $trac_path = "/var/trac";
opendir(SVN_ROOT, $svn_path) or die "Cannot open $svn_path";
while (my $name = readdir(SVN_ROOT)) {
    if ($name =~ /^[[:alnum:]]+$/) {
        $Location{"$svn_location/$name"} = {
            DAV           => "svn",
            SVNPath       => "$svn_path/$name",
            AuthType      => "Basic",
            AuthName      => "\"Subversion login\"",
            AuthUserFile  => "$trac_path/access.user",
            AuthGroupFile => "$trac_path/access.group",
            Require       => "group $name",
        };
    }
}
closedir(SVN_ROOT);
__END__
</Perl>
</VirtualHost>
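For each repository directory it finds (say one named reponame - a placeholder), the %Location entry above should come out roughly equivalent to hand-writing:

<Location /reponame>
    DAV svn
    SVNPath /var/svn/reponame
    AuthType Basic
    AuthName "Subversion login"
    AuthUserFile /var/trac/access.user
    AuthGroupFile /var/trac/access.group
    Require group reponame
</Location>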
mydiff - INI style diff
Well, needed to compare two 300MB directories at work yesterday. Unfortunately, 'regular' diff just wasn't cutting it. A file would be declared different even if the only change was a moved INI-style section… Example:
File 1:
[a]
Setting1=a
Setting2=b
[b]
Setting3=c
Setting4=d

File 2:
[b]
Setting3=c
Setting4=d
[a]
Setting1=a
Setting2=b
Obviously, these two files are EFFECTIVELY the same, but diff will show the first as having the entire [a] section only, then [b] common, then file 2 only having… the same exact [a] section. So I whipped up a perl script to tell me that those two files are the same. This script may have problems and might not do what you want (it was quick and dirty) but it may help others (and me later, which is what this blog is more for anyway)… Looking at it this morning I can see a handful of places to easily condense it, but oh well… and if you care, these were Quartus project files and associated files (CSF, PSF, etc).
Note: It fails when there is a < > or | in the text file. But it usually dumps so little you can eyeball it and decide if it is OK.
#!/usr/bin/perl -w
use Data::Dumper;

my $textdump;
my %lhash;
my %rhash;
my $debug = 0;
my $file = $ARGV[0];

# Some filenames have () in them that we need to escape:
$file =~ s/\(/\\(/g;
$file =~ s/\)/\\)/g;

open (INPUT, "diff -iEbwBrsty --suppress-common-lines Projects/$file Folder\\ for\\ Experimenting/Projects/$file|");

while (<INPUT>) {
    if ($_ =~ /Files .*differ$/) {
        # Binary files
        print "Binary file comparison - they differ.\n";
        exit;
    }
    if ($_ =~ /Files .*identical$/) {
        print "No diff!\n";
        exit;
    }
    my $a = 0;
    # For some reason chomp was giving me problems (cygwin, win2k)
    s/\n//g;
    s/\r//g;
    $_ =~ /^(.*)([<>\|])(.*)$/;
    my $left = $1;
    my $dir = $2;
    my $right = $3;
    $left =~ /^\s*(.*?)\s*$/;
    $left = $1;
    $right =~ /^\s*(.*?)\s*$/;
    $right = $1;
    # print "1: '$left'\n2: '$dir'\n3: '$right'\n";
    # OK, now we have all we wanted...
    if ($dir eq '<') { $lhash{$left}++; $a++; };
    if ($dir eq '>') { $rhash{$right}++; $a++; }
    if ($dir eq '|') { $lhash{$left}++; $rhash{$right}++; $a++; }
    print "Missed this: $left $dir $right\n" unless $a;
} # while
close(INPUT);

foreach (sort keys %lhash) {
    if (not exists $rhash{$_}) {
        # No Match...
        print "Only in left: '$_'\n";
    } else {
        if ($lhash{$_} != $rhash{$_}) {
            print "Left count not equal to Right, $_\n";
        }
    }
}
foreach (sort keys %rhash) {
    if (not exists $lhash{$_}) {
        # No Match...
        print "Only in right: '$_'\n";
    } else {
        if ($lhash{$_} != $rhash{$_}) {
            print "Left count not equal to Right, $_\n";
        }
    }
}
print Dumper(\%rhash) if $debug;
print Dumper(\%lhash) if $debug;
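If I remember right, for a pair like the example above the script prints nothing at all, since every line shows up the same number of times on both sides. When a setting really does differ, you get a short summary instead, along the lines of (made-up values):

Only in left: 'Setting2=b'
Only in right: 'Setting2=q'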
mp3Tag RULES!
Let me just say this thing rocks… I need to clean up my MP3 tags because of my new iPod… I was doing a lot of manipulating with Perl, but this is so much easier. It runs under windows, but does handle most Perl REs…
http://www.mp3tag.de/en/index.html
Anyway, I highly recommend adding these two new "Actions" I wrote:
- Remove "/N" from "n/N" track numbers
Replace with regular expression
Field: TRACK
Regular expression: (\d+)/\d+
Replace matches with: $1
- Force track numbers to be 2 digits. (You can do this in file renaming with '$num("%track%",2)' but I cannot find another quick way to fix the TAG)
Replace with regular expression
Field: TRACK
Regular expression: ^(\d)$
Replace matches with: 0$1
These two actions can be applied together in the order listed.
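For reference, since I had been doing this kind of thing in Perl anyway, the same two fixes as plain Perl substitutions on a track-number string ($track is just an illustrative variable, not anything mp3Tag uses):

my $track = "3/12";
$track =~ s{(\d+)/\d+}{$1};   # drop the "/N" from "n/N" -> "3"
$track =~ s/^(\d)$/0$1/;      # pad single digits to two -> "03"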
smtproutes.pl
Many (stupid) mail servers are now assuming all cable modem users are SPAMmers, so more and more are refusing to accept my mail. Here's a script that I run to regenerate QMail's 'smtproutes' whenever I need to add new ISPs…
#!/usr/bin/perl
open OUTFILE, ">smtproutes";
$s = ":smtp.comcast.net\n"; # Replace with your ISP's outbound server
foreach (<DATA>) {
    chomp;
    next if /^\w*$/;
    next if /#/;
    print OUTFILE "$_$s.$_$s";
}
__DATA__
aol.com
pipeline.com
earthlink.net
comcast.net
ix.netcom.com
netcom.com
hut.fi
t-3.cc
earthengineering.com
usa.com
#CS is old compuserv, now AOL
cs.com
stanfordalumni.org
erasableinc.org
sbcglobal.net
hp.com
abs.net
juno.com
sourcenw.com
yahoogroups.com
msn.com
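For each domain in the list, the generated smtproutes file ends up with a pair of lines like these (the leading-dot entry covers subdomains):

aol.com:smtp.comcast.net
.aol.com:smtp.comcast.net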