Recent posts (max 20) - Browse or Archive for more

Temporary File Descriptors (FDs) in Bash

Found this useful the other day when I needed an FD and not a file name… in my example, I was testing some python code where C++ was doing the heavy lifting and was going to pass an open FD to python.

exec {tempo}<> scratchfile   # bash picks a free FD (>= 10) and stores it in $tempo
echo ${tempo}                # show which FD we got
ls -halF /proc/self/fd       # scratchfile now shows up in the FD table
command --fd=${tempo}        # hand the open FD to whatever needs it
exec {tempo}>&-              # close it when done
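A self-contained sketch of the trick (the scratch path is arbitrary): children inherit the descriptor, so the data stays reachable through procfs even after the file name is gone.

```shell
scratch=$(mktemp)
exec {tempo}<> "$scratch"      # open read/write; bash assigns a free FD
rm -f "$scratch"               # the name can even go away - the FD stays valid
echo "payload" >&"${tempo}"    # write through the FD
cat "/proc/self/fd/${tempo}"   # cat inherits the FD and re-opens it: prints "payload"
exec {tempo}>&-                # close it
```

This is exactly the "FD, not file name" situation: by the time `cat` runs, nothing on disk has a name anymore.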


Migrated Again

Wow. I've had this blog since 2002. Waaay back in the day, it was some proprietary format, and I migrated it 13 years ago to trac.

At that time, it was on a dedicated Red Hat box that also acted as my firewall.

At some point since then, I migrated it to vmware - see that topic for some of the problems.

Originally that VM image ran on my CentOS server, and at some point it was migrated to my Windows 7 desktop.

Since it was in a VM, I could always snapshot and really beat on it. I had files as far back as 2001 and GPG signatures for my RPMs from the Red Hat OS before the Fedora/RHEL split. Over the years, I've managed to beat it into submission to the point I had it running Fedora 31; of course, upgrading is built in now with dnf system-upgrade. But that's not the point. Fedora 32 broke Python 2, and trac isn't there yet. (Side note - the VM has been called webmail for years, but I uninstalled SquirrelMail and moved to Google hosting many years ago.)

With the COVID-19 quarantine, I decided to migrate this blog to containers so I can just use a pre-defined trac container and go on my merry way. Hopefully less maintenance in the future.

So, that's where it is now. As I tested the site from my wife's iPad (on cellular) I just had to marvel at how data travels to get this post out of my house:

(you) <=> (Cloudflare) <=> OpenWrt <=> Win10 Pro <=> Hyper-V Docker VM <=> Container [Ephemeral] <=> Docker Volume

WiFi Checker for OpenWrt

It's been a while since I dumped anything here… hope I can post…

I have OpenWRT on my home router and it's using a secondary chipset to run a guest-only network that sometimes randomly drops out. I've been told they no longer support it in that manner, which explains a lot. Anyway, in case my config gets nuked, here's what I did:

# cat /etc/crontabs/root
* * * * * /etc/custom_scripts/
# cat /etc/custom_scripts/
if iwinfo | grep -iq ragnarok_guest; then
  rm -f /tmp/guest_down
  exit 0
fi
if [ -e /tmp/guest_down ]; then
  echo "$(date) -- REBOOTING" > /var/log/guest_check
  reboot  # second consecutive failure: bouncing the radio didn't help
fi
touch /tmp/guest_down
echo "$(date) -- DOWN" > /var/log/guest_check
#service network stop
#service network start
wifi down
wifi up
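Off the router, the two-strike flag-file logic can be exercised with stubs (everything here is a stand-in; on the real router the check is iwinfo and the actions are wifi down/up or reboot):

```shell
statefile=$(mktemp -u)    # stand-in for /tmp/guest_down

check() {   # $1 simulates the radio state: "up" or "down"
  if [ "$1" = up ]; then
    rm -f "$statefile"; echo OK; return        # healthy: clear the flag
  fi
  if [ -e "$statefile" ]; then
    rm -f "$statefile"; echo REBOOT; return    # would reboot the router
  fi
  touch "$statefile"; echo BOUNCE              # would do wifi down/up
}

check down   # prints: BOUNCE (first failure)
check down   # prints: REBOOT (second consecutive failure)
check up     # prints: OK
rm -f "$statefile"
```

The flag file is what turns a one-off glitch into "bounce first, reboot only if it happens twice in a row."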

Spoofing an RPM's host name

I came up with this method years ago, and thought I posted it, but couldn't find it. So here's the latest incarnation:

# This spoofs the build host for both 32- and 64-bit applications

# To use:
# 1. Add libmyhostname as a target that calls rpmbuild
# 2. Add "myhostnameclean" as a target to your "clean"
# 3. Call rpmbuild or any other program with $(SPOOF_HOSTNAME) prefix

MYHOSTNAME_MNAME:=$(shell uname -m)
MYHOSTNAME_PWD:=$(shell pwd)

.PHONY: myhostnameclean
.SILENT: myhostnameclean
.IGNORE: myhostnameclean
myhostnameclean:
        rm -rf myhostname

# Linux doesn't support explicit 32- vs. 64-bit LD paths like Solaris, but
# does accept a literal "$LIB" in the path to expand to lib vs lib64. So we need
# to make our own private library tree myhostname/lib{,64} to feed to rpmbuild.
.PHONY: libmyhostname
.SILENT: libmyhostname
libmyhostname: /usr/include/gnu/stubs-32.h /lib/
        mkdir -p myhostname/lib{,64}
        $(MAKE) -I $(MYHOSTNAME_PWD) -s --no-print-directory -C myhostname/lib   -f $(MYHOSTNAME_PWD)/Makefile $(libmyhostname) MYHOSTARCH=32
        $(MAKE) -I $(MYHOSTNAME_PWD) -s --no-print-directory -C myhostname/lib64 -f $(MYHOSTNAME_PWD)/Makefile $(libmyhostname) MYHOSTARCH=64

.SILENT: /usr/include/gnu/stubs-32.h /lib/
/usr/include/gnu/stubs-32.h:
        echo "You need to install the 'glibc-devel.i686' package."
        echo "'sudo yum install glibc-devel.i686' should do it for you."
        exit 1

/lib/:
        echo "You need to install the 'libgcc.i686' package."
        echo "'sudo yum install libgcc.i686' should do it for you."
        exit 1

.SILENT: libmyhostname $(libmyhostname) libmyhostname_$(MYHOSTNAME_MNAME).o libmyhostname_$(MYHOSTNAME_MNAME).c
$(libmyhostname): libmyhostname_$(MYHOSTNAME_MNAME).o
        echo "Building $(MYHOSTARCH)-bit version of hostname spoofing library."
        gcc -m$(MYHOSTARCH) -shared -o $@ $<

libmyhostname_$(MYHOSTNAME_MNAME).o: libmyhostname_$(MYHOSTNAME_MNAME).c
        gcc -m$(MYHOSTARCH) -fPIC -rdynamic -g -c -Wall $<

libmyhostname_$(MYHOSTNAME_MNAME).c:
        echo "$$libmyhostname_body" > $@

define libmyhostname_body
#include <string.h>
#include <asm/errno.h>

int gethostname(char *name, size_t len) {
        const char *myhostname = "buildhost_$(MYHOSTNAME_MNAME).myprojectname.proj";
        if (len < strlen(myhostname))
                return -ENAMETOOLONG;
        strcpy(name, myhostname);
        return 0;
}
endef
export libmyhostname_body

Chrome on CentOS 7

So my Google Chrome on my CentOS 7 box updated, and SELinux doesn't like it.

There's an official bug for it, but I don't know when the fix will propagate down.

Until then, here's what I did, with some plaintext showing what was happening:

$ sudo grep chrome /var/log/audit/audit.log | grep setcap | audit2allow
#============= chrome_sandbox_t ==============

#!!!! This avc is allowed in the current policy
allow chrome_sandbox_t self:process setcap;

$ sudo grep chrome /var/log/audit/audit.log | grep setcap | audit2allow -M chrome.pol
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i chrome.pol.pp

$ cat chrome.pol.te

module chrome.pol 1.0;

require {
        type chrome_sandbox_t;
        class process setcap;
}

#============= chrome_sandbox_t ==============

#!!!! This avc is allowed in the current policy
allow chrome_sandbox_t self:process setcap;

$ sudo semodule -i chrome.pol.pp

Learning git

I've been using subversion for at least a decade now. I was going to switch to a git-based project so wanted to learn. I think I finally get the differences, especially with the "index." These are the resources I used:

And finally:

Goodbye Netflix

Wow. I just checked, and I've had Netflix since 08/10/2001. Over thirteen years. Longer than my marriage. Two houses ago. I'm down to the cheapest one-at-a-time plan, and I still get around to it every three or four months.

I think it's time to say goodbye.

But here's how they get you to stay:

Based on your 1698 ratings, this is the list of movies and TV shows you've seen. 

Yeah… thirteen and a half years of data that I don't want to lose! And that's my main account - I have two other profiles too. I searched the 'net for a solution, and came up with a lot. None worked. GreaseMonkey ones. PHP ones. None worked.

This was the closest:

But I don't have a Mac, so I needed to capture that info manually. Ninety pages of ratings. So I used DownThemAll!. I opened its download manager manually, and for the URL I used a [1:90] batch range - I had determined 90 with some trial and error. This saved all the pages to files named MoviesYouveSeen.htm and then MoviesYouveSeen_NNN.htm.

I modified the script to read these HTML files instead of launching Safari. After that, the ratings were off - every movie in the file would have the rating of the first in the file. So I tweaked that. For some reason, some don't show a rating in the HTML, even when these were supposedly rated. Some are "No Interest," but others, I just don't know what happened. So I have it output 0.0 if it couldn't figure it out - a 99% solution.

Here are my changes from the gitlab (17 Jan 2014) version:

  • .py

     #!/bin/env python
     # Original @
     ...
     Scrape a user's Netflix movie ratings by automating a Safari browsing
     session (with the user already logged in).  The ratings are written
     ...
     from jinja2 import Template
     from lxml import html
    +import re
    +
    +fname_regex = re.compile(r'(\w+?)_?(\d+)?\.(\w+)')
    +rating_regex = re.compile(r'You rated this movie: (\d)\.(\d)')
     ...
     # AppleScript functions asrun and asquote (presently unused) are from:
     ...
         All values are strings.
         """
         # Load the page, grab the HTML, and parse it to a tree.
    -    script = ASTemplate.render(URL=url, DTIME=dtime)
    -    reply = asrun(script)
    +    reply = ''
    +    try:
    +      with open(url) as infile:
    +        for str_ in infile:
    +          reply += str_
    +    except IOError:
    +      return [], None
         tree = html.fromstring(reply)
         rows = tree.xpath('//table[@class="listHeader"]//tr')
     ...
                # changing from page to page.  For info on XPath for such cases, see:
                #
                # rating = data[3].xpath('//span[@class="stbrMaskFg sbmfrt sbmf-50"]')[0].text_content()
    -            rating = data[3].xpath('//span[contains(concat(" ", normalize-space(@class), " "), " stbrMaskFg ")]')[0].text_content()
    -            rating = rating.split(':')[1].strip()  # keep only the number
    +            rating_cut = rating_regex.match(data[3].text_content())
    +            rating = '0.0'
    +            if rating_cut:
    +               rating = "%s.%s" % (,
             info.append((title, year, genre, rating))
     ...
        # Next URL to load:
    -    next_elem = tree.xpath('//li[@class="navItem paginationLink paginationLink-next"]/a')
    -    if next_elem:
    -        next_url = next_elem[0].get('href')
    -    else:  # empty list
    -        next_url = None
    +    fname_cut = fname_regex.match(url)
    +    if fname_cut:
    +      if is None:
    +        num = 0
    +      else:
    +        num =
    +      next_url = "%s_%03.f.%s" % (, int(num) + 1,
    +    else:
    +      print "Regex failed."
    +      next_url = None
         return info, next_url
     ...
     # Use this initial URL for DVD accounts:
    -url = ''
    +url = 'MoviesYouveSeen.htm'
     # Use this initial URL for streaming accounts:
     # url = ''
This renders a lot of the script useless, but there's no benefit in making the diff larger so I didn't trim anything else.

Here's when I ran it across my "TV Queue" account - yeah they're not all TV, sometimes I accidentally rated things with the wrong profile:

$ ./
Scraping MoviesYouveSeen.htm
1:  Garmin Streetpilot 2610/2650 GPS (2003) [Special Interest] - 1.0
2:  Six Feet Under (2001) [Television] - 0.0

Scraping MoviesYouveSeen_001.htm
3:  The Thief of Bagdad (1924) [Classics] - 4.0
4:  The Tick (2001) [Television] - 4.0
5:  Michael Palin: Pole to Pole (1992) [Documentary] - 0.0
6:  Kung Fu: Season 3 (1974) [Television] - 0.0
7:  Danger Mouse (1981) [Children & Family] - 3.0
8:  Farscape (1999) [Television] - 3.0
9:  Helvetica (2007) [Documentary] - 3.0
10:  Hogan's Heroes (1965) [Television] - 3.0
11:  The Lion in Winter (2003) [Drama] - 3.0
12:  Monty Python: John Cleese's Best (2005) [Television] - 3.0
13:  Sarah Silverman: Jesus Is Magic (2005) [Comedy] - 3.0
14:  Stephen King's It (1990) [Horror] - 3.0
15:  Superman II (1980) [Action & Adventure] - 3.0
16:  Superman: The Movie (1978) [Classics] - 3.0
17:  Tom Brown's Schooldays (1951) [Drama] - 3.0
18:  An Evening with Kevin Smith 2 (2006) [Comedy] - 0.0
19:  Crimewave (1986) [Comedy] - 2.0
20:  Huff (2004) [Television] - 2.0
21:  Aqua Teen Hunger Force (2000) [Television] - 1.0
22:  The Boondocks (2005) [Television] - 1.0

Scraping MoviesYouveSeen_002.htm
23:  Ricky Gervais: Out of England (2008) [Comedy] - 5.0
24:  Robot Chicken (2005) [Television] - 5.0
25:  Robot Chicken Star Wars (2007) [Comedy] - 5.0
26:  Rome (2005) [Television] - 5.0
27:  Scrubs (2001) [Television] - 5.0
28:  Stewie Griffin: The Untold Story (2005) [Television] - 5.0
29:  Spaced: The Complete Series (1999) [Television] - 0.0
30:  Alice (2009) [Sci-Fi & Fantasy] - 0.0
31:  Best of the Chris Rock Show: Vol. 1 (1999) [Television] - 4.0
32:  The Critic: The Complete Series (1994) [Television] - 4.0
33:  Dilbert (1999) [Television] - 4.0
34:  An Evening with Kevin Smith (2002) [Comedy] - 4.0
35:  John Adams (2008) [Drama] - 4.0
36:  King of the Hill (1997) [Television] - 4.0
37:  The Lone Gunmen: The Complete Series (2001) [Television] - 4.0
38:  Neverwhere (1996) [Sci-Fi & Fantasy] - 4.0
39:  Robin Hood (2006) [Television] - 4.0
40:  The Sand Pebbles (1966) [Classics] - 4.0
41:  The Sarah Silverman Program (2007) [Television] - 4.0
42:  The Silence of the Lambs (1991) [Thrillers] - 4.0

Scraping MoviesYouveSeen_003.htm
43:  Alias (2001) [Television] - 5.0
44:  Alien (1979) [Sci-Fi & Fantasy] - 5.0
45:  Band of Brothers (2001) [Drama] - 5.0
46:  Bleak House (2005) [Drama] - 5.0
47:  Brisco County, Jr.: Complete Series (1993) [Television] - 5.0
48:  Code Monkeys (2007) [Television] - 5.0
49:  Coupling (2000) [Television] - 5.0
50:  Dead Like Me (2003) [Television] - 5.0
51:  Deadwood (2004) [Television] - 5.0
52:  Family Guy (1999) [Television] - 5.0
53:  Family Guy: Blue Harvest (2007) [Television] - 5.0
54:  Firefly (2002) [Television] - 5.0
55:  Futurama (1999) [Television] - 5.0
56:  Futurama the Movie: Bender's Big Score (2007) [Television] - 5.0
57:  The Great Escape (1963) [Classics] - 5.0
58:  Greg the Bunny (2002) [Television] - 5.0
59:  How I Met Your Mother (2005) [Television] - 5.0
60:  MI-5 (2002) [Television] - 5.0
61:  My Name Is Earl (2005) [Television] - 5.0
62:  Police Squad!: The Complete Series (1982) [Television] - 5.0

Scraping MoviesYouveSeen_004.htm

Thanks a ton to the original author, and the full version is attached here for posterity.

IP Address in Python (Windows)

From StackOverflow, my changes:

  • Py3 compat (no big deal)
  • Added DHCP support
  • Use CurrentControlSet (saner IMHO)

Upgrading to Fedora 21

These are mostly my personal note-to-self, but in case it helps somebody else…

fedup - I've used this a few times, and man does it make upgrades easy. I had some key problems but those were easy enough to fix.

My web server was "down," and when I looked at iptables I saw all this new stuff about zones, etc. /etc/sysconfig/iptables looked good, but when I ran system-config-firewall-tui it told me "FirewallD is active, please use firewall-cmd" - of course, now I see that in the FAQ (I used nonproduct).

It looks like they have a new Firewall Daemon. In the end, all I had to do was:

firewall-cmd --add-service=http --zone=public --permanent
firewall-cmd --reload

There are other commands I used in between like --get-services to see what was predefined and --list-services to ensure http was added after the reload.

Since it's in a VM, I do have a screenshot of the happy hot dog that mysteriously isn't in my /var/log/fedup.log file. ;)

C++ Move Semantics

I finally read this and wanted to keep it for posterity.

Awesome Stack Overflow answers (the top two).

Some BOOST examples

I've been trying to learn BOOST lately, and wrote a little program to convert 16-bit sampled data to 8-bit. In theory, you can do other, similar transformations (e.g. scaling) the same way.

As always, any feedback is welcome. Especially a "proper" C++ way of creating a file of a given size.

// Templated function to convert from one size int to another with automatic scaling factors.
// Doesn't do so well with signed <=> unsigned!
// g++ -o 16_to_8 -lboost_iostreams

// Boost concepts shown/used:
// integer traits (compile-time min/max of a type)
// memory mapped I/O
// (unused) make_iterator_range (converts C-style array into STL-compliant "Forward Range" concept)
// minmax_element (does 3N/2 compares instead of 2N)
// (unused) bind to create on-the-fly lambda-like function for std::transform
// lambda in std::transform
// replace_last for a string

#include <boost/integer_traits.hpp>
#include <iostream>
#include <algorithm>
#include <cstdio>   // fopen/fseek/fwrite for preallocating the output file
#include <stdint.h> // int16_t/int8_t/int64_t
//unused #include <boost/bind.hpp>
#include <boost/lambda/lambda.hpp>
#include <boost/lambda/casts.hpp>
#include <boost/iostreams/device/mapped_file.hpp>
#include <boost/exception/diagnostic_information.hpp>
#include <boost/algorithm/minmax_element.hpp>
#include <boost/algorithm/string/replace.hpp>

using namespace std;

// Generic read/writer
template <typename intype, typename outtype>
bool ReadWriteScale(const string &infilename, const string &outfilename) {
  using namespace boost;
  using namespace /*boost::*/iostreams;
  using namespace lambda;

  mapped_file_source infile;
  mapped_file_sink outfile;

  try {;
  } catch (boost::exception &err) {
        cerr << diagnostic_information(err);
        return false;
  }
  size_t insize = infile.size(); // bytes
  if (0 == insize) {
        cerr << "The input file seems empty." << endl;
        return false;
  }
  // Create a new file for output. We'll use old-school C to create the file,
  // and then memory mapped I/O to let the kernel handle paging.
  FILE *outfile_raw = fopen(outfilename.c_str(), "w");
  fseek(outfile_raw, insize / sizeof(intype) * sizeof(outtype) - 1, SEEK_SET);
  const char nil = '\0';
  fwrite(&nil, 1, 1, outfile_raw);
  fclose(outfile_raw); // flush so the file really has its final size before mapping
  try {;
  } catch (boost::exception &err) {
        cerr << diagnostic_information(err);
        return false;
  }
  // Get min/max
  const intype *indata = reinterpret_cast<const intype *>(;
  cout << "Scanning " << insize << " bytes." << endl;
  insize /= sizeof(intype); // Change scale of insize from here on out
  //iterator_range<const intype*> iter = make_iterator_range(indata, indata+insize);
  //pair<const intype*, const intype*> result = minmax_element(iter.begin(), iter.end());
  pair<const intype*, const intype*> result = minmax_element(indata, indata+insize);
  // Figure out scaling. TODO: Handle unsigned properly (check const_min == 0)
  const int64_t min64 = static_cast<int64_t>(*result.first);
  const int64_t max64 = static_cast<int64_t>(*result.second);
  //cout << "Factors: " << static_cast<double>(integer_traits<outtype>::const_min) / min64 << ", " << static_cast<double>(integer_traits<outtype>::const_max) / max64 << endl;
  const double scaling_factor = std::min( static_cast<double>(integer_traits<outtype>::const_min) / min64, static_cast<double>(integer_traits<outtype>::const_max) / max64 );
  const double newmin = min64 * scaling_factor;
  const double newmax = max64 * scaling_factor;
  cout << "Scaling factor = " << scaling_factor << " = " << min64 << ":" << max64 << " => " << newmin << ":" << newmax <<
        " [" << static_cast<double>(static_cast<outtype>(newmax)) << "]" << endl;
  // Do the scaling
  outtype *outdata = reinterpret_cast<outtype *>(;
  // transform(indata, indata+insize, outdata, bind(multiplies<double>(), _1, scaling_factor));
  transform(indata, indata+insize, outdata, ll_static_cast<outtype>(_1 * scaling_factor));
  cout << "Processed " << insize << " data points." << endl;
  return true;
}

int main (int argc, char **argv) {
        if (argc != 2) {
                cout << "Usage: " << argv[0] << " filename.16data" << endl;
                return 1;
        }
        // Change file name
        string outfilename(argv[1]);
        boost::replace_last(outfilename, ".16data", ".8data");
        // Returning zero is success in Unix
        return !ReadWriteScale<int16_t, int8_t>(argv[1], outfilename);
}
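As for that open question about creating a file of a given size: the C code above does it by seeking to the last byte and writing one NUL. From the shell, the same (sparse) preallocation is a one-liner:

```shell
truncate -s 1024 /tmp/sized.bin     # allocate (sparsely) exactly 1024 bytes
stat -c %s /tmp/sized.bin           # prints: 1024
rm -f /tmp/sized.bin
```

In C++ terms you would still need something like the fseek/fwrite trick (or ftruncate) before memory-mapping, since the mapping cannot grow the file.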

Fixing sudo timeouts

So at work, a script needs to download a set of large RPMs and then install them. This is in a Makefile, so if sudo fails, the whole target fails and you have to hunt down the temporary directory or re-run. sudo's timeout can be changed, but only by modifying /etc/sudoers, not via a command-line option. So if the user walks away during the download and doesn't come back within five minutes (the default) of the download completing, no dice.

Here's the applicable section of the Makefile:

# We are passed the RPM_BASE_NAME - we will pull down the entire matching directory

TMP_DIR:=$(shell mktemp -d)

  echo Fetching $(RPM_BASE_NAME) RPMs...
  # -r=recursive, -nv=non-verbose (but not quiet), -nd=make no directories, -nH=make no host names
  # -P=move to path first, -Arpm=accept only RPM files
  wget -r -nv -nd -nH -P $(TMP_DIR) -Arpm -np $(DLSITE)/$(RPM_BASE_NAME)/
  # If you walk away and come back, your download was wasted after sudo's five minute timeout!
  sudo -n ls /tmp > /dev/null 2>/dev/null || read -n1 -sp "sudo credentials have expired. Press any key when you are ready to continue." dontcare
  echo " "
  sudo -p "Please enter %u's password to install required RPMs: " rpm -Uvh $(TMP_DIR)/*rpm
  -rm -rf $(TMP_DIR)
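The same `sudo -n` probe works outside make; here is the pattern in isolation, with sudo stubbed out as a shell function so the sketch runs anywhere (drop the stub for real use):

```shell
sudo() { [ "$1" != "-n" ]; }   # stub: pretend cached credentials have expired

if ! sudo -n true 2>/dev/null; then
  echo "sudo credentials have expired - prompting before the install step"
  # read -n1 -s -p "Press any key when you are ready to continue." dontcare
fi
```

The point of `sudo -n` ("non-interactive") is that it fails immediately instead of prompting, so you can detect the expired timestamp and pause *before* kicking off a command that would die at the password prompt.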

Raspberry Pi and a BT Keyboard

I bought a Favi Bluetooth keyboard to use with the Raspberry Pi.

I wish I documented better how I got it running. I followed somebody else's page, but don't have the details…

Some of the root user's history:

update-rc.d -f dbus defaults
apt-get install bluetooth bluez-utils blueman
hcitool scan
hcitool dev
hcitool lescan
hcitool inq
hciconfig -a
bluez-simple-agent hci0  54:46:6B:xx:xx:xx
bluez-test-device trusted  54:46:6B:xx:xx:xx yes
bluez-test-input connect 54:46:6B:xx:xx:xx

I added /etc/init.d/bluetooth restart to /etc/rc.local

I possibly added blacklist hci_usb to /etc/modprobe.d/raspi-blacklist.conf

I can't get it to work again, so maybe some day…

Scripting konsole and tabs

At work I want to launch two programs in separate tabs in konsole from a script, so I whipped this one up:


checkfile() {
  if [ ! -f $1 ]; then
    echo "could not find $1"
    exit 99
  fi
  echo "OK"
}

# Check for App1 XML
echo -n "Checking for App 1 XML... "
checkfile ${DEVROOT}/${XMLA}

# Check for App2 XML
echo -n "Checking for App 2 XML... "
checkfile ${DEVROOT}/${XMLB}

# Launch Konsole
echo -n "Launching konsole... "
K=$(dcopstart konsole-script)

[ -z "${K}" ] && exit 98
# Create second tab and resize
SDA=$(dcop $K konsole currentSession)
SDB=$(dcop $K konsole newSession)
dcop $K $SDA setSize 121x25

# Let bash login, etc.
sleep 1

# Rename the tabs
dcop $K $SDA renameSession "App 1"
dcop $K $SDB renameSession "App 2"

# Start services, letting user watch
echo -n "starting app1... "
dcop $K konsole activateSession $SDA
dcop $K $SDA sendSession "echo -ne '\033]0;DEV (${hostname})\007' && clear && starter $XMLA"
sleep 2
echo -n "starting app2... "
dcop $K konsole activateSession $SDB
dcop $K $SDB sendSession "echo -ne '\033]0;DEV (${hostname})\007' && clear && starter $XMLB"
echo done.

The funky echo commands will set the application title to "DEV (hostname)" while the tab title is set with renameSession.

Delay Before RPM Installs

I wanted the user to have a chance to abort an install, for the ones that insist on using rpm vs yum.

%if <something>
%global fake %%(for i in {1..100}; do echo " "; done)
%{echo:%{fake}WARNING: RPM will not properly require vendor drivers! }
%global fake %%(for i in {1..5}; do echo " "; done)
%global fake %(sleep 10)
%endif

Have RPM Require an Environmental Variable

$(warning RHEL4 is no longer supported!)
$(warning The driver RPMs no longer support RHEL4 and the RPM that is generated will not properly require them.)
$(warning If you are OK with that, then you can re-run with TARGET_RHEL_OVERRIDE=1)
$(error RHEL4 is no longer supported!)

Python deepcopy broken

Well, that was annoying… spent a long time last Friday and today to find out that Python 2.7's copy.deepcopy doesn't play well with xml.dom.minidom. See this bug report.

The workaround is to use "doc.cloneNode(True)" instead.
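A quick illustration of the workaround (run here under Python 3 just to show cloneNode's deep-copy behavior; the original bug was specific to Python 2.7's copy.deepcopy):

```shell
python3 - <<'EOF'
from xml.dom.minidom import parseString

doc = parseString('<config><item>x</item></config>')
copy = doc.cloneNode(True)              # deep clone instead of copy.deepcopy(doc)
copy.documentElement.setAttribute('changed', 'yes')

# The original tree is untouched:
print(doc.documentElement.hasAttribute('changed'))   # prints: False
print(copy.documentElement.hasAttribute('changed'))  # prints: True
EOF
```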

RPM Compression

The other day at work I noticed that at the end of an RPM build, it seemed to hang. It turns out, it was compressing the files to create the installer. I'd rather not have it do that if I am building development versions since they only get scp'd to a lab environment.

Even if it does compress, I'd like to have feedback as to what it is doing. So I added these lines to my .spec file. They should be easy enough to tweak and add to a system-level macros file.

Background: We had "dev" appended to the version number already, so this was the easiest way to do it:

%if 0%(echo %{rpm_version} | grep -c dev)
%define _source_payload w0.gzdio %%{echo:Skipping source RPM compression...}
%define _binary_payload w0.gzdio %%{echo:Skipping binary RPM compression...}
%else
%define _source_payload w9.gzdio %%{echo:Maximum source RPM compression...}
%define _binary_payload w9.gzdio %%{echo:Maximum binary RPM compression...}
%endif
%global _enable_debug_packages 0
%global _debug_package %{nil}

So now my RPMs are about four times as large as they were, but are built a lot faster.

Email your new IP address with TomatoUSB

So my router is now TomatoUSB and I wanted an alert when the IP changed. Sure, I could probably put something local on the router, but where's the fun in that?

So I put together a quick python script to drop me an email if the IP ever changes. Yes, TomatoUSB supports various Dynamic DNS services, but doesn't seem to natively support "email me."

So on the DDNS setup page, I chose the "Custom URL" service, and I put in "" as the URL (the internal address of an Apache server running WSGI).

I have a custom config file /etc/httpd/conf.d/wsgi_IP as follows:

WSGIScriptAlias /IPCHECKS /var/www/wsgi/IP.wsgi

<Directory "/var/www/wsgi/">
  WSGIApplicationGroup %{GLOBAL}
  Order deny,allow
  Deny from all
  Allow from 192 127 ::1
</Directory>

HOPEFULLY that means none of you can change what I think my IP address is. ;)

Here's the actual python script (/var/www/wsgi/IP.wsgi):

from __future__ import print_function
from cgi import parse_qs, escape
import socket
import smtplib

# This is RevRagnarok's ugly IP checker.
# Tomato (firmware) will post to us with a "new_ip" parameter
# At this point, I want to see manually that the IPs change, not have it autoupdate
# Note: I had to enable HTTP sending email in SELinux:
# setsebool -P httpd_can_sendmail 1

def application(environ, start_response):
    parameters = parse_qs(environ.get('QUERY_STRING', ''))
    if 'new_ip' in parameters:
        newip = escape(parameters['new_ip'][0])
    else:
        newip = 'Unknown!'
    start_response('200 OK', [('Content-Type', 'text/html')])
    # Look up DNS values
    oldip = socket.gethostbyname('') # Yes, IPv4 only
    # Compare
    changed = ''
    if newip != oldip:
        changed = 'IP changed from {0} to {1}.'.format(oldip, newip)
    if changed:
        e_from = '[email protected]'
        e_to = ['firewall-report[email protected]']
        e_msg = """Subject: IP Address change detected

{0}""".format(changed)
        # I considered a try/catch block here, but then what would I do?
        smtpObj = smtplib.SMTP('localhost')
        smtpObj.sendmail(e_from, e_to, e_msg)
    else:
        changed = 'IP is {0} (unchanged).'.format(newip)
    return [changed]

And don't forget, if you use SELinux, fix permissions on the script, and allow the webserver to send email:

[root@webserver wsgi]# ls -Z IP.wsgi 
-rw-r--r--. root root system_u:object_r:httpd_sys_script_exec_t:s0 IP.wsgi
[root@webserver wsgi]# setsebool -P httpd_can_sendmail 1

Optional requirements for RPM

(Wow, it's been over a year since my last blog post…)

At work, I have a program that optionally could use the Qt libraries, but I didn't want my RPM to actually require like it wanted to. And RPM doesn't seem to support an "OptRequires" or something similar… so here's my hack of a solution.

I put this script named find-requires into my project's "build" subdirectory so it is included in the "Source" tarball that rpmbuild will be using. I wrote it to be expandable.

#!/usr/bin/perl -w
use strict;
use IPC::Open2;

# This quick script will run the native find-requires (first parameter)
# and then strip out packages we don't want listed.

open2(\*IN, \*OUT, @ARGV);
print OUT while (<STDIN>);
close(OUT); # send EOF so the real find-requires can finish
my $list = join('', <IN>);

# Apply my filter(s):
$list =~ s/^*?$//mg;

print $list;

Then put in your .spec file this, which will call our script with the original script as the first parameter:

# Note: 'global' evaluates NOW, 'define' allows recursion later...
%global _use_internal_dependency_generator 0
%global __find_requires_orig %{__find_requires}
%define __find_requires %{_builddir}/%{?buildsubdir}/build/find-requires %{__find_requires_orig}