jumping into the perl time machine

Today I had to monkey-patch the HP iLO configuration utility to disable SSL host verification.  It wasn’t terribly difficult, thanks to the excellent documentation of the IO::Socket::SSL module on CPAN.  Changing stuff in a vendor script is normally not ideal, but the script provides no interface for overriding the module’s default host verification behavior, so you gotta do what you gotta do.

Also, diving into Perl makes me a bit nostalgic for the days when Perl was my go-to scripting language for pretty much everything.  The various shortcuts the language allows are nice when you’re working on a quick one-off job, and the fact that so much stuff is just available in the native namespace (no import re for regexes here, just built-in!) is somewhat convenient.  Then again, the OO support is awful, there’s no real exception handling ( || die doesn’t count!), and typing sigils all the time gets a bit annoying.  Hurrah for python?  =D



How did I not know about the lsblk command?  Prior to learning about it, I would typically do something like ‘cat /proc/partitions’ to discover what block devices/partitions were on a system.  Not super proud.  =D
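For posterity, the old way and the new way side by side (output will obviously vary from machine to machine):

```shell
# the old way: raw major/minor numbers and block counts
cat /proc/partitions

# the new way: a readable tree of devices with sizes and mountpoints
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```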

upstart subshells

Ran into an interesting problem with an upstart service config recently.  An engineering team was running a third-party daemon via an upstart script, but they needed to capture the stdout and stderr of the daemon for debugging purposes.  After thinking for a bit, I just appended ‘ | logger -t <somelabel>’ to the exec line in the config, and everything seemed to work.
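The resulting job config looked something like this (the job name and daemon path here are invented, not the real ones):

```
# /etc/init/mydaemon.conf -- hypothetical job name and path
description "third-party daemon with stdout/stderr sent to syslog"

exec /usr/local/bin/mydaemon | logger -t mydaemon
```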

Of course, what I didn’t realize is that if any shell metacharacters are included in the exec statement (eg. the pipe), upstart will automatically spawn a subshell and run the daemon inside it.  The pid of the subshell is then what upstart tracks as the pid of the service.  This is fine in theory: when upstart stops the job and sends a SIGTERM to the parent shell, the child process receives a SIGHUP and should shut down.

In this case, the daemon apparently handled SIGHUP and continued running.  This means upstart thinks the daemon is stopped when it is actually still running, which isn’t good news for anyone, as multiple daemon instances are now possible.

A possible solution is to simply use file redirection (eg. something like > /var/log/somelog), but a new logfile means a logrotate update, splunk forwarder updates, and possibly other headaches.  It would be really nice to use logger and let syslog handle the messages if possible.

Thankfully, I stumbled across this very clever approach.  This solution creates a temporary fifo to negotiate a shared file descriptor that the given daemon can write to and logger can read from.  Very elegant IMHO.
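The gist of that approach, sketched as an upstart script stanza (the fifo path and names here are made up):

```
script
    # create a named pipe and attach logger to its read end
    mkfifo /tmp/mydaemon-log-fifo
    ( logger -t mydaemon </tmp/mydaemon-log-fifo & )

    # point our own stdout/stderr at the write end; once the fd is
    # open, the fifo node itself can be removed
    exec >/tmp/mydaemon-log-fifo 2>&1
    rm /tmp/mydaemon-log-fifo

    # exec replaces the shell, so upstart tracks the daemon's real pid
    exec /usr/local/bin/mydaemon
end script
```

The final exec is what makes this work cleanly with upstart: the shell is replaced by the daemon instead of lingering as a parent, so pid tracking behaves correctly.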


allocating memory in c

I recently had a need to generate arbitrary memory usage with arbitrary binary names for a monitoring effort I was working on. My typical go-to for generating resource usage is the excellent stress utility, but this didn’t meet the arbitrary binary name criteria, and I’m pretty sure the memory workers it spins up just consume as much memory as possible.

So, I wrote a quick and dirty C program to do the job. My first attempt was a simple malloc(arbitrary_bytes) followed by a sleep (shut up, I said it was quick and dirty =D), but interestingly, the binary ran with almost no memory usage. The reason is that while malloc reserves the requested memory, it never actually touches it, and the kernel doesn’t commit physical pages to an allocation until they are written. Switching the call to calloc(num_elements, element_size), which zero-fills the allocation, produced the desired behavior. Huzzah!

assignment in python

So I burned about an hour untangling a problem in a python script. The root cause turned out to be that I was using a template dict to populate config variables, but because variables in python are really just references to objects, you need to be careful when doing this. For example:

    BASELINE = {
        "option1": None,
        "option2": None,
        "option3": None,
    }

    for i in instances:
        i = BASELINE
        i['option1'] = figure_out_option()
In the first iteration of that for loop, the names i and BASELINE refer to the same dict object, so changes made through i affect BASELINE as well. Whoops. To get around this, you can either use the copy function in the copy module, or, in this case with a dict, just use the copy method, eg:

    for i in instances:
        i = BASELINE.copy()
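One wrinkle worth noting: both copy.copy and dict.copy make shallow copies, so if the template dict contains nested structures, those inner objects are still shared. copy.deepcopy handles that case (the dict contents below are contrived for illustration):

```python
import copy

# a template with a nested dict
BASELINE = {"option1": None, "limits": {"cpu": 1}}

shallow = BASELINE.copy()
shallow["option1"] = "configured"   # top-level key: template unaffected
shallow["limits"]["cpu"] = 8        # nested dict is still shared!

print(BASELINE["option1"])   # None
print(BASELINE["limits"])    # {'cpu': 8} -- whoops

deep = copy.deepcopy(BASELINE)
deep["limits"]["cpu"] = 2            # deepcopy broke the link
print(BASELINE["limits"])            # still {'cpu': 8}
```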

no escape from sigkill

Today I learned that you apparently cannot handle a sigkill. Not like you personally can’t handle it, more like it is not possible to handle a sigkill on a POSIX-compliant OS (and possibly even non-compliant ones?). That explains the error I was seeing when trying to set a handler for sigkill in python. =D

Sigterm is sigkill’s kinder, gentler cousin that can be handled. Sigterm is also the default signal sent by kill (and its cousin killall), but this can be overridden with -SIGKILL (or the infamous -9, though IMHO the former is more readable). Cool. =)
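A quick python illustration of the difference (nothing here is specific to my script): installing a handler for sigterm works fine, while the same call for sigkill is rejected by the kernel:

```python
import signal

def handler(signum, frame):
    print(f"caught signal {signum}")

# sigterm is handleable
signal.signal(signal.SIGTERM, handler)

# sigkill is not -- the OS refuses to install a handler
try:
    signal.signal(signal.SIGKILL, handler)
except OSError as err:
    print(f"no dice: {err}")
```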

driven by daemons

Until very recently, I never realized how much work went into creating a proper daemon program for a linux system. I had always assumed that sysvinit and the distro’s init function library (eg. daemon() in redhat/cent) handled all of the details. I wrote a simple python “daemon” to handle some task I was working on, and was surprised to find that the “daemon” was about as un-daemonlike as it could be. A ‘service start ‘ immediately started displaying the stdout/stderr of the program, and the service command never returned until a ctrl-C was entered. Whoops.

So as it turns out, there’s a fair amount you have to do in the program itself to properly daemonize. Specifically, you need to double-fork: fork once and call setsid so the child becomes a session and process group leader, then fork again so the surviving process is no longer a session leader and can never reacquire a controlling terminal, with both parent processes exiting along the way. There are a few other details as well, which thankfully are pretty well documented. Also, it’s awesome that python has native bindings for every required bit of functionality in the os module. Really makes things easier.
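The core of it in python looks something like this (a minimal sketch of the classic double-fork recipe, not a production-ready daemon):

```python
import os

def daemonize():
    """Detach from the controlling terminal via the classic double fork."""
    if os.fork() > 0:
        os._exit(0)        # first parent exits; child is reparented to init
    os.setsid()            # child becomes session and process group leader
    if os.fork() > 0:
        os._exit(0)        # session leader exits, so the surviving process
                           # can never reacquire a controlling terminal
    os.chdir("/")          # don't keep any filesystem pinned
    os.umask(0)
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):   # detach stdin/stdout/stderr from the terminal
        os.dup2(devnull, fd)
```

Call daemonize() early in main; everything after it runs in the fully detached grandchild process.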

For a while, I just ran my daemon using upstart rather than sysvinit, which seemed to magically handle all of the details with zero daemonization effort in the code itself (plus, the respawn feature is really nice). However, it was a rather cool experience digging into the gritty details of how daemon processes actually work.