Tuesday, June 25, 2013

Getting a notification when a long-running command finishes.

My job often involves running a command to do something, realising that it's going to take longer than I expected, and going off to do something else in the meantime. Conversely, I'll run something that should take a long time, go do something else, and find that it stopped with an error after a very short time without my noticing.

For a long time I've wanted to set things up so that any command that takes more than 20s to run would send me an IM when it finished. What I didn't want was to have to prefix all my commands with some notification wrapper, because that's annoying and I would probably forget in exactly the cases where I need it.

It sounds pretty simple, but getting it to work turned out to be quite fiddly.

Delivering an instant message.

There's a program called sendxmpp that will deliver an instant message over XMPP. It's written in Perl and definitely works on Linux, and probably also on Mac and Windows if you really want. Every few months I would try to make it work, and when I finally got it working, it would stop working again the next day. Apparently Google's XMPP service is particularly finicky. I appear to have made it work for real now. I honestly don't know what I did differently; it may even be something that changed when Google deprecated XMPP in favour of Hangouts. As I write this, I see that it no longer works for one of my accounts but is working fine with another! If there's a more reliable option, I'd love to know.

To make this work you'll need a second Google account because sending a message from yourself to yourself doesn't seem to work. So I created an account that I'm going to use only for sending IMs. After creating that account, I invited it to chat and made sure that the accounts can chat with each other through the normal Google chat interface.

Next I created a config file for it (you'll need to substitute your own account details)
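sendxmpp reads its credentials from ~/.sendxmpprc; a sketch of setting it up (the address and password here are placeholders, not my real details):

```shell
# ~/.sendxmpprc holds the sending account's credentials, one account per
# line in "address password" form. Replace both with your own; the file
# must not be readable by anyone else or sendxmpp may refuse to use it.
cat > ~/.sendxmpprc <<'EOF'
my.new.im.username@gmail.com my-secret-password
EOF
chmod 600 ~/.sendxmpprc
```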

Now to send a notification to myself I just do

echo some message | sendxmpp -t -u my.new.im.username -o gmail.com other.username@gmail.com

and the message shows up. I also get a bunch of Perl warnings but such is life.

Finally I wrapped that up as a command called notify and I can just pipe a message into that.
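The wrapper is nothing more than a one-liner on my PATH; something like this (assuming ~/bin is on your PATH, and with the same placeholder account names as above):

```shell
# Install a tiny wrapper called "notify" in ~/bin.
mkdir -p ~/bin
cat > ~/bin/notify <<'EOF'
#!/bin/sh
# Send whatever arrives on stdin as an IM to my main account.
exec sendxmpp -t -u my.new.im.username -o gmail.com other.username@gmail.com
EOF
chmod +x ~/bin/notify
```

After that, `echo build finished | notify` does the right thing.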

Spotting that a command is taking a long time to finish.

I hate unix shells: they're a horrible mishmash of special cases, obscure features and crap that's only that way for backwards compatibility. What I've done is a hack in bash, and it's not perfect (it seems like it would be slightly less hacky in zsh, but that wouldn't fix the oddities described below). The core is this
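Roughly, it's just bash's DEBUG trap plus PROMPT_COMMAND (a sketch; the function names are ones I made up and are defined below):

```shell
# bash_pre_cmd runs via the DEBUG trap just before each command line
# executes; bash_post_cmd runs via PROMPT_COMMAND just before the next
# prompt is printed, i.e. just after the command finishes.
trap 'bash_pre_cmd' DEBUG
PROMPT_COMMAND='bash_post_cmd'
```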

What that does is run bash_pre_cmd every time you hit enter to kick off a command, and bash_post_cmd every time the command prompt is printed (printing the prompt indicates that the command has finished; there are various ways for this not to be true, hence the oddities below, but in normal usage it holds). So all that remains is sensible definitions for those two commands
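A sketch of the definitions, assuming the notify command from earlier and a 20-second threshold (adjust both to taste):

```shell
bash_pre_cmd() {
    # The DEBUG trap also fires for PROMPT_COMMAND itself, so only
    # record a start time if we don't already have one.
    if [ -z "$cmd_start" ]; then
        cmd_start=${SECONDS:-0}
    fi
}

bash_post_cmd() {
    local elapsed=$(( SECONDS - ${cmd_start:-$SECONDS} ))
    unset cmd_start    # re-arm the DEBUG trap for the next command line
    if [ "$elapsed" -ge 20 ]; then
        # history 1 gives the command that just finished.
        echo "finished after ${elapsed}s: $(history 1)" | notify
    fi
}
```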

You'll probably want to customize the callback for your own preferences.

Put all that into your .bashrc and cross your fingers


If you suspend a process, the prompt gets printed, which may trigger a notification. If you then resume the process with fg, this code will see fg as the command that just completed when it finishes. Also, we can't notify on commands that took 0 seconds, otherwise just hitting enter would trigger a notification, and the IM would refer to the last command in the history. Interactive commands obviously take a long time and so cause useless notifications; I'll probably whitelist a few common ones like man and less. There are probably other problems too, but for normal type-run-complete shell use it works just fine.

Basically shells are shitty and not very flexible. Doing this correctly would involve stashing away some state with each command when it starts and then getting access to that state when the command finishes; that would require changes to the shell itself.

Friday, June 14, 2013

Julian Assange's "secret" meeting with Eric Schmidt

http://wikileaks.org/Transcript-Meeting-Assange-Schmidt.html has a transcript and a 3 hour recording of a meeting between Julian Assange and Eric Schmidt (and some others) as part of the research for Eric's recent book.

It's long but it's quite interesting (nothing terribly secret though).

I like Assange's goal of using content-based addressing for everything. It's not an original idea of his, but his point about detecting newspaper articles that silently disappear was a good one. If he can help to popularise it, that's great.

I completely agree with his call for "scientific journalism", that is, journalism where all claims are backed up with references. He says that anything else should be dismissed as not journalism at all. There used to be the excuse that there wasn't enough room on paper, but that's gone now. George Monbiot (http://www.monbiot.com/) has been including full references in the web versions of his articles for years now, but I haven't seen anyone else doing it.

I thought he didn't give a convincing response to the question of what happens if governments and corporations flood Wikileaks with thousands of computer-generated but plausible fake leaked documents. That whole conversation got a bit messed up and he seemed to miss the core of the question.

Possibly the most shocking thing was that Eric Schmidt didn't know what simulated annealing was :)

Anyway, I've linked the MP3 to this post so if you want to listen to it conveniently through a podcast player, you can subscribe to my feed to get it (there are no other podcasts in my feed).