A few years ago, I took over "maintaining" DVswitch from Ben Hutchings, after Ben realized he no longer had the time to work on it properly.

After a number of years, I have to admit that I haven't done a very good job. Not because I didn't want to work on it, but mainly because I don't have enough time to keep DVswitch working against the numerous moving targets it depends on; the APIs of libav and of liblivemedia change quickly enough that just making sure everything remains compilable and in working order is quite a job.

DVswitch is used by quite a few conferences; DebConf, FOSDEM, and the CCC are just a few examples, and I know of at least three more.

Most of these (apart from DebConf and FOSDEM) maintain local patches that I've been wanting to merge into the upstream version of dvswitch. However, my time is limited, and over the past few years I haven't been able to get dvswitch into a state where I felt confident uploading it to Debian unstable for a release. One step we took toward that goal was to remove the liblivemedia dependency (which implied removing support for RTSP sources). Unfortunately, the resulting situation still wasn't good enough: libav has changed its API enough that current versions of DVswitch compiled against current versions of libav will segfault if you try to do anything useful.

I must admit to myself that I don't have the time and/or skill set to maintain DVswitch on an acceptable level all by myself. So, this is a call for help:

If you're using DVswitch for your conference and want to continue doing so, please talk to us. The first things we'll need to do:

  • Massage the code back into working order (when compiled against current libav)
  • Fix my buildbot instance so that my grand plan of having nightly build/test runs against libav master actually works.
  • Merge the useful patches from the SUSE and CCC people
  • Properly release dvswitch 0.9 (or maybe 1.0?)
  • Party!

See you there?

Posted Wed Apr 16 18:24:00 2014

I'm not much of a reader anymore these days (I used to be, as a young teenager), but I still like to read something every once in a while. When I do, I generally prefer books that can be read cover to cover in one go—because that allows me to immerse myself in the book so much more.

John Scalzi's book is... interesting. It talks about a bunch of junior officers on a starship of the "Dub U" (short for "Universal Union"), which flies off into the galaxy to Do Things. This invariably involves away missions, and on these away missions invariably people die. The title is pretty much a dead giveaway; but in case you didn't guess, it's mainly the junior officers who die.

What I particularly liked about this book is that after the story pretty much wraps up, Scalzi doesn't actually let it end there. First there's a bit of a tie-in that has the book end up talking about itself; after that, there are three epilogues in which the author considers what this story would do to some of its smaller characters.

All in all, a good read, and something I would not hesitate to recommend.

Posted Fri Apr 11 17:25:58 2014

While reading up on dnssec-keygen and other related stuff in order to update my puppet bind module to support DNSSEC transparently, I accidentally stumbled across something in the BIND Administrator's Reference Manual called GSS-TSIG. Intrigued, I set out to learn more, and found that this is terribly easy to set up with BIND:

  • Create a DNS/<nameserver name> principal inside your realm. Note that the "nameserver name" really needs to be the name of the primary name server as specified in the SOA record of the zone; this is in contrast to pretty much every other use of Kerberos out there, where it has to be the name of the machine as specified by the PTR record of the IP address you're talking to.
  • Export that principal to a keytab file which is readable by named. I used /etc/bind/bind.keytab for this purpose. Make sure this file is owned by the user that named runs as.
  • Add a tkey-gssapi-keytab stanza to the options block in your named.conf file (or named.conf.local on Debian). This stanza takes a string specifying the filename of your keytab file. With the above example, that becomes tkey-gssapi-keytab "/etc/bind/bind.keytab";
  • In the zone block of the relevant zones, add an update-policy statement for each of the users that you want to allow to make updates; e.g., update-policy { grant exampleuser@EXAMPLE.COM zonesub any; }; will allow a user with principal exampleuser@EXAMPLE.COM to perform updates for this zone.
  • exampleuser@EXAMPLE.COM can now use nsupdate -g to perform any GSSAPI-authenticated updates of the zone file.
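Putting the steps above together, the relevant named.conf fragments could look roughly like this (the zone name, file path, and principal are placeholders; adjust for your setup):

```
options {
    // ... your other options ...

    // keytab containing the DNS/<nameserver name> principal
    tkey-gssapi-keytab "/etc/bind/bind.keytab";
};

zone "example.com" {
    type master;
    file "/var/lib/bind/db.example.com";

    // allow exampleuser@EXAMPLE.COM to update any record below the zone apex
    update-policy {
        grant exampleuser@EXAMPLE.COM zonesub any;
    };
};
```

After reloading named, the user runs kinit to obtain a ticket and then nsupdate -g; the GSSAPI negotiation should happen transparently from there.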

There are a few more interesting features; e.g., krb5-self can be used to allow the host/machine@REALM principal to update the A record for machine, and there are ways to specify ACLs by wildcard. For more information, see the BIND Administrator's Reference Manual, chapter 6, in the section entitled "Dynamic Update Policies".

Posted Thu Mar 27 22:21:15 2014

Kerberos is a great protocol for single sign-on authentication. It's supported by many protocols, allowing you to avoid entering a password for each and every one of them; instead, the protocols behind the scenes (not "just" Kerberos, but also the things that embed tickets, such as GSSAPI, or the things that embed GSSAPI, such as SASL or SPNEGO) use your ticket-granting ticket to ask for security credentials for the service you're trying to use, magic happens, and you're authenticated. I blogged about Kerberos before (even if it was ages ago); since then, I've not only used it on my own systems, but also on the systems of various customers.

One thing I've learned in that time, however, is that most web application developers have a bad case of NIHilism when it comes to authentication. Most webservers that I've seen support a wide range of authentication methods, including things like certificate-based authentication, one-time password modules, and, yes, Kerberos. Yet almost no webapp out there will look at the magic variables those webservers set to indicate that we're authenticated, instead reinventing the wheel through webforms and various other stupid means. Sigh.

So that means: no Kerberos authentication for webapps. Worse, if the application has no way to pass authentication on to something external, users will now have to learn another password: one for Kerberos, and one for the webapp. And, probably, one for this other webapp too -- because once you add one webapp, people expect you to add more of them.

Well, mostly. In some cases, webapps do have ways to externalize authentication. In most cases this means "store passwords in a database", or "try authenticating against this other service here".

When "this other service here" is an IMAP server, then all you need to do is make sure cleartext authentication on the IMAP server eventually ends up trying to authenticate against the Kerberos server, and you're all set. When "this other service here" is an LDAP server, however, you're out of luck. Right?

It turns out that no, you're not. I recently learned that OpenLDAP can, in fact, check "simple" bind requests by checking some other service, and that this other service can be a Kerberos realm. Doing so is called "Pass-through authentication" in the OpenLDAP documentation, and this is how you do it:

  • Add a "userPassword" attribute to the user who will authenticate, with a value of the form "{SASL}principal@KERBEROS.REALM". That is, take the user's principal name, tack "{SASL}" in front of it, and put that in the userPassword attribute.
  • Make sure you have saslauthd installed and running, with the kerberos5 mechanism active. In Debian, that means you have to install sasl2-bin, and edit /etc/default/saslauthd, so it has the variable START set to yes (yuck), and the variable MECHANISMS set to kerberos5.
  • Create a file /etc/ldap/sasl/slapd.conf, and add the following contents:

mech_list: plain
pwcheck_method: saslauthd

This directs the SASL libraries, as loaded by slapd, to talk to saslauthd when verifying a password. Make sure slapd has the correct permissions to access the saslauthd unix domain socket; on Debian, that means you need to add the "openldap" user to the "sasl" group:

`adduser openldap sasl`

(obviously that won't be active until the next slapd restart)

And that's it; if you now try a simple bind against the LDAP directory, and enter your Kerberos password, you should be in. If it doesn't work, try running "testsaslauthd"; if that works, it means the error is in your slapd configuration. If it doesn't, then the problem is in saslauthd.
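As an illustration of the first step, the userPassword change can be applied with an LDIF like the following (the DN, principal, and realm here are placeholders for your own values):

```
dn: uid=exampleuser,ou=people,dc=example,dc=com
changetype: modify
replace: userPassword
userPassword: {SASL}exampleuser@EXAMPLE.COM
```

Feed that to ldapmodify as an administrative user, and from then on simple binds for that entry will be checked through saslauthd rather than against a stored hash.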

Some notes:

  • this is not the same thing as using Kerberos authentication for LDAP. OpenLDAP has the ability to allow SASL binds for LDAP, which is way more interesting (since it allows authentication to actually be done by Kerberos rather than by "entering a password"); instead, with this method, it is possible to verify a password for a simple bind against a Kerberos password.
  • Since this can't use hashed passwords, this method is inherently insecure. Only use it if all else fails, and for the love of $DEITY, at least make sure all the network connections which are going to contain such passwords are encrypted with SSL or TLS or similar. Otherwise, everyone who can sniff anything on your network will learn all the Kerberos passwords, which is a very very bad thing.

Oh, and if you're a webapp developer: please please please make it easy for me to use an external authentication mechanism. This isn't hard; all you need is a separate page that will read the magic variables (probably REMOTE_USER if you're using apache), set a session variable or whatever, and then redirect to your normal "we've just logged in" page. ikiwiki gets this right; you can too!

(for added bonus points, have some way to declare a mapping from REMOTE_USER values to internal, readable, usernames, but it's not too crucial)
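To illustrate how little work this is for a webapp, here's a minimal sketch in Python (as a WSGI callable; the session handling via a cookie is a hypothetical stand-in for whatever mechanism your app already uses):

```python
# Minimal WSGI sketch of an external-auth entry point: trust the
# REMOTE_USER variable that the webserver set after authenticating the
# user (e.g. via Kerberos/SPNEGO), record it, and redirect to the
# application's normal post-login page.  The cookie-based "session" is
# a placeholder for a real app's own session machinery.

def external_login(environ, start_response):
    user = environ.get('REMOTE_USER')
    if not user:
        # The webserver didn't authenticate anyone; refuse.
        start_response('401 Unauthorized',
                       [('Content-Type', 'text/plain')])
        return [b'Not authenticated\n']
    # Remember the user and send them on to the usual logged-in page.
    headers = [
        ('Set-Cookie', 'session-user=%s; HttpOnly' % user),
        ('Location', '/logged-in'),
    ]
    start_response('303 See Other', headers)
    return [b'']
```

That's the whole trick: one URL that the webserver protects with its own authentication, which then hands control back to the application.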

Posted Wed Mar 26 13:35:53 2014

I know, I know, I should resist saying this. But every time I see it, I wonder why it happens, and I should just get this off my chest.

The difference between "its" and "it's" is something that many people, even native English speakers, seem to get wrong. Yet it's so extremely simple that I, a non-native English speaker, have been baffled by that common mistake for as long as I can remember.

The apostrophe (') in a sentence usually means that something at the location of that apostrophe has gone out to lunch. In this particular case, it means that the " " and the "i" in the phrase "it is" were hungry. So rather than "it is", we contract that to "it's", and allow the space and the i to enjoy their meal while the apostrophe keeps their seats warm.

Practically, what that means is that every time you want to write "it's", you should consider whether you can replace it with "it is" without making the sentence sound like junk. If you can't, you probably meant to write "its" rather than "it's".

For instance, consider the following sentence:

"It's not possible to repair this car within the budget that its owner wants to pay"

It's perfectly possible to say "it is not possible" here, so we need to have the apostrophe keep a seat warm for the space and the i.

It makes no sense to say "it is owner", unless you're trying to speak a rather deformed form of English, so that makes it a possessive pronoun (similar to "hers", "his", "theirs", etc.) and you shouldn't use an apostrophe.

Speaking of food, it's time for lunch now.

Posted Wed Mar 19 12:04:17 2014

"Given a Debian system with just the set of essential packages installed, find a single package name that, together with its dependencies, will increase the used diskspace of the system by as much as possible."

My entry: openarena.

Off to bed, now.

Posted Tue Mar 11 22:53:21 2014

My talk proposal for LOADays 2014 has been accepted. That makes the fifth consecutive time I'll be speaking there. I like this little event; it's small but always seems to come with some good content. And as someone who's ended up being a system administrator for most of my career, I can always find something interesting to talk about.

This time around, my talk will be about debian-installer automation. I plan to explain a bit how d-i is structured internally so as to help people understand better how things work, and will then run a little demo on how we used preseeding to install the 45 laptops for FOSDEM with (almost) no user input.

I've also decided to use libreoffice impress this time around, so I can try out the impress remote. Let's see how it goes.

See you there?

Posted Sun Mar 9 08:04:47 2014

If you can read this, that means my transition to ikiwiki has happened.

I've been interested in ikiwiki ever since Joey first wrote about it back in 2006, but it took me until just recently to finally bite the bullet and migrate to ikiwiki.

The plan was to retain everything, including comments, and I've mostly achieved that. Still, a few things were lost along the way:

  • There were a few blog entries that had special characters in their filename which blosxom didn't care about but ikiwiki didn't like. Since there were only a handful of them, I've ignored those for now. It does mean those are now dead links, but I decided it's not worth bothering about much.
  • My previous setup had a fairly intricate way to choose a different stylesheet, which I'm fairly sure nobody (but me) ever used. To keep things simple, I've decided not to port that over to ikiwiki. Instead, only the most recent 'grass' stylesheet has been retained (though it required some changes to accommodate the fact that elements have different class names and ids in ikiwiki than in my own setup, obviously)
  • My own comment system had threading support, which ikiwiki does not seem to have. I did consider adding it, but I've decided not to care much.

I should note that at this point I've only ported the blog itself, not the rest of my website. While ikiwiki can be used for that as well, the parts outside of my blog are fairly static and don't change all that often. In addition, those bits are set up in an even uglier way, and I didn't want to fix them up just yet.

Things I've learned:

  • ikiwiki does not track timestamps across git moves. Not sure whether this is a bug. It meant I couldn't rename the "special-characters entries" that I mentioned above, since the dates would be all wrong.
  • You can cherry-pick an entire git branch onto another branch (without having to specify each and every commit individually), if you need or want to avoid a merge commit. Apparently ikiwiki got confused by the merge commit, and we ran into the same issue as above; but by cherry-picking the commits from the git-svn branch they had been on, things were done correctly.
Posted Sat Mar 1 13:30:03 2014

"Nuclear precision weapons"

Now that the madness is mostly over, I have some time to catch up on reading my newspaper.

Last week, one article in De Standaard talked about the nuclear weapons on Belgian soil which officially didn't exist (until wikileaks proved otherwise) and which are now apparently going to be modernized. The newer model would be "nuclear precision weapons".

Ignoring the question of whether today's world still requires nuclear bombs (this may or may not be true, I don't care), I question the logic which leads to that phrase. A nuclear weapon is a weapon of mass destruction. By definition, a weapon of mass destruction causes collateral damage. By definition, a precision weapon is a weapon that does not cause collateral damage—or, at the very least, where every effort is made to limit the amount of collateral damage.

Even the very first nuclear bombs were capable of destroying entire cities. Today's nuclear weapons, even the smaller ones, are far more powerful than those.

Don't get me wrong: I'm not a peace activist. In fact, I have been contracted by companies who produce military equipment, and don't feel bad about that. But to claim that it is possible to create "nuclear precision weapons" is to deceive oneself. A nuclear weapon is not very precise.

Posted Wed Feb 5 15:10:22 2014

Waking up

Today is the monday after FOSDEM 2014, and I'm slowly waking up.

As the person in charge of the video team, having spent a whole week preparing and a whole weekend stressing, I'm spending today slowly waking up from hibernation and returning gear to various rental companies.

At FOSDEM 2013, we had recorded five rooms. Even though we did lose a few talks, I felt pretty good about the whole thing, and I think we did pretty well then. It had been my plan to increase the number of rooms to be recorded from 5 to a reasonably higher number—say, 10 or so—but someone convinced me to do all of them.

I'll readily admit I was afraid. Scratch that: I was terrified. Petrified that we wouldn't be able to pull it off. For previous editions, we'd kept the number conservative, since video work is a high-risk business. If something goes wrong with regular volunteer work (say, the heralding volunteer doesn't show up), you find a replacement, or you have a staff member introduce the speaker, and the show will go on.

Not so with video work: if something minor goes wrong, you will most likely lose content. If something minor goes wrong in one devroom, I'll deal with it. If something minor goes wrong in 20% of devrooms, that's 4 to 5 devrooms, and I can't deal with it. Moreover, finding enough volunteers to manage the video for five rooms was a challenge last year; I couldn't even begin to imagine how to do so for 22 of them.

We did work out an approach that didn't seem too far-fetched, and that might work: why not look for volunteers from the devrooms themselves? But then, they couldn't be expected to have the required experience, so we'd have to do a lot of handholding.

The week before FOSDEM, four of us sat down in an office in Mechelen, and started preparing. We had a lot to do; and when you do, more jobs keep being added to the pile; TODO lists tend to grow longer, not shorter. Installing servers, testing cameras, setting twinpact dipswitches, configuring laptops. Changing a few tests in the config management system, and having to go over the laptops again. Buying extra gear. Buying yet more extra gear, several times. Buying cardboard boxes when it appeared that the plastic foldable ones we'd ordered wouldn't make it in time, and unloading the pallet of them when it did, after all, so we could load them into the van with all of the other gear. Laminating instruction sheets. Receiving shipments of rented gear. Calling another supplier last minute when one of our other suppliers had contacted us to let us know one of their cameras that they'd promised us had broken down, and couldn't be repaired in time. Calling that last-minute supplier again when we found out that some of the cameras we'd rented could do HDV only (and not DV), and we needed to find more than the ones we'd already asked for. Discovering that his cameras were of the same problematic model and so also couldn't be used. The feeling of relief when he called me a few hours later and told me he'd managed to scrounge up enough cameras to help us out after all.

And then it was time to go to the ULB and start setting up everything. On friday, we had a little bit of help, but did have to do most of it ourselves. After setting up everything in every room and testing as much as possible within the time constraints that we had, we picked up the most expensive bits of the gear, brought it down to a safe place, and went home.

On saturday and sunday, for me it was mostly a matter of handing out gear to various devrooms, running around to fix various issues, and then in the evening receiving boxes back, using a checklist to ensure everything expected was in the box (and that things not needed would remain in the room, or be sent back).

It was exhausting, to the extent that on saturday afternoon, I had to take a bit of time off so I could go and rest.

I think it's safe to say that something like this hasn't been done before. FOSDEM is a huge conference; most conferences with multiple rooms have somewhere between 5 and 10 of them. The fact that we have so many more, and the fact that we record all of them, puts FOSDEM video work in a class of its own, to the extent that the professional cameramen I talked to in order to get the right cameras would incredulously ask "what do you need that for?" when I gave them the numbers we needed.

As such, I wasn't expecting, nor aiming for, perfection. With video work it is close to impossible to attain perfection anyway; I told multiple people during the event that I would be happy if we got 85% of talks recorded at acceptable quality. Though it is much too soon to say for certain, my gut feeling tells me that we've probably achieved that.

One thing we hadn't planned for was streaming. During the past three years, FOSDEM enlisted the help and sponsorship of flumotion to get streaming going. Unfortunately, that did not work out this year; and since we already had far too much on our plates with everything else, we decided to forgo streaming and focus on recording only.

A few days before the event, however, Steinar Gunderson took it upon himself to fix that. While we couldn't support him a great deal, we could give him access to the secondary laptops (which weren't otherwise doing much anyway) on which he could then do his thing. This was mostly transparent to us; we did communicate to some extent (e.g., when a machine that had been down had been fixed and was working again), but he mostly did his thing while we did ours. Full details, for those who want them, are in the linked blog post.

Today, then, I spent most of my time handling hardware: waiting for other members of FOSDEM to drive the van up to my office; waiting for the laptop rental company to retrieve most of the laptops; waiting for two of the camera rental companies to retrieve their cameras and tripods; driving over to the final camera rental company to return their cameras and tripods; and finally, driving up to the ULB to retrieve the last laptops, which had been copying files all night.

As I write this, some video files have already been uploaded to the FOSDEM video archive. However, these are extremely low-quality renditions of video snippets that need to be reviewed; they are not ready yet for public consumption. When those files exist, expect an announcement through the main FOSDEM website.

Posted Mon Feb 3 19:54:08 2014