My talk proposal for LOADays 2014 has been accepted. That makes the fifth consecutive time I'll be speaking there. I like this little event; it's small but always seems to come with some good content. And as someone who's ended up being a system administrator for most of my career, I can always find something interesting to talk about.
This time around, my talk will be about debian-installer automation. I plan to explain a bit of how d-i is structured internally, to help people better understand how things work, and will then run a short demo of how we used preseeding to install the 45 laptops for FOSDEM with (almost) no user input.
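For those who've never used it: preseeding boils down to feeding debconf the answers to the installer's questions ahead of time, usually from a file fetched over the network at boot. A minimal sketch of such a file (the values here are purely illustrative, not what we actually used for the FOSDEM laptops):

    # answer the questions d-i would otherwise ask interactively
    d-i debian-installer/locale string en_US
    d-i keyboard-configuration/xkb-keymap select us
    d-i netcfg/choose_interface select auto
    # where to fetch packages from
    d-i mirror/http/hostname string ftp.be.debian.org
    d-i mirror/http/directory string /debian
    # partition the disk without asking
    d-i partman-auto/method string regular
    d-i partman/confirm boolean true
    d-i partman/confirm_nooverwrite boolean true
    # extra packages, and don't wait for a keypress at the end
    d-i pkgsel/include string openssh-server
    d-i finish-install/reboot_in_progress note

Point the installer at such a file with a boot parameter like preseed/url=http://example.com/preseed.cfg, and it will answer those questions all by itself.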
See you there?
If you can read this, that means my transition to ikiwiki has happened.
I've been interested in ikiwiki ever since Joey first wrote about it back in 2006, but it took me until just recently to finally bite the bullet and migrate.
The plan was to retain everything, including comments, and I've mostly achieved that. Still, a few things were lost along the way:
- There were a few blog entries that had special characters in their filenames, which blosxom didn't care about but ikiwiki didn't like. Since there were only a handful of them, I've ignored those for now. It does mean they are now dead links, but I decided that wasn't worth worrying about much.
- My previous setup had a fairly intricate way to choose a different stylesheet, which I'm fairly sure nobody but me ever used. To keep things simple, I've decided not to port that over to ikiwiki. Instead, only the most recent 'grass' stylesheet has been retained (though it required some changes to accommodate the fact that elements have different class names and IDs in ikiwiki as compared to my own stuff, obviously).
- My own comment system had threading support, which ikiwiki does not seem to have. I did consider adding it, but I've decided not to care much.
I should note that at this point I've only ported the blog itself, not the rest of my website. While ikiwiki can be used for that as well, the parts outside of my blog are fairly static and don't change all that often. In addition, those bits are set up in an even uglier way, and I didn't want to fix them up just yet.
Things I've learned:
- ikiwiki does not track timestamps across git moves. Not sure whether this is a bug. It meant I couldn't rename the "special-characters entries" that I mentioned above, since the dates would be all wrong.
- You can cherry-pick an entire git branch onto another branch (without having to specify each and every commit individually) if you need or want to avoid a merge commit; see the sketch below. Apparently ikiwiki got confused by the merge commit, and we ran into the same issue as above; cherry-picking the commits from the git-svn branch they were on made things work correctly.
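A minimal sketch of that cherry-pick trick, with hypothetical branch names:

    # on the target branch, replay every commit that is on the
    # git-svn branch but not yet here; no merge commit is created
    git checkout master
    git cherry-pick master..git-svn

Each commit is applied individually, so the result is a linear history rather than a merge, which is exactly what kept ikiwiki from getting confused.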
"Nuclear precision weapons"
Now that the madness is mostly over, I have some time to catch up on reading my newspaper.
Last week, an article in De Standaard talked about the nuclear weapons on Belgian soil which officially didn't exist (until WikiLeaks proved otherwise) and which are now apparently going to be modernized. The newer model would be "nuclear precision weapons".
Ignoring the question of whether today's world still requires nuclear bombs (this may or may not be true; I don't care), I question the logic that leads to that phrase. A nuclear weapon is a weapon of mass destruction. By definition, a weapon of mass destruction causes collateral damage. By definition, a precision weapon is a weapon that does not cause collateral damage, or at the very least one where every effort is made to limit the amount of collateral damage.
Even the very first nuclear bombs were capable of destroying entire cities. Today's nuclear weapons, even the smaller ones, are far more powerful than those.
Don't get me wrong: I'm not a peace activist. In fact, I have been contracted by companies who produce military equipment, and don't feel bad about that. But to claim that it is possible to create "nuclear precision weapons" is to deceive oneself. A nuclear weapon is not very precise.
Today is the Monday after FOSDEM 2014, and I'm slowly waking up.
I'm in charge of the video team, and after spending a whole week preparing followed by a whole weekend stressing, today is a day of slowly emerging from hibernation and returning gear to various rental companies.
At FOSDEM 2013, we recorded five rooms. Even though we did lose a few talks, I felt pretty good about the whole thing, and I think we did rather well. My plan had been to increase the number of recorded rooms from 5 to a reasonably higher number, say 10 or so, but someone convinced me to do all of them.
I'll readily admit I was afraid. Scratch that, I was terrified. Petrified that we wouldn't be able to pull it off. For previous editions, we'd kept the number conservative, since video work is a high-risk business: if something goes wrong with other volunteer work (say, the heralding volunteer doesn't show up), you find a replacement, or you have a staff member introduce a speaker, and the show will go on.
Not so with video work: if something minor goes wrong, you will most likely lose content. If something minor goes wrong in one devroom, I'll deal with it. If something minor goes wrong in 20% of devrooms, that's 4 to 5 devrooms, and I can't deal with it. Moreover, finding enough volunteers to manage the video for five rooms was a challenge last year; I couldn't even begin to imagine how to do so for 22 of them.
We did work out a way that didn't seem too far-fetched, and that might work: why not look for volunteers from the devrooms themselves? But then, they couldn't be expected to have the required experience, so we'd have to do a lot of handholding.
The week before FOSDEM, four of us sat down in an office in Mechelen and started preparing. We had a lot to do; and when you do, more jobs keep being added to the pile; TODO lists tend to grow longer, not shorter. Installing servers, testing cameras, setting twinpact dipswitches, configuring laptops. Changing a few tests in the config management system, and having to go over the laptops again. Buying extra gear. Buying yet more extra gear, several times. Buying cardboard boxes when it appeared that the foldable plastic ones we'd ordered wouldn't make it in time, and unloading the pallet of them when it did arrive after all, so we could load them into the van with all of the other gear. Laminating instruction sheets. Receiving shipments of rented gear. Calling another supplier at the last minute when one of our suppliers contacted us to let us know that one of the cameras they'd promised us had broken down and couldn't be repaired in time. Calling that last-minute supplier again when we found out that some of the cameras we'd rented could do HDV only (and not DV), so we needed to find more than the ones we'd already asked for. Discovering that his cameras were of the same problematic model and so also couldn't be used. The feeling of relief when he called me a few hours later and told me he'd managed to scrounge up enough cameras to help us out after all.
And then it was time to go to the ULB and start setting up everything. On Friday, we had a little bit of help, but had to do most of it ourselves. After setting up everything in every room and testing as much as possible within the time constraints we had, we picked up the most expensive bits of the gear, brought them down to a safe place, and went home.
On Saturday and Sunday, for me it was mostly a matter of handing out gear to various devrooms, running around to fix various issues, and then in the evening receiving boxes back, using a checklist to ensure everything expected was in the box (and that things not needed would remain in the room, or be sent back).
It was exhausting, to the extent that on Saturday afternoon I had to take a bit of time off so I could go and rest.
I think it's safe to say that something like this hasn't been done before. FOSDEM is a huge conference; most conferences with multiple rooms don't have more than somewhere between 5 and 10 of them. The fact that we have so many more, and that we record all of them, puts FOSDEM video work in a class of its own, to the extent that the professional cameramen I talked to in order to get the right cameras would incredulously ask "what do you need that for?" when I gave them the numbers we needed.
As such, I didn't expect, nor was I aiming for, perfection. With video work it is close to impossible to attain; I told multiple people during the event that I would be happy if we reached 85% of talks recorded at acceptable quality, and though it is much too soon to say for certain, my gut feeling tells me that we've probably achieved that.
One thing we hadn't planned for was streaming. For the past three years, FOSDEM enlisted the help and sponsorship of Flumotion to get streaming going. Unfortunately, that did not work out this year; and since we already had far too much on our plates with everything else, we decided to forgo streaming and focus on recording only.
A few days before the event, however, Steinar Gunderson took it upon himself to fix that. While we couldn't support him a great deal, we could give him access to the secondary laptops (which weren't otherwise doing much anyway) on which he could then do his thing. This was mostly transparent to us; we did communicate to some extent (e.g., when a machine that had been down had been fixed and was working again), but he mostly did his thing while we did ours. Full details, for those who want them, are in the linked blog post.
Today, then, I spent most of my time handling hardware: waiting for other members of FOSDEM to drive the van up to my office; waiting for the laptop rental company to retrieve most of the laptops; waiting for two of the camera rental companies to retrieve their cameras and tripods; driving over to the final camera rental company to return their cameras and tripods; and finally, driving up to the ULB to retrieve the last laptops, which had been copying files all night.
As I write this, some video files have already been uploaded to the FOSDEM video archive. However, these are extremely low-quality renditions of video snippets that need to be reviewed; they are not ready yet for public consumption. When those files exist, expect an announcement through the main FOSDEM website.
On the init system debate
The decision on which init system to use has plagued Debian for a fairly long time now. We've had people talking about it as early as DebConf11 in Banja Luka, and we still haven't got a decision. Instead, the discussion has been going round in circles, in typical bikeshed fashion.
Originally, my opinion on the subject was "sysv-rc is good enough for everyone, and it's portable, which the alternatives aren't". By now, however, I've become convinced that the first part of that statement isn't true, and that as a result a switch of the default init system is indeed appropriate; that we should indeed switch to "something else", at least for our Linux ports.
Once I came to that conclusion, my opinion on what, exactly, we should use turned out to be nonexistent. That is, I don't care. Anything will be fine.
What is bothering me, though, is that things keep dragging on and on and on and on and on and on and on, ad infinitum.
We should just pick one and be done with it, dammit. The fact that sysv-rc is replaced by "something else" does not mean people must stop using sysv init scripts. Even if the maintainer of some random package "foo" refuses to accept patches to support sysv-rc for his package, there's nothing stopping anyone from providing a package "sysv-support" containing init scripts (and nothing else) for sysv-rc.
The same goes for a hypothetical upstart-support or systemd-support package, of course: if you want to continue living in the stone age and keep using sysv-rc forever, then there is nothing stopping you. Even if we decide to use something else for our Linux ports.
Ideally the non-Linux ports would move to something more modern too; but there really really really isn't any reason why it would be a problem if they didn't.
Can we stop painting the bikeshed now? If we don't, soon we'll have to call it "hunk of paint with some bikes inside" rather than "bikeshed".
I set up buildbot at a customer last Monday. Since that resulted in me understanding the thing, I also set up an instance for nbd. The need for this had become apparent after the last upload to unstable failed on a disconcertingly large number of architectures.
Apart from the fact that it uses Python (and wants me to write Python in its config file, grrr), it's a fairly nice system: pretty lightweight (it runs on barbershop, which is a QNAP TS419U, for crying out loud), yet still flexible enough that I don't have to jump through hoops to do normal things the developers hadn't thought about.
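To give an idea of what such a config file looks like, here's a minimal master.cfg sketch for building something like nbd. The repository URL, slave name, and password are made up for the example, and the exact import paths may differ between buildbot versions:

    # master.cfg -- minimal buildbot 0.8-style configuration (sketch)
    from buildbot.buildslave import BuildSlave
    from buildbot.changes.gitpoller import GitPoller
    from buildbot.schedulers.basic import SingleBranchScheduler
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source.git import Git
    from buildbot.steps.shell import ShellCommand, Compile
    from buildbot.config import BuilderConfig

    c = BuildmasterConfig = {}

    # one build slave; on my setup this would be the QNAP
    c['slaves'] = [BuildSlave('qnap-armel', 'sekrit')]
    c['slavePortnum'] = 9989

    # watch the git repository for new commits
    c['change_source'] = [GitPoller('git://example.org/nbd.git', branch='master')]

    # checkout, autoreconf, configure, build, test
    f = BuildFactory()
    f.addStep(Git(repourl='git://example.org/nbd.git', mode='incremental'))
    f.addStep(ShellCommand(command=['autoreconf', '-f', '-i']))
    f.addStep(ShellCommand(command=['./configure']))
    f.addStep(Compile())
    f.addStep(ShellCommand(command=['make', 'check']))

    c['builders'] = [BuilderConfig(name='nbd-armel', slavenames=['qnap-armel'], factory=f)]

    # with treeStableTimer=None, every revision is built separately
    c['schedulers'] = [SingleBranchScheduler(name='master', branch='master', treeStableTimer=None, builderNames=['nbd-armel'])]

Setting treeStableTimer to None in the scheduler makes buildbot build each new revision individually, which matches the "all revisions get built at least once" behaviour mentioned below.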
I also just noticed that if you push multiple commits to a git repository at once (put otherwise, if buildbot notices multiple new revisions when it checks), it will make sure that all revisions get built at least once. Of course it would have been nice if it picked the fastest machines to do the "older" revisions rather than "whoever happens to be first", but that doesn't really matter all that much.
Meanwhile, the bug that made building 3.6 impossible on some architectures has been fixed. It turned out to be an LFS (large file support) issue, which is specific to 32-bit machines; and as my own machine runs amd64, well...
Computer education in Belgium
Today (well, technically yesterday by now), the media were reporting on an open letter by '5 Belgian university professors from the IT field', without naming them, about how Belgian primary education should introduce programming into the curriculum, starting at age 5.
Unsurprisingly, this did not go down well with a majority of the Belgian populace. Arguments like "what should the school drop to make room for that, then" and "why would people ever need to learn programming" were rampant on some internet fora I had a short look at.
That being said, though, I think these five unnamed professors are correct. Understanding how a tool works is essential to proficient usage of said tool; and while it is possible to teach particular common and popular use cases of a given tool without providing enough background, doing so will only result in confusion when the computer "is acting up", an oxymoron if there ever was one. After all, a computer can't "act up"; it can only follow instructions. Whenever your computer seems to be "acting up", what's really happening is that someone (most likely you) gave it the wrong instructions. Or, perhaps, someone evil on the Internet gave it the right instructions (for their goals, anyway) and infected your computer with a virus. But that's unlikely.
Computer education currently mostly consists of "teaching software": rather than explaining how a computer does what it does, people are taught that if you click this button, the text will be bold, or if you click that button, the printer will start buzzing. That's all nice and dandy, until the actual computer people encounter once they've left the school bench and are at work runs some other piece of software, where all the buttons are in other places and look all "wrong". Then they get all confused about the fact that things aren't where they're supposed to be.
So, yes, it is my opinion that any computer education should help people understand how a computer does the things it does. And what better way to do that than to actually explain things to a computer, the way computer programmers do such things?
Note that programming a computer doesn't have to be hard, and it can be something which kids can understand and will find fun to do. For instance, there's Scratch from MIT, which was made for teaching programming in exactly such a context.
Yes, adding programming to the primary school curriculum will require that some other things be dropped instead, and I can see that being a problem. However, in our modern world computers have become so pervasively important that we would be doing our kids a disservice not to teach them how to understand computers, as opposed to just how to use them...
About half a decade ago, I created a gmail account (maybe longer; I'm not so sure anymore). After testing it out for a while, I decided that it wasn't for me; but as a gmail account comes with a google account for free, I didn't throw it out.
As time went on, I added a few other things to that account. I try to limit its use a bit, but I do own a few android devices (for lack of a better alternative), and I do use adsense on my website, and all of these are linked to my existing google account; so I don't want to throw it away.
I don't ever read mail sent to my gmail account, though; I prefer to read my mail in a local mail client (I used mutt for about a decade before recently switching to thunderbird). Since it kept happening that people thought I would read my gmail account (which I do still use for XMPP, in spite of your recent crippling of that service), I configured an autoresponder on gmail to warn people off, telling them to use my main mail address instead of the gmail one. Additionally, I configured my main mail addresses (plural; they're really aliases) as "secondary" addresses on my gmail account, so that people trying to send mail to me could figure out where to actually send it before seeing the autoreply.
Unfortunately, for some reason you seem to have interpreted that as "screw his MX records and DNSSEC, we'll just intercept anything coming from our servers and going to Wouter Verhelst as going to his gmail account".
There's a word for that. I'm not going to use it.
I: 00check: Untarring chroot environment. This might take a minute or two.