Yesterday, I spent most of the day finishing up the work I'd been doing on introducing multiarch to the eID middleware, and did another release of the Linux builds. As such, it's now possible to install 32-bit versions of the eID middleware on a 64-bit Linux distribution. For more details, please see the announcement.

Learning how to do multiarch (or biarch, as the case may be) for three different distribution families has been a, well, learning experience. As a Debian Developer, I didn't find it all that hard to figure out the technical details for doing this on Debian and its derivatives. You just make sure the libraries are installed to the multiarch-safe directories (i.e., /usr/lib/<gnu arch triplet>), you add some Multi-Arch: foreign or Multi-Arch: same headers where appropriate, and you're done. Of course, the devil is in the details (define "where appropriate"), but all in all it's not that difficult and fairly deterministic.
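As an illustration (using a hypothetical libfoo source package; the Multi-Arch field values are the real dpkg ones, but the package itself is made up), the debian/control changes boil down to something like this:

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: shared library (co-installable for several architectures)

Package: foo-utils
Architecture: any
Multi-Arch: foreign
Depends: libfoo1 (= ${binary:Version}), ${misc:Depends}
Description: tools (any one architecture's copy satisfies dependencies)
```

Combine that with installing the library into /usr/lib/<triplet> (recent debhelper does this for you if your .install files use the multiarch paths, if I recall correctly), and that's essentially the whole of it.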

The Fedora (and derivatives, like RHEL) approach to biarch is that 64-bit distributions install into /usr/lib64 and 32-bit distributions install into /usr/lib. This goes for any architecture family, not just the x86 family; the same method works on ppc and ppc64. However, since Fedora doesn't do PowerPC anymore, that part is a detail of little relevance.

Once that's done, yum has some heuristics whereby it will prefer native-architecture versions of binaries when asked, and may install both the native-architecture and foreign-architecture version of a particular library package at the same time. Since RPM already has support for installing multiple versions of the same package on the same system (a feature that was originally created, AIUI, to support the installation of multiple kernel versions), that's really all there is to it. It feels a bit fiddly and somewhat fragile, since there isn't really a spec and some parts seem fairly undefined, but all in all it seems to work well enough in practice.
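For instance, on a 64-bit Fedora or RHEL system you can pull in the 32-bit build of a library by spelling out the architecture (libfoo is a made-up package name here):

```shell
# Ask for the 32-bit build explicitly by appending the architecture:
yum install libfoo.i686

# Both architectures' builds of the library can then be installed
# side by side; rpm can show which ones are present:
rpm -q --qf '%{NAME}.%{ARCH}\n' libfoo
```

When you just ask for "libfoo", yum's heuristics kick in and prefer the native architecture.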

The openSUSE approach is vastly different from the other two. Rather than installing the foreign-architecture packages natively, as in the Debian and Fedora approaches, openSUSE wants you to take the native foo.ix86.rpm package and convert it to a foo-32bit.x86_64.rpm package. The conversion process filters out non-unique files (only allowing files to remain in the package if they are in library directories, IIUC), and copes with the lack of license files in /usr/share/doc by adding a dependency header on the native package. While the approach works, it feels like unnecessary extra work and bandwidth to me, and obviously also wouldn't scale beyond biarch.
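For the record, the way this conversion is driven (as far as I've been able to piece together; treat the details with suspicion) is a baselibs.conf file that lives next to the spec file, which tells the build scripts which packages to convert:

```
# baselibs.conf for a hypothetical "libfoo" package; the build
# scripts use this to generate libfoo-32bit.x86_64.rpm from the
# native 32-bit libfoo package, keeping only the library files.
libfoo
```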

It also isn't documented very well; when I went to openSUSE IRC channels and started asking questions, the reply was something along the lines of "hand this configuration file to your OBS instance". When I told them I wasn't actually using OBS and had no plans of migrating to it (because my current setup is complex enough as it is, and replacing it would be far too much work for too little gain), it suddenly got eerily quiet.

Eventually I found out that the part of OBS which does the actual build is a separate codebase, and integrating just that part into my existing build system was not that hard to do, even though it doesn't come with a specfile or RPM package and wants to install files into /usr/bin and /usr/lib. With all that and some more weirdness I've found in the past few months that I've been building packages for openSUSE I now have... Ideas(TM) about how openSUSE does things. That's for another time, though.

(disclaimer: there's a reason why I'm posting this on my personal blog and not on an official website... don't take this as an official statement of any sort!)

Posted Thu Aug 21 10:30:36 2014

Several years ago, I blogged about how to use a Belgian electronic ID card with SSH. I never really used it myself, but was interested in figuring out if it would still work.

The good news is that since then, you don't need to recompile OpenSSH anymore to get PKCS#11 support; this is now compiled in by default.

The slightly bad news is that it involves some more typing. Rather than entering ssh-add -D 0 (to access the PKCS#11 certificate in slot 0), you should now enter something along the lines of ssh-add -s /usr/lib/libbeidpkcs11.so.0. This will ask for your passphrase, but it isn't necessary to enter the correct PIN code at this point in time. The first time you try to log on, you'll get a standard beid dialog box where you should enter your PIN code; this will then work. The next time, you'll be logged on and you can access servers without having to enter a PIN code.
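Put together, a session looks something like this (the library path is the one used by the Debian packages; it may differ on other distributions):

```shell
# Hand the eID PKCS#11 module to the running ssh-agent; this
# prompts for a passphrase, which doesn't need to be your PIN:
ssh-add -s /usr/lib/libbeidpkcs11.so.0

# The card's keys should now show up in the agent:
ssh-add -l

# On the first login attempt, the beid dialog asks for the PIN:
ssh user@server.example.com
```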

The worse news is that there seems to be a bug in ssh-agent, making it impossible to unload a PKCS#11 library. Doing ssh-add -D will remove your keys from the agent; the next time you try to add them again, however, ssh-agent will simply report SSH_AGENT_FAILURE. I suspect the dlopen()ed modules aren't being unloaded when the keys are removed.

Unfortunately, the same (or at least, a similar) bug appears to occur when one removes the card from the cardreader.

As such, I don't currently recommend trying to use this.

Update: fix command-line options to ssh-add invocation above.

Posted Thu Jul 31 12:28:11 2014

A few weeks back, I learned that some government webinterfaces require users to download a PDF file, sign it with their eID, and upload the signed PDF document. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.

Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eID middleware packages were not multiarch ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".

For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...

... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com containing some old versions, but nothing more recent than a 5.x version.

Well, I suppose that settles that, then.

Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:

  • run dpkg --add-architecture i386 if you haven't yet enabled multiarch
  • Install the eid-archive package, as usual
  • Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
  • run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
  • run apt-get update
  • run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
  • run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version.
  • run your 32-bit application and sign things.
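For the impatient, the steps above condense to the following (run as root; untested as a one-shot script, and the sources.list.d edit still has to happen by hand in between):

```shell
dpkg --add-architecture i386    # skip if multiarch is already enabled
dpkg -i eid-archive_*_all.deb   # the package downloaded from the eID site
# ...now edit /etc/apt/sources.list.d/eid.list and remove the '#'
# in front of the continuous repository line...
dpkg-reconfigure eid-archive    # enables the continuous repository key
apt-get update
apt-get -t continuous install eid-mw
apt-get -t continuous install libbeidpkcs11-0:i386
```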

You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...

Posted Fri Jul 25 13:44:07 2014

Dear lazyweb,

reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do all that reprepro does by hand—by calling things like cp and dpkg-scanpackages and gpg and other things by hand—but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.
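For those who haven't seen it: the whole of a basic reprepro setup is a conf/distributions file along these lines (codename and key ID made up), plus one command per upload:

```
Codename: wheezy
Components: main
Architectures: i386 amd64 source
SignWith: 0xDEADBEEF
```

After which something like `reprepro -b /srv/repo includedeb wheezy foo.deb` does all the copying, index generation, and signing in one go.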

At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my above point, apparently I forgot to do a few things there, meaning some of the RPM repositories didn't actually work correctly, and my testing didn't catch it.

Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, well, running createrepo manually isn't too much of a problem. When it gets beyond one's own systems, however, and when you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.

So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?

Please don't say "custom scripts" ;-)

Posted Wed Jul 16 12:43:45 2014
printer-driver-postscript-hp Depends: hplip
hplip Depends: policykit-1
policykit-1 Depends: libpam-systemd
libpam-systemd Depends: systemd (= 204-14)

Since the last in the above is a versioned dependency, that means you can't use systemd-shim to satisfy this dependency.

I do think we should migrate to systemd. However, it's unfortunate that this change is being rushed like this. I want to migrate my personal laptop to systemd—but not before I have the time to deal with any fallout that might result, and to make sure I can properly migrate my configuration.

Workaround (for now): hold policykit-1 at 0.105-3 rather than have it upgrade to 0.105-6. That version doesn't have a dependency on libpam-systemd.
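In concrete terms (both commands do the same thing; apt-mark exists in reasonably recent apt versions):

```shell
# Tell dpkg to keep policykit-1 at its current version:
echo policykit-1 hold | dpkg --set-selections

# Equivalent, with newer apt:
apt-mark hold policykit-1
```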

Off-hand questions:

  • Why does one need to log in to an init system? (yes, yes, it's probably a session PAM module, not an auth or password module. Still)
  • What does policykit do that can't be solved with proper use of Unix domain sockets and plain old unix groups?

All this feels like another case of overengineering, like most of the *Kit thingies.

Update: so the systemd package doesn't actually cause systemd to be run; there are other packages that do that, and systemd-shim can be installed. I misread things. Of course, the package name is somewhat confusing... but that's no excuse.

Posted Tue Jul 8 20:15:52 2014

I have a number of USB hard disks. Like, I suppose, mostly everyone who reads this blog. Unlike many people who do, however, for whatever reason I decided to create LVM volumes on most of my USB hard disks. The unfortunate result is that these now contain a lot of data with a somewhat less than efficient partitioning system.

I don't really care much, but it's somewhat annoying, not least because disconnecting an LVM device isn't as easy as it used to be; originally you could just run the lvm2 init script with the stop argument, but that isn't the case anymore. That is, you can run it, but it won't help you, because all that does, effectively, is exit 0.

So what do you do instead? This:

  • First, make sure your devices aren't mounted anymore. Note: do not use lazy umount for a device that you're going to remove from your system! I've seen a few forum posts here and there of people who think it's safe to use umount -l for a device they're about to remove from their system which is still in use. It's not. It's a good way to cause data loss.

    Instead, make sure your partitions are really unmounted. Use fuser -m if you need to figure out which process is still using the partition.

  • Next, use vgchange -a n. This will cause LVM to deactivate any logical volumes and volume groups that aren't open any more. Note that this can't work if you haven't done the above. Also note that this doesn't cause the devices to be gone when you do things like vgs or so. They're still there, they're just not in use anymore. Skipping this step isn't recommended, though; it will make LVM unhappy, mostly because some caches are still in use.
  • Remove your device from the computer. That is, disconnect the USB cable, or call nbd-client -d, or do whatever you need to make sure the PV isn't connected to your system anymore.
  • Finally, run vgchange --refresh. This will cause the system to rescan all partitions, notice that the volume groups which you've just disconnected aren't there anymore, and remove them from configuration.
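In other words, the procedure boils down to this (volume group and mount point names are examples, obviously):

```shell
umount /mnt/usbdisk        # a real umount; never "umount -l" here!
fuser -m /dev/usbvg/data   # if umount fails: see who's still using it
vgchange -a n usbvg        # deactivate the LVs and the VG
# ...now physically unplug the disk (or: nbd-client -d /dev/nbd0)...
vgchange --refresh         # rescan; the gone VG drops out of vgs output
```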

Voila, your LVM volume group is no longer available, and you've not suffered data loss. Kewl.

Note: I don't know what the lvm2 init script used to do. I suspect there's another way which doesn't require the --refresh step. I don't think it matters all that much, though. This works, and is safe. That being said, comments are welcome...

Posted Thu Jul 3 23:53:27 2014
NBD

When I took over maintenance of the nbd userland utilities, I decided not to do a full rewrite, even though that might have been a good idea. I've managed to keep things running, but the code isn't very pretty. Unfortunately I've also made a few mistakes. Yes, I know. No, I can't say it won't happen again.

Still, the "design", for lack of a better word, of the nbd-server code was problematic, to the extent that some needed performance improvements could not be done, or would require so much work that it would be better to just throw everything out and start over. The unfortunate result of this has been that those who did want to try to do something new with NBD did just that, rather than contributing back to the reference implementation.

Over the years, I've sometimes tried to come up with ways to improve the way in which things work. Most of these efforts haven't gone very far, either because they turned out to be dead ends, or because I just didn't have the time to finish them; by the time I could finally work on the branch again, the main branch had diverged so much that I would almost have to start over to make it useful again.

The most recent of those efforts is the io_transaction branch. This branch has two main goals:

  1. Make nbd-server somewhat more modular, so that it would be possible to use a different backend per export, if necessary
  2. Improve upon the way we handle select() and friends, so that the server can deal with requests more efficiently.

Today, that branch reached a (fairly significant) milestone: on my machine, it seems to pass the "read" part of the test suite. What's more, thanks to the second of the two goals above, it did so at a much higher speed than the main branch manages on average.

Of course, reading is only half of the story, and no, write doesn't seem to work yet. After that, there's also a lot of work still to be done if I want to get this branch "on par" with the main branch, feature-wise.

But there's a light at the end of the tunnel. Finally.

Posted Sat Jun 28 09:51:36 2014

Twenty years ago, I was sixteen and in high school. The school at the time, if memory serves right, was the "Kunsthumaniora voor muziek en woord van het gemeenschapsonderwijs" in Brussels (which translates approximately to "art high school for music and word of the communal educational branch"... that's not entirely right, but it's late and I'm too tired to look for a dictionary). I was taking drama classes there. No, I'm not making this up. Except maybe the sixteen bit—I might be off by one or two years.

Hey, I did a lot of things during high school. Stop looking at me like that.

At one point in time during my stint in drama, a group of educational interns—the sort of people who would be teaching some sort of stuff somewhere after their graduation—cooperated with our school to come up with a whole day of classes around one subject. I'll never forget the title: "Can art rescue the democracy?" If that sounds pompous and silly, that's because it was. However, I was at art school, fer crying out loud, so I drank it up like it was Kool-Aid. Which it wasn't. It was worse.

At the end of the day (literally, that is), the educational interns had booked some hot shot art person for a debate. I've since completely forgotten his name. He must've been not that hot shot after all, since I never even once read anything about him in the next few years. Of course I can't exclude the most recent decade, not remembering him and all, but whatever. I also don't remember whether he was a hot shot art critic, a hot shot artist, or just some random hot shot person who writes about art, but doesn't actually do it himself. Whatever.

One of the main topics during the whole day was the point about how artsy people find it extremely difficult to define what art actually is. I mean, it's all they do all day, but they can't come up with a decent definition of the damn thing.

During the afternoon break, just before the debate with this maybe-sortof-semi hot shot art person, I walk around the playground and think about the whole thing. And come up with some personal definition of art. My definition.

As the time of the debate comes up, the hot shot art person sits in the front of the gym behind some table, and the whole school (literally) is sitting in chairs in the rest of the gym. Some questions are asked. Many of those are just shot down.

At some point, I raise my finger when it's asked if anyone has further questions. I walk up to the microphone. I ask him:

"Could we maybe define art as that thing that, though it might be easy to reproduce, in no case is easy to produce?"

He sits (he never got up, really). He thinks. I stand, and wait. After a few seconds of this, he answers. He seems impressed. His reply is something along the lines of "that's not a perfect definition, but it's pretty good. There's a lot to be said for that, and I urge you to write about art when you grow older".

I never followed his advice. I got a bit more interested in art, but quickly found out that art consists of one group of people who spend their time doing things other people find pretty, and another group of people who spend their time doing things that make other people "think", whatever that is. They may not have a brain, they may be silly as hell, but they still want to "think". It's not for me, it's never been. Art, that is -- not the think bit. That is something I don't mind doing.

Don't get me wrong. I still like art. I like going to museums from time to time; I like the performing arts. I mean, I play the flute. Not the piano, not the guitar, not the flipping drums, the flute. Which I like, for what it's worth.

But if the intent is to make people think, there are better ways to do that. If I want to make people think, I'm not going to make some obscure object that may or may not have a message, in the hope that a millionaire with no better use for his money would buy it just to make his friends jealous, after which he's going to put it in a safe for a few years so he can sell it at a higher value. Without thinking about it. If I want to make people think, I'm not going to write a play or piece of music that's so obscure it will make people all confused, so they can fill their evening afterwards drinking cocktails at a reception, claiming it was all nice and thought-provoking, quoting little parts of it to people they've never met, just so they can make their social status look more than it actually is.

Good thing I never finished drama school, I suppose.

No, if I want to make people think, I'll try doing so where it actually matters. Like, say, in politics. Not that I have any political ambitions, mind you. But I think the answer to that question of twenty years ago should be a firm "no." Art cannot rescue democracy. Not if they don't have a lot of interesting things to say to anyone but themselves. Maybe the reverse is true, though; maybe democracy can rescue art. Not that I care much.

Why is all this relevant today?

A few days ago, as I was driving somewhere, there was some show on the radio relating to the current exploits of the Belgian national soccer team. There are a lot of them these days. Radio shows about that subject, that is—not Belgian national soccer teams. I suppose having a lot of competition makes it hard to find a new angle to come up with, and still keep things interesting. I also suppose having a lot of stuff going on about that squad gets people annoyed if they're not the least bit interested. I suppose that could be a new angle. Presumably that radio host supposed the same, because he'd been looking for, and asking questions of, people who didn't like soccer and who weren't going to watch the match. Most of them said they weren't interested and added one or two words about what they were going to be doing instead.

One of them said that "they" would be better off spending money on "art and culture", rather than on soccer. No, I don't know who "they" were, he didn't say. Never mind that, let me go on now.

I don't know who this dude was; they—the radio people—didn't say. He sounded like someone between 50 and 65, and had a somewhat tenor-y voice. I suspect he had two kids, and a mercedes. Yes, I just made that up. The part about the kids. And the car. No, it doesn't matter. But it's still likely. He sounded like that sort of guy.

Whoever this dude was, though, I'd like to just say one thing: Dude, you're an idiot. There's a time and a place for everything.

The time and place for art is "everywhen", and "in any random art gallery, opera house, or theatre, out of the public eye". Not because the rest of us don't want to deal with art, but because artsy people like it that way. They like to feel all pompous and important, and therefore use difficult words. Words that nobody except those along with them in their ivory tower likes to use. Words that don't actually mean anything. But in doing so, they make this Art thing uninteresting to look at for people who don't care about their pompous and silly words. And strengthen the walls of their ivory tower. Only to complain later on that nobody ever shows up at art galleries, and that the really good ones keep going out of business.

The time and place for the world championship soccer is "once every four years", and "everywhere". Not because soccer people want to annoy you—although, yes, I'll grant you that the KBVB has gone a little overboard with the merchandising this time around—but rather because it's so simple. You kick the ball, and you hit the goal. There, done. Everyone can do it. Yes, true, there are some pompous people talking about it on TV, too. And yes, true, some people are better at it than others. But nobody claims you can't do soccer unless you're part of the "in" club. There are plenty of people who claim you can't do art unless you are. And if everyone can do it, then everyone can understand it. If you can understand it, it's easier to enjoy it. This is why so few people enjoy cricket or baseball outside of the few countries where they're popular.

That's also why so few people enjoy art: because you make it so difficult. And people just don't care. They want to be entertained.

I'm not saying that soccer is the best sport in the world, or that watching it is the most entertaining thing one can do. It isn't. In fact, beyond the national team, I'm not really following it all that well myself. If through some weird spacial anomaly the world championship would suddenly cease to exist and I would be the only person alive remembering it, I don't think I'd spend a lot of time trying to get it back.

But don't compare it to art. Because, well, in the grand scheme of things, neither of those two really matters all that much.

That is all.

Posted Tue Jun 24 01:31:02 2014

For about a month now, I've been working for a customer whose customer is FedICT, and am now helping out with maintaining the official software for the Belgian electronic ID card (eID). One of the first things I did was revamp the way in which the official Linux binaries are built and distributed; I also took care of the (somewhat overdue) new release for Linux.

Previously, the website contained downloadable packages for a number of distributions: two .deb files (one for i386 and one for amd64, for all .deb-based distributions), and a number of RPM files (one each for Fedora 15, Fedora 16, and Red Hat Enterprise Linux 5, also for both architectures).

The builds as well as the supported distributions were somewhat outdated. This was a problem in and of itself, as eID cards issued since March 2014 are signed by the new government CA3 certificate rather than the older CA2 one, which required minor updates for the middleware to work. Since the Linux packages available on the website predated the required change, they wouldn't work for more recent cards.

Moreover, the actual distributions that were supported were also outdated—Fedora 16 hasn't been supported in over a year by the Fedora project, for instance—and there was a major gap in our list of supported distributions, in that openSUSE RPMs were not provided.

If you check out the install on Linux pages now, however, you'll see that the installation instructions have been changed somewhat. Rather than links to packages to install, we now offer an 'eid-archive' package that you can install; this package adds the relevant configuration for your distribution, after which you can install the packages you need—eid-mw for the PKCS#11 library and the Firefox and Chrome plugins; eid-viewer for the graphical viewer application to view and possibly print data from your ID card.

Apart from the fact that there are now repositories rather than just single-file downloads, the repositories (and in case of RPM packages, the RPM files themselves) are now also signed with an OpenPGP key. Actually, they are signed with two OpenPGP keys; the first one is for officially released builds (i.e., builds that have seen some extensive testing before they were deemed "working"), while the second one is for automatic builds that are generated through a continuous integration system after each and every commit. These untested packages are also in a separate repository that is disabled by default. In addition, there's also support for openSUSE now—which required more work than I expected, but wasn't a major problem.

Enjoy!

(for clarity: while I now work at FedICT, there's an obvious reason why I'm publishing this on my blog and not on any .belgium.be website—don't assume this is an official Belgian message or anything...)

Posted Fri Jun 20 14:12:37 2014

Yesterday, the Belgian team played against Algeria, and won 2-1. Before the match, there was some criticism about coach Wilmots' decision not to field a number of top players in the team at the start of the match. Today, some journalists -- including, apparently, some Russian journalists -- seem to think that the team didn't play all that well, and that the Belgian team could be beaten by a better team.

I don't think that's the case; I think the strategy of Wilmots was a stroke of genius.

The Algerian coach fielded a wall of 5 defenders, and played a pretty good counter game. When the Belgian team had possession -- for about 65% of the time during the entire match -- the Algerians returned every man to their own half of the field. Whenever one of the Belgian strikers came close to the Algerian penalty area, at least two Algerian players would be near and block them off.

How does one beat a team that doesn't want to give you a millimeter of room? One way is to wear them out, and that's exactly what the team did: by keeping the pressure on the Algerian defense, the Belgians made the Algerian defenders run a lot. Running a lot makes you tired, and if you're tired you make mistakes, or you can't keep up with a fresh substitute player.

During half-time, it was reported that Wilmots had relayed the message to his players that "the bench will make the difference". And indeed it did; some time into the second half, Wilmots replaced three players -- two strikers and one offensive midfielder -- with the top players he'd kept on the bench. Two of these three players made the difference, scoring two goals to end up with 2-1.

Reading some articles today, it seems like many analysts think that this was a spur of the moment thing by Wilmots; but I think it's more likely that this was a deliberate strategy. Making the opposing defense run a lot is certainly a good strategy; but the downside of that is that your own offensive players will need to run a lot, too. If you can swap those out for top players on the bench...

It'll be interesting to see the next matches.

Posted Wed Jun 18 08:32:43 2014