After yesterday's late night accomplishments, today I fixed up the UI of joytest a bit. It's still not quite what I think it should look like, but at least it's actually usable with a 27-axis, 19-button "joystick" (read: a PS3 controller). Things may disappear off the edge of the window, but you can scroll to them. Also, I removed the names of the buttons and axes from the window, and installed them as tooltips instead. Few people will be interested in the factoid that "button 1" is a "BaseBtn4", anyway.

The result now looks like this:

If you plug in a new joystick, or remove one from the system, then as soon as udev finishes up creating the necessary device node, joytest will show the joystick (by name) in the treeview to the left. Clicking on a joystick will show that joystick's data to the right. When one pushes a button, the relevant checkbox will be selected; and when one moves an axis, the numbers will start changing.

I really should have some widget to actually show the axis position, rather than some boring numbers. Not sure how to do that.

Posted Fri Dec 19 23:21:49 2014

I've owned a Logitech Wingman Gamepad Extreme since pretty much forever, and although it's been battered over the years, it's still mostly functional. As a gamepad, it has 10 buttons. What's special about it, though, is that the device also has a mode in which a gravity sensor kicks in and produces two extra axes, allowing me to pretend I'm really talking to a joystick. It looks a bit weird though, since you end up playing your games by wobbling the gamepad around a bit.

About 10 years ago, I first learned how to write GObjects by writing a GObject-based joystick API. Unfortunately, I lost the code at some point due to an overzealous rm -rf call. I had planned to rewrite it, but that never really happened.

About a year back, I needed to write a user interface for a customer where a joystick would be a major part of the interaction. The code there was written in Qt, so I wrote an event-based joystick API in Qt. As it happened, I also noticed that jstest would output names for the actual buttons and axes; I had never noticed this before, because my 10 buttons and 4 axes produce a lot of output by default, and the jstest program would just scroll the names off my screen whenever I plugged the device in. But the names are there, and reading them out is not too difficult.

Refreshing my memory on the joystick API made me remember how much fun it is, and I wrote the beginnings of what I (at the time) called "libgjs", for "GObject JoyStick". I didn't really finish it though, until today. I did notice in the meantime that someone else released GObject bindings for JavaScript and also called that gjs, so in the interest of avoiding confusion I decided to rename my library to libjoy. Not only will this allow me all kinds of interesting puns like "today I am releasing more joy", it also makes for a more compact API (compare joy_stick_open() against gjs_joystick_open()).

The library also comes with a libjoy-gtk, which creates a GtkListStore* that is automatically updated as joysticks are added to and removed from the system; and a joytest program, a graphical joystick test program which also serves as an example of how to use the API.
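
To give an idea of how little code the GTK side needs, here is a minimal sketch that wraps such a store in a GtkTreeView. The GTK+ calls are standard; how you obtain the store from libjoy-gtk, and which column holds the joystick name, are assumptions made for the sake of the example (the joytest source is the authoritative reference):

#include <gtk/gtk.h>

/* Sketch: wrap the auto-updating GtkListStore that libjoy-gtk maintains
 * in a GtkTreeView. How the store is obtained from the library, and which
 * column holds the joystick name, are assumptions here. */
static GtkWidget *make_joystick_view(GtkListStore *store)
{
	GtkWidget *view = gtk_tree_view_new_with_model(GTK_TREE_MODEL(store));

	/* Assume column 0 of the store contains the joystick's name. */
	gtk_tree_view_insert_column_with_attributes(GTK_TREE_VIEW(view), -1,
			"Joystick", gtk_cell_renderer_text_new(),
			"text", 0, NULL);

	/* The store updates itself as devices come and go; the view just
	 * follows along, so there is nothing else to wire up here. */
	return view;
}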

still TODO:

  • Clean up the API a bit. There's a bit too much use of GError in there.
  • Improve the UI. I suck at interface design. Patches are welcome.
  • Differentiate between JS_EVENT_INIT kernel-level events and normal events.
  • Improve the documentation to the extent that gtk-doc (and, thus, GObject-Introspection) will work.

What's there is functional, though.

Update: if you're going to talk about code, it's usually a good idea to link to said code. Thanks, Emanuele, for pointing that out ;-)

Posted Fri Dec 19 00:29:37 2014

Munin is a great tool. If you can script it, you can monitor it with munin. It also comes with a great webinterfacefrontendthing that allows you to dig deep into the history of what you've been monitoring. Unfortunately, however, munin is slow; that is, it will take snapshots once every five minutes, and not look at systems in between. If you have a short load spike that takes just a few seconds, chances are pretty high that munin missed it.

By the time munin tells you that your Kerberos KDCs are all down, you've probably had each of your users call you several times to tell you that they can't log in. You could use nagios or one of its brethren, but it takes about a minute before such tools will notice these things, too.

Maybe use CollectD then? Rather than checking once every several minutes, CollectD will collect information every few seconds. Unfortunately, however, due to the performance requirements of accomplishing that (without causing undue server load), writing scripts for CollectD is not as easy as it is for Munin. In addition, webinterfacefrontendthings aren't really part of the CollectD code (there are several, but most that I've looked at are lacking in some respect), so usually if you're using CollectD, you're missing out on something.

And collectd doesn't do the nagios thing of actually telling you when things go down.

So what if you could see it when things go bad?

At one customer, I came in contact with Frank, who wrote ExtreMon, an amazing tool that allows you to visualize the CollectD output as things are happening, in a full-screen fully customizable visualization of the data. The problem is that ExtreMon is rather... complex to set up. When I tried to talk Frank into helping me get things set up for myself so I could play with it, I got a reply along the lines of...

well, extremon requires a lot of work right now... I really want to fix foo and bar and quux before I start documenting things. Oh, and there's also that part which is a dead end, really. Ask me in a few months?

which is fair enough (I can't argue with some things being suboptimal), but the code exists, and (as I can see every day at $CUSTOMER) actually works. So I decided to just figure it out by myself. After all, it's free software, so if it doesn't work I can just read the censored code.

As the manual explains, ExtreMon is a plugin-based system; plugins can add information to the "coven", read information from it, or both. A typical setup will run several of them; e.g., you'd have the from_collectd plugin (which parses the binary network protocol used by collectd) to get raw data into the coven; you'd run several aggregator plugins (which take that raw data and interpret it, allowing you to express things along the lines of "if the system's load gets above X, set load.status to warning"); and you'd run at least one output plugin so that you can actually see the damn data somewhere.

While setting up ExtreMon as it stands isn't as easy as one would like, I did manage to get it to work. Here's what I had to do.

You will need:

  • A monitor with a FullHD (or better) resolution. Currently, the display frontend of ExtreMon assumes it has a FullHD display at all times, even if you have a lower resolution. Or a higher one.
  • Python3
  • OpenJDK 6 (or better)

First, we clone the ExtreMon git repository:

git clone https://github.com/m4rienf/ExtreMon.git extremon
cd extremon

There's a README there which explains the bare necessities of getting the coven to work. Read it. Do what it says. It's not wrong. It's not entirely complete, though; it fails to mention that you need to

  • install CollectD (which is required for its types.db)
  • Configure CollectD to have a line like Hostname "com.example.myhost" rather than the (usual) FQDNLookup true. This is because ExtreMon uses the java-style reverse hostname, rather than the internet-style FQDN. A minimal example of that follows below.
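
In collectd.conf terms, that means something like this (the hostname is obviously an example):

Hostname "com.example.myhost"
#FQDNLookup true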

Make sure the dump.py script outputs something from collectd. You'll know when it shows something not containing "plugin" or "plugins" in the name. If it doesn't, fiddle with the #x3. lines at the top of the from_collectd file until it does. Note that ExtreMon uses inotify to detect whether a plugin has been added to or modified in its plugins directory; so you don't need to do anything special when updating things.

Next, we build the java libraries (which we'll need for the display thing later on):

cd java/extremon
mvn install
cd ../client/
mvn install

This will download half the Internet, build some java sources, and drop the resulting .jar files in your $HOME/.m2/repository.

We'll now build the display frontend. This is maintained in a separate repository:

cd ../..
git clone https://github.com/m4rienf/ExtreMon-Display.git display
cd display
mvn install

This will download the other half of the Internet, and then fail, because Frank forgot to add a few repositories. A patch (and pull request) is on github.

With that patch, it will build, but things will still fail when trying to sign a .jar file. I know of four ways to fix that particular problem:

  1. Add your passphrase for your java keystore, in cleartext, to the pom.xml file. This is a terrible idea.
  2. Pass your passphrase to maven, in cleartext, by using some command line flags. This is not much better.
  3. Ensure you use the maven-jarsigner-plugin 1.3.something or above, and figure out how the maven encrypted passphrase store thing works. I failed at that.
  4. Give up on trying to have maven sign your jar file, and do it manually. It's not that hard, after all.

If you're going with 1 through 3, you're on your own. For the last option, however, here's what you do. First, you need a key:

keytool -genkeypair -alias extremontest

After you enter all the information that keytool will ask for, it will generate a self-signed code signing certificate, valid for six months, called extremontest. Producing a code signing certificate with longer validity and/or one which is signed by an actual CA is left as an exercise for the reader.
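
For example, a (still self-signed) certificate with a one-year validity can be requested with keytool's standard options; the algorithm and key size below are just sensible defaults, not requirements:

keytool -genkeypair -alias extremontest -keyalg RSA -keysize 2048 -validity 365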

Now, we will sign the .jar file:

jarsigner target/extremon-console-1.0-SNAPSHOT.jar extremontest
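
If you want to double-check that the signature took, jarsigner can also verify the result:

jarsigner -verify -verbose target/extremon-console-1.0-SNAPSHOT.jar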

There. Who needs help from the internet to sign a .jar file? Well, apart from this blog post, of course.

You will now want to copy your freshly-signed .jar file to a location served by HTTPS. Yes, HTTPS, not HTTP; ExtreMon-Display will fail on plain HTTP sites.

Download this SVG file, and open it in an editor. Find all references to be.grep as well as those to barbershop and replace them with your own prefix and hostname. Store it along with the .jar file in a useful directory.

Download this JNLP file, and store it in the same location (or you might want to actually open it with "javaws" to see the very basic animated idleness of my system). Open it in an editor, and replace any references to barbershop.grep.be with the location where you've stored your signed .jar file.

Add the chalice_in_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.1.3 of the manual (or something functionally equivalent) to your webserver's configuration. Make sure to have authentication—chalice_in_http is an input mechanism.

Add the chalice_out_http plugin from the plugins directory. Make sure to configure it correctly (by way of its first few comment lines) so that its input and output filters are set up right.

Add the configuration snippet in section 2.2.1 of the manual (or something functionally equivalent) to your webserver's configuration. Authentication isn't strictly required for the output plugin, but you might wish for it anyway if you care whether the whole internet can see your monitoring.

Now run javaws https://url/x3console.jnlp to start ExtreMon-Display.

At this point, I got stuck for several hours. Whenever I tried to run x3mon, this java webstart thing would tell me simply that things failed. When clicking on the "Details" button, I would find an error message along the lines of "Could not connect (name must not be null)". It would appear that the Java people believe this to be a proper error message for a fairly large number of constraints, all of which are slightly related to TLS connectivity. No, it's not the keystore. No, it's not an API issue, either. Or any of the loads of other rabbit holes that I dug myself into.

Instead, you should simply make sure you have Server Name Indication enabled. If you don't, the defaults in Java will cause it to refuse to even try to talk to your webserver.

The ExtreMon github repository comes with a bunch of extra plugins; some are special-cased for the place where I first learned about it (and should therefore probably be considered "examples"), others are general-purpose plugins which implement things like "is the system load within reasonable limits". Be sure to check them out.

Note also that while you'll probably be getting most of your data from CollectD, you don't actually need to do that; you can write your own plugins, completely bypassing collectd. Indeed, the from_collectd thing we talked about earlier is, simply, also a plugin. At $CUSTOMER, for instance, we have one plugin which simply downloads a file every so often and checks it against a checksum, to verify that a particular piece of nonlinear software hasn't gone astray yet again. That doesn't need collectd.

The example above will get you a small white bar, the width of which is defined by the cpu "idle" statistic, as reported by CollectD. You probably want more. The manual (chapter 4, specifically) explains how to do that.

Unfortunately, in order for things to work right, you need to pretty much manually create an SVG file with a fairly strict structure. This is the one thing which Frank tells me is a dead end and needs to be pretty much rewritten. If you don't feel like spending several days manually drawing a schematic representation of your network, you probably want to wait until Frank's finished. If you don't mind, or if you're like me and you're impatient, you'll be happy to know that you can use inkscape to make the SVG file. You'll just have to use the dialog behind Ctrl+Shift+X. A lot.

Once you've done that though, you can see when your server is down. Like, now. Before your customers call you.

Posted Tue Dec 9 19:43:19 2014

Last Friday saw a somewhat distressing email to the debian-devel mailinglist, wherein Joey Hess, one of Debian's most valuable contributors, announced his decision to quit the project.

For all of Joey's contributions over the years, this is an unwelcome message; I'd much rather have seen him remain active in Debian, both on a personal and a technical level. As it is, I have a feeling of not just losing a colleague in Debian, but also a friend.

For people not active in Debian, it's easy to miss Joey's contributions to the project, but let me tell you that without Joey Hess, Debian simply would not be where it is today. We would not have the debian-installer and we would not have debhelper (so we would have massively complicated debian/rules files). More than that, though, Joey has been one of those people who, on a technical level, seemed to instinctively sense the right thing to do; as in, whenever Joey Hess disagrees with you on a technical matter, you know you must be doing something wrong.

As sudden and as unwelcome as last Friday's announcement was, however, I can't say that it was totally unexpected. I've noticed Joey reducing his efforts in core areas of the project over the past few years, where on the one hand he has been mostly withdrawing from debian-installer development, and on the other hand his two most recent 'large' projects (ikiwiki and git-annex) haven't really been about Debian. Still, it's a painful loss; and while Debian has lost other high-profile contributors in the past over divisive issues, and recovered, that doesn't make this one hurt any less.

Here's to you, Joey. May you find joy in whatever you decide to do next. May you not disappear from our collective radars. May we meet again, at some conference in the future, so I can buy you a beer. (hint: FOSDEM ;-) )

Posted Sun Nov 9 09:28:14 2014

About a month ago, I received an upstream bugreport that the nbd-server wouldn't build on Solaris and its derivatives. This was because nbd-server uses the d_type field of struct dirent, which is widely implemented (in Linux and FreeBSD, at least), but not part of POSIX and therefore not implemented on Solaris (which tends to be more conservative about implementing new features).

The bug reporter pointed towards a blog post by a Solaris user who had written something he calls "adirent", meant to work around the issue by implementing something that would wrap readdir() so that it would inject a stat() call when needed. While that approach works, it seems a bit strange to wrap readdir() just to be portable. After all, readdir() does not always return the file type in d_type, not even on systems that do implement it. One example in which this is true is XFS; if one runs readdir() on a directory on an XFS filesystem, then everything will have DT_UNKNOWN as its filetype, indicating that you need to run stat() after all.

As such, I think a better approach is to use that fact so that things will just work on systems where d_type isn't available. The GNU autotools even have a test for it (AC_STRUCT_DIRENT_D_TYPE), which makes things easier. In the case of NBD, I've added that to configure.ac, and then added a touch of preprocessor magic to reuse the infrastructure for dealing with DT_UNKNOWN which is already there:

#ifdef HAVE_STRUCT_DIRENT_D_TYPE
#define NBD_D_TYPE de->d_type
#else
#define NBD_D_TYPE 0
#define DT_UNKNOWN 0
#define DT_REG 1
#endif

(...opendir(), readdir(), ...)

switch(NBD_D_TYPE) {
    case DT_UNKNOWN:

(...call stat(), figure out if it is a file...)

    case DT_REG:

(...we know it is a file...)

    default:

(...we know it is not a file...)

This seems cleaner to me than using a wrapper, and has the additional advantage that the DT_UNKNOWN code path could receive some more testing.
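
As an aside, the same idea can be expressed as a small self-contained helper. This is only an illustration of the technique, not the actual nbd-server code, and the entry_is_regular_file() name is made up:

#include <dirent.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Sketch only: decide whether a directory entry is a regular file, falling
 * back to stat() when d_type is not available (no HAVE_STRUCT_DIRENT_D_TYPE)
 * or when the filesystem reports DT_UNKNOWN. */
static bool entry_is_regular_file(const char *dir, const struct dirent *de)
{
#ifdef HAVE_STRUCT_DIRENT_D_TYPE
	if (de->d_type == DT_REG)
		return true;
	if (de->d_type != DT_UNKNOWN)
		return false;
#endif
	char path[PATH_MAX];
	struct stat st;

	snprintf(path, sizeof(path), "%s/%s", dir, de->d_name);
	if (stat(path, &st) < 0)
		return false;
	return S_ISREG(st.st_mode);
}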

Posted Fri Oct 24 15:33:20 2014
                          _____o****o__                               
                        dQ@@@@@WQ@Q&WWbo*,                            
                     __o@@@@@@@@@@@@@@@@&*b_,                         
                    .dQ@@@@@@@@@@@@@@@@@@@@&*bo__                     
                   .*@@@@@@@@@@@@@@@@@@@@@@@@&Q@bo                    
                   d@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@b                   
                  <]@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@#b                  
                 _d#@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@&**,                
              _,.dd@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@*b,               
              d"Q@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@b*b_              
             <*Q@@@@@V?$@@@@@@@@V$F$@@@@@@@@@@@@@@@@@[**_,            
            .*Q@@@@?*"***VV$V?"******"V@@@@@@@@FV@@@@@&W**>           
            <#@@@@"`---<"*****?'`''?''**"VVV@"****$@@@@@&b,           
           .d@@@@"`'-------''`--------''"****?'--'"$@@@@@*>           
           ]#@@#"`'----------------------'?'-------"@@@@@*">          
          <*@@@*`  ,--------------------------------"@@@@&*,          
          d#@@V>   '--------------------------------'"@@@?"*,         
         <d@@$"`   .--------'------------------------<$@@&b*b,        
         <@@@"`     ---------  -----------------------"@@@[**>        
         d@@@"      '-----`     -      '--------------'#@@@***        
        <@@@"`       '-`-               --- -----------*@@@***>       
        <]@""                             -,-----------<$@@[**>       
         ][F[        ` '`                 '-------------]@@[**>       
        <Q@[`                               -`----------*@@&**>       
        <#"">                                   --------*@@@**>       
        <@"*                                    ----`---*@@@&*>       
        ]Q#[                                      ------]#@@&*        
        ]@#[                                          <-<#@@@*>       
        ]@#b                                          .-<]@@@*        
        "@@[                                         ---<#@@@*        
        <@@"                                         ----]@@@*        
        ]#@[                                           -.]@@&"        
        ]#@b                                           -<]@@@[        
        <@@[                                       <   -<]@@"[        
        <@@[                                           -<]@@V>        
        <@@>     ,                                 ..  -<]@@b,        
        d@#>   -._----___ _                 _ _-_   -- -<#@@#-        
        ]@*> _-d*****obo_----       __--------___-, ----]@@@*,        
        d@[> -**?"**@@@@@o*>--,  -,---.o*ooW******_,,.--]@@#"`        
        #@[> -???'''--''?"**----<----<*@@@@@V******[----]@@@b>        
        *@[  -------------------<----*"'----''----'--->-<@@@*>        
        *@[  ------ooWWo,------- ------------_ '--------<@@@*-        
        "]>  <---bd?"@@@"`.----,  -------dQ@@&b-.-----'''$@[*`        
        *">   '--*",'$@#> -,'---  ---`--''$@@@?&_---->  -]@[*-        
        "*`    --'--.'"` `--.---  --- --, ]@@"-?*----   -"@*`>        
        <*>      <------`   ----  ---  ''--'",------    -]"*->        
        <*,       -----,   -----'----,  ---------`      -]#*-         
        '">        ---`    -----  <---                  '][[          
         <`                -----------,                 <d"[          
         <>               .------------                 -**-          
          >               <-----  -----                <-`--          
          -               .-----  -----                .--'           
          -               ------,.-----                -'-            
          ->              -------------                -.-            
          --              ---  -  -----                --`            
          --              ---      -----              --->            
          --             .->       -----              ---             
          -->            -----    .-----              ---             
          <-> '         ----`_-----__---            ' ---             
           --_.        ------'-----'----           ...---             
           ---`       ---------''------.           <----`             
           ---,     --------`-------`--,-          -----              
           <-----  --------_--------,-----        .-----              
            --------------*?--------`--------,    -----`              
            -----'------_?`--`'------'b--------- .-----               
            ----- ---"""`---.   '  ---'bb-------.------               
            ----------d,-------___-----'*o"----`` -----               
            '--------''`-.,-------------<*[-`----.----                
             -----------'------'--------'"*----> -----                
             -----------, '---   ---------------------                
             -------------,_---->----`'_-------------                 
             ',---------------.__.,.-----------------                 
              '-------------''?*"?''----------------`                 
              '>------------------------------------                  
              'b`-----------------------------------                  
              <<,---------------------------------,-                  
               '*,-------------- --------------.--`                   
               -"*_----------------------------'d>-                   
           .o, --**>.o----------------------,-.**`-                   
         .@@@[ .-"*****_-_----------------o******--                   
         Q@@F`  -'******d*--------------,-******`--                   
        .@@F`   '-<*******-.-.,------.o*********---,_o@W,             
        ]@[--    --"***d***od*--****dbd********"----@@@@[             
      .d@@#,-    --'***"@o*************oWb****"-----'$@@&,            
    _oQ&@@@>    ----'"**]@@o********doQ@?****"--------"@@b            
 dQ@@@@@@@@b      '---**"$@@@@@@@@@@@@F"*****---------<]@@d,,         
@@@@@@@@@@@[,      ---'***V?""??VVV"********>---------<Q@@@@@&b,      
@@@@@@@@@@@@><      ---'*******************?----------d@@@@@@@@@F_    
@@@@@@@@@@@@[.      <----"****************"----------]Q@@@@@@@@@@@@bo_
@@@@@@@@@@@@@>-      -----'**************'----------.#@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@[>        ------**********`------------d@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@[        '-------"****?`-------------.Q@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@&,         ------'''''---------------d@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@o,         ------------------------.Q@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@[_         '----------------------d@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@[,          --------------------]@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@b-_          '----------------.Q@@@@@@@@@@@@@@@@@@@@
@@@@@@@@@@@@@@@@@@@b-            '--------------Q@@@@@@@@@@@@@@@@@@@@@

You know you're doing a fun gig when you get to do things like the above on billable hours.

Full story: writing a test suite for reading data from eID cards. It makes sense to decode the JPEG data which you read from the card, so that you know there's no error in the lower-layer subroutines (which would result in corruption). And since we've decoded it anyway, why not show it in the test suite log? Right.

Posted Fri Sep 5 12:20:53 2014

Yesterday, I spent most of the day finishing up the work I'd been doing on introducing multiarch to the eID middleware, and did another release of the Linux builds. As such, it's now possible to install 32-bit versions of the eID middleware on a 64-bit Linux distribution. For more details, please see the announcement.

Learning how to do multiarch (or biarch, as the case may be) for three different distribution families has been a, well, learning experience. Being a Debian Developer, figuring out the technical details for doing this on Debian and its derivatives wasn't all that hard. You just make sure the libraries are installed to the multiarch-safe directories (i.e., /usr/lib/<gnu arch triplet>), you add some Multi-Arch: foreign or Multi-Arch: same headers where appropriate, and you're done. Of course the devil is in the details (define "where appropriate"), but all in all it's not that difficult and fairly deterministic.
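
As an illustration of what that boils down to on the Debian side (libfoo0 here is a placeholder, not one of the actual eID packages), a shared library package typically gets a stanza like this, with its files installed under the triplet directory:

# debian/control (excerpt)
Package: libfoo0
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example shared library

# debian/libfoo0.install
usr/lib/*/libfoo.so.*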

The Fedora (and derivatives, like RHEL) approach to biarch is that 64-bit distributions install into /usr/lib64 and 32-bit distributions install into /usr/lib. This goes for any architecture family, not just the x86 family; the same method works on ppc and ppc64. However, since fedora doesn't do powerpc anymore, that part is a detail of little relevance.

Once that's done, yum has some heuristics whereby it will prefer native-architecture versions of binaries when asked, and may install both the native-architecture and foreign-architecture version of a particular library package at the same time. Since RPM already has support for installing multiple versions of the same package on the same system (a feature that was originally created, AIUI, to support the installation of multiple kernel versions), that's really all there is to it. It feels a bit fiddly and somewhat fragile, since there isn't really a spec and some parts seem fairly undefined, but all in all it seems to work well enough in practice.
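
For example (libfoo being a placeholder package name), pulling in both flavours of a library on a 64-bit system is just a matter of naming the architecture explicitly:

yum install libfoo.x86_64    # native build, ends up in /usr/lib64
yum install libfoo.i686      # 32-bit build, ends up in /usr/lib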

The openSUSE approach is vastly different from the other two. Rather than installing the foreign-architecture packages natively, as in the Debian and Fedora approaches, openSUSE wants you to take the native foo.ix86.rpm package and convert that to a foo-32bit.x86_64.rpm package. The conversion process filters out non-unique files (only allows files to remain in the package if they are in library directories, IIUC), and copes with the lack of license files in /usr/share/doc by adding a dependency header on the native package. While the approach works, it feels like unnecessary extra work and bandwidth to me, and obviously also wouldn't scale beyond biarch.

It also isn't documented very well; when I went to openSUSE IRC channels and started asking questions, the reply was something along the lines of "hand this configuration file to your OBS instance". When I told them I wasn't actually using OBS and had no plans of migrating to it (because my current setup is complex enough as it is, and replacing it would be far too much work for too little gain), it suddenly got eerily quiet.

Eventually I found out that the part of OBS which does the actual build is a separate codebase, and integrating just that part into my existing build system was not that hard to do, even though it doesn't come with a specfile or RPM package and wants to install files into /usr/bin and /usr/lib. With all that and some more weirdness I've found in the past few months of building packages for openSUSE, I now have... Ideas(TM) about how openSUSE does things. That's for another time, though.

(disclaimer: there's a reason why I'm posting this on my personal blog and not on an official website... don't take this as an official statement of any sort!)

Posted Thu Aug 21 10:30:36 2014

Several years ago, I blogged about how to use a Belgian electronic ID card with SSH. I never really used it myself, but was interested in figuring out if it would still work.

The good news is that since then, you don't need to recompile OpenSSH anymore to get PKCS#11 support; this is now compiled in by default.

The slightly bad news is that there will be some more typing. Rather than entering ssh-add -D 0 (to access the PKCS#11 certificate in slot 0), you should now enter something along the lines of ssh-add -s /usr/lib/libbeidpkcs11.so.0. This will ask for your passphrase, but it isn't necessary to enter the correct PIN code at this point in time. The first time you try to log on, you'll get a standard beid dialog box where you should enter your PIN code; this will then work. The next time, you'll be logged on and you can access servers without having to enter a PIN code.
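
Spelled out, and assuming the library lives where the Debian packages put it, that is:

ssh-add -s /usr/lib/libbeidpkcs11.so.0    # register the PKCS#11 module with the agent
ssh-add -l                                # the keys on the card should now be listed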

The worse news is that there seems to be a bug in ssh-agent, making it impossible to unload a PKCS#11 library. Doing ssh-add -D will remove your keys from the agent; the next time you try to add them again, however, ssh-agent will simply report SSH_AGENT_FAILURE. I suspect the dlopen()ed modules aren't being unloaded when the keys are removed.

Unfortunately, the same (or at least, a similar) bug appears to occur when one removes the card from the cardreader.

As such, I don't currently recommend trying to use this.

Update: fix command-line options to ssh-add invocation above.

Posted Thu Jul 31 12:28:11 2014

A few weeks back, I learned that some government webinterfaces require users to download a PDF file, sign it with their eID, and upload the signed PDF document. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.

Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eid middleware packages were not multiarch ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".

For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...

... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com containing some old versions, but nothing more recent than a 5.x version.

Well, I suppose that settles that, then.

Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:

  • run dpkg --add-architecture i386 if you haven't yet enabled multiarch
  • Install the eid-archive package, as usual
  • Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
  • run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
  • run apt-get update
  • run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
  • run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version.
  • run your 32-bit application and sign things.

You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...

Posted Fri Jul 25 13:44:07 2014

Dear lazyweb,

reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do all that reprepro does by hand—by calling things like cp and dpkg-scanpackages and gpg and other things by hand—but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.
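
For reference, a minimal reprepro setup is little more than a conf/distributions file and an includedeb call; the codename, component and signing key below are placeholders for whatever your repository needs:

# conf/distributions
Origin: example.org
Label: Example repository
Codename: wheezy
Architectures: amd64 i386 source
Components: main
SignWith: 0xDEADBEEF

# adding a package:
reprepro -b /srv/repo includedeb wheezy foo_1.0-1_amd64.deb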

At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my above point, apparently I forgot to do a few things there, meaning some of the RPM repositories didn't actually work correctly, and my testing didn't catch it.
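
For the record, the manual routine boils down to something like this (paths are placeholders, and the gpg step is what the openSUSE repositories want for signed metadata, if I remember the details correctly):

cp *.rpm /srv/rpm/fedora/20/x86_64/
createrepo /srv/rpm/fedora/20/x86_64/
# for the openSUSE repository, also sign the repository metadata:
gpg --detach-sign --armor /srv/rpm/opensuse/13.1/x86_64/repodata/repomd.xml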

Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, well, running createrepo manually isn't too much of a problem. When it gets beyond one's own systems, however, and when you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.

So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?

Please don't say "custom scripts" ;-)

Posted Wed Jul 16 12:43:45 2014