Let's say you have a configure.ac file which contains this:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")
AC_SUBST([p11_moduledir])

and that it goes with a Makefile.am which contains this:

dist_p11_module_DATA = foo.module

Then things should work fine, right? When you run make install, your modules install to the right location, and p11-kit will pick up everything the way it should.

Well, no. Not exactly. That is, it will work for the common case, but not for some other cases. You see, if you do that, then make distcheck will fail pretty spectacularly. At least if you run it as non-root (which you really, really should do). The problem is that specifying the p11_moduledir variable in that way hardcodes it; it doesn't honour $prefix or $DESTDIR. The result is that when a user installs your package with --prefix=/opt/testmeout, it will still overwrite files in the system directory. Obviously, that's not desirable.

The $DESTDIR bit is especially troublesome, as it makes packaging your software for the common distributions complicated (most packaging software heavily relies on DESTDIR support to "install" your software in a staging area before turning it into an installable package).

So what's the right way then? I've been wondering about that myself, and asked for the right way to do something like that on the automake mailing list a while back. The answer I got there wasn't entirely satisfying, and at the time I decided to take the easy way out (EXTRA_DIST the file, but don't actually install it). Recently, however, I ran into a similar problem for something else, and decided to try to do it the proper way this time around.

p11-kit, like systemd, ships pkg-config files which contain variables for the default locations to install files into. These variables' values are meant to be easy to use from scripts, so that no munging of them is required if you want to directly install to the system-wide default location. The downside of this is that, if you want to install to the system-wide default location by default from an autotools package (but still allow the user to --prefix your installation into some other place, accepting that then things won't work out of the box), you do need to do the aforementioned munging.
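
You can see what such a variable looks like before any munging by asking pkg-config for it directly (the output below is what a typical multiarch Debian system might report; the exact path on your machine may differ):

pkg-config --variable=p11_module_path p11-kit-1
/usr/lib/x86_64-linux-gnu/pkcs11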

Luckily, that munging isn't too hard, provided whatever package you're installing for did the right thing:

PKG_CHECK_VAR([p11_moduledir], "p11-kit-1", "p11_module_path")
PKG_CHECK_VAR([p11kit_libdir], "p11-kit-1", "libdir")
if test -z "$ac_cv_env_p11_moduledir_set"; then
    p11_moduledir=$(echo $p11_moduledir|sed -e "s,$p11kit_libdir,\${libdir},g")
fi
AC_SUBST([p11_moduledir])

Whoa, what just happened?

First, we ask p11-kit-1 where it expects modules to be. After that, we ask p11-kit-1 what was used as "libdir" at installation time. Usually that should be something like /usr/lib or /usr/lib/<gnu arch triplet> or some such, but it could really be anything.

Next, we test to see whether the user set the p11_moduledir variable on the command line. If so, we don't want to munge it.

The next line takes the value of p11_module_path, looks in it for whatever libdir was set to when p11-kit-1 was installed, and replaces that with the literal string ${libdir}.
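
To make that concrete, here's the same substitution run with some hypothetical values (assuming p11-kit was configured with libdir=/usr/lib/x86_64-linux-gnu; substitute whatever your system actually uses):

p11kit_libdir=/usr/lib/x86_64-linux-gnu
p11_moduledir=/usr/lib/x86_64-linux-gnu/pkcs11
echo $p11_moduledir|sed -e "s,$p11kit_libdir,\${libdir},g"
${libdir}/pkcs11

Since ${libdir} is left unexpanded, make will later resolve it against your own package's libdir, and thus against $prefix and $DESTDIR.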

Finally, we exit our if and AC_SUBST our value into the rest of the build system.

The resulting package will have the following semantics:

  • If someone installs p11-kit-1 and your package with the same prefix, the files will install to the correct location.
  • If someone installs both packages with a different prefix, then by default the files will not install to the correct location. This is what you'd want, however; using a non-default prefix is the only way to install something as non-root, and if root installed something into /usr, a normal user wouldn't be able to fix things.
  • If someone installs both packages with a different prefix, but sets the p11_moduledir variable to the correct location at configure time, then things will work as expected.

I suppose it would've been easier if the PKG_CHECK_VAR macro could (optionally) do that munging by itself, but then, can't have everything.

Posted Thu 08 Sep 2016 02:44:16 PM CEST

By popular request...

If you go to the Debian video archive, you will notice the appearance of an "lq" directory in the debconf16 subdirectory of the archive. This directory contains low-resolution re-encodings of the same videos that are available in the toplevel.

The quality of these videos is obviously lower than that of the ones made available during debconf, but their file sizes should be up to about a quarter of those of the full-quality versions. This may make them more attractive as a quick download, as a version for a small screen, as a download over a mobile network, or something of the sort.

Note that the audio quality has not been reduced. If you're only interested in the audio of the talks, these files may be a better option.

Posted Wed 27 Jul 2016 10:13:54 PM CEST

Over the past few days, I've been tweaking the video review system which we're using here at debconf, so that videos are published automatically once review has finished; and I can happily announce that, as of a short while ago, the first two files are now visible on the meetings archive. Yes, my own talk is part of that. No, that's not a coincidence. However, the other talks should not take too long ;-)

Future plans include the addition of a video RSS feed, and showing the videos on the debconf16 website. Stay tuned.

Posted Tue 05 Jul 2016 11:58:57 AM CEST

I had planned to do some work on NBD while here at debcamp. Here's a progress report:

Task list (each task tracked through three stages: concept, code, and tested):

  • Change init script so it uses /etc/nbdtab rather than /etc/nbd-client for configuration
  • Change postinst so it converts existing /etc/nbd-client files to /etc/nbdtab
  • Change postinst so it generates /etc/nbdtab files from debconf
  • Create systemd unit for nbd based on /etc/nbdtab
  • Write STARTTLS support for client and/or server

The first four are needed to fix Debian bug #796633, of which "writing the systemd unit" was the one that seemed hardest. The good thing about debcamp, however, is that experts abound (thanks Tollef), so that part's done now.

What's left:

  • Testing the init script modifications that I've made, so as to support those users who dislike systemd. They're fairly straightforward, and I don't anticipate any problems, but it helps to make sure.
  • Migrating the /etc/nbd-client configuration file to an nbdtab(5) one. This should be fairly straightforward, it's just a matter of Writing The Code(TM).
  • Changing the whole debconf setup so it writes (and/or updates) an nbdtab(5) file rather than a /etc/nbd-client shell snippet. This falls squarely into the "OMFG what the F*** was I thinking when I wrote that debconf stuff 10 years ago" area. I'll probably deal with it somehow. I hope. Not so sure how to do so yet, though.

If I manage to get all of the above to work and there's time left, I'll have a look at implementing STARTTLS support into nbd-client and nbd-server. A spec for that exists already, there's an alternative NBD implementation which has already implemented it, and preliminary patches exist for the reference implementation, so it's known to work; I just need to spend some time slapping the pieces together and making it work.

Ah well. Good old debcamp.

Posted Wed 29 Jun 2016 03:07:26 PM CEST

If you're reading this through Planet Grep, you may notice that the site's layout has been overhauled. The old design, which had just celebrated its 10th birthday, was starting to show its age. For instance, the site rendered pretty badly on mobile devices.

In that context, when I did the move to github a while back, I contacted Gregory (who'd done the original design), asking him whether he would be willing to update it. He did say yes, although it would take him a while to be able to make the time.

That's now happened, and the new design is live. We hope you like it.

Posted Wed 16 Mar 2016 08:52:40 AM CET

cat hello.c
#include <stdio.h>
int main(void) {
    printf("Hello World!\n");
    return 0;
}

A simple, standard, C program. Compiling and running it shouldn't produce any surprises... except when it does.

./hello
bash: ./hello: No such file or directory

The first time I got that, it was quite confusing. So, strace to the rescue -- I thought. But nope:

strace ./hello
execve("./hello", ["./hello"], [/* 14 vars */]) = -1 ENOENT (No such file or directory)

(output trimmed)

No luck there. What's going on? No, it's not some race condition whereby the file is being removed. A simple ls -l will show that it's there, and that it's executable. So what? When I first encountered this, I was at a loss. Eventually, after searching for several hours, I figured it out, filed it under "surprising things with an easy fix", and didn't think about it again for a while, because once you understand what's going on, it's not that complicated. But I recently realized that it's not that obvious, and I've met several people who were at a loss when encountering this and who didn't figure it out. So, here goes:

If you tell the kernel to run some application (i.e., if you run one of the exec system calls), it will open the binary and try to figure out what kind of application it's dealing with. It may be a script with a shebang line (in which case the kernel calls the appropriate interpreter), or it may be an ELF binary, or whatever.

If it is an ELF binary, the kernel checks if the architecture of the binary matches the CPU we're running on. If that's the case, it will just execute the instructions in the binary. If it's not an ELF binary, or if the architecture doesn't match, it will fall back on some other mechanism (e.g., the binfmt_misc subsystem could have some emulator set up to run binaries for the architecture in question, or may have been set up to run java on jar files, etc). Eventually, if all else fails, the kernel will return an error:

./hello: cannot execute binary file: Exec format error

"Exec format error" is the error message for ENOEXEC, the error code which the kernel returns if it determines that it cannot run the given binary. This makes it fairly obvious what's wrong, and why.

Now assume the kernel is biarch -- that is, it runs on an architecture which can run binaries for two CPU ISAs; e.g., this may be the case for an x86-64 machine, where the CPU can run binaries for that architecture as well as binaries for the i386 architecture (and its 32-bit descendants) without emulation. If the kernel has the option to run 32-bit x86 binaries enabled at compile time (which most binary distributions do, these days), then running i386 ELF binaries is possible, in theory. As far as the kernel is concerned, at least.

So, the kernel maps the i386 binary into memory, and jumps to the binary's entry point. And here is where it gets interesting: When the binary in question uses shared libraries, the kernel not only needs to open this binary itself, but also the runtime dynamic linker (RTDL). It is then the job of the RTDL to map the shared libraries into memory for the process to use, before jumping to its code.

But what if the RTDL isn't installed? Well, then the kernel won't find it. The RTDL is just a file like any other, so the kernel will produce an ENOENT error code -- the error message for which is "No such file or directory".

And there you have it. The solution is simple, and the explanation too; but still, even so, it can be a bit baffling: the system tells you "the file isn't there", without telling you which file it's missing. Since you passed it only one file, it's reasonable for you to believe the missing file is the binary you're asking the system to execute. But usually, that's not the problem: the problem is that the file containing the RTDL is not installed, which is just a result of you not having enabled multiarch.
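
If you want to verify that this is indeed what's going on, you can ask the binary which interpreter (i.e., which RTDL) it wants, and then check whether that file actually exists. The i386 paths and the exact output below are just an example:

file ./hello
./hello: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, ...

readelf -l ./hello | grep interpreter
      [Requesting program interpreter: /lib/ld-linux.so.2]

ls -l /lib/ld-linux.so.2
ls: cannot access /lib/ld-linux.so.2: No such file or directory

If that last command fails like it does here, you've found your culprit.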

Solution:

dpkg --add-architecture <target architecture>
apt update
apt install libc6:<target architecture>
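
For example, if the binary you're trying to run is an i386 one on an amd64 Debian system (just an assumption; substitute whatever architecture applies in your case), that would be:

dpkg --add-architecture i386
apt update
apt install libc6:i386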

Obviously you might need to add some more libraries, too, but that's not usually a problem.

Posted Fri 11 Mar 2016 12:25:58 PM CET

Before this bug:

[image]

after:

[image]

Whee!

Posted Fri 11 Mar 2016 12:25:58 PM CET

alias ls='ls --color=auto -N'

Unfortunately it doesn't actually revert to the previous behaviour, but it's close enough.

Posted Wed 03 Feb 2016 02:54:00 PM CET

While I'm not convinced that encrypting everything by default is necessarily a good idea, it is certainly true that encryption has its uses. Unfortunately, for the longest time getting an SSL certificate from a CA was quite a hassle -- and then I'm not even mentioning the fact that it would cost money, too. In that light, the letsencrypt project is a useful alternative: rather than having to dabble with emails or webforms, letsencrypt does everything by way of a few scripts. Also, the letsencrypt CA is free to use, in contrast to many other certificate authorities.

Of course, once you start scripting, it's easy to overdo it. When you go to the letsencrypt website and try to install their stuff, they point you to a "letsencrypt-auto" thing, which starts installing a python virtualenv and loads of other crap, just so we can talk to a server and get a certificate properly signed. I'm sure that makes sense for some people, and I'm also sure it makes some things significantly easier, but honestly, I think it's a bit overkill. Luckily, there is also a regular letsencrypt script that doesn't do all the virtualenv stuff, and which is already packaged for Debian, so installing that gets rid of the overkill bits. It's not in stable, but it is an arch:all package. Unfortunately, not all dependencies are available in Jessie, but backporting it turned out to be not too complicated.

Once the client is installed, you need to get it to obtain a certificate, and you need to make your webserver actually use that certificate. By default, the letsencrypt client will update your server configuration for you, and run a temporary webserver, and do a whole lot of other things that I would rather not see it do. The good news, though, is that you can switch all that off. If you're willing to let it write a few files to your DocumentRoot, it can do the whole thing without any change to your system:

letsencrypt certonly -m <your mailaddress> --accept-tos --webroot -w \
    /path/to/DocumentRoot -d example.com -d www.example.com

In that command, the -m option (with mail address) and the --accept-tos option are only required the first time you ever run letsencrypt. Doing so creates an account for you at letsencrypt.org, without which you cannot get any certificates. The --accept-tos option specifies that you accept their terms of service, which you might want to read first before blindly accepting.

If all goes well, letsencrypt will tell you that it wrote your account information and some certificates somewhere under /etc/letsencrypt. All that's left is to update your webserver configuration so that the certificates are actually used. No, I won't tell you how that's done -- if you need that information, it's probably better if you just use the apache or nginx plugin of letsencrypt, which updates your webserver's configuration automatically. But if you already know how to do so, and prefer to maintain your configuration by yourself, then it's good to know that it is at least possible.

Since we specified two -d parameters, the certificate you get from the letsencrypt CA will have subjectAlternativeName fields, allowing you to use them for more than one domain name. I specified example.com and www.example.com as the valid names, but there is nothing in letsencrypt requiring that the domain names you specify share a suffix; so yes, it is certainly possible to have multiple domains on a single certificate, if you want that.
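
If you want to check which names ended up on the certificate, you can inspect it with openssl. The path below assumes example.com was the first -d argument, since the directory under /etc/letsencrypt/live is named after that first domain:

openssl x509 -in /etc/letsencrypt/live/example.com/cert.pem -noout -text | \
    grep -A1 "Subject Alternative Name"
            X509v3 Subject Alternative Name:
                DNS:example.com, DNS:www.example.com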

Finally, you need to ensure that your certificates are renewed before they expire. The easiest way to do that is to add a cronjob, which runs letsencrypt again; only this time, you'll need slightly different options:

letsencrypt certonly --renew-by-default --webroot -w \
    /path/to/DocumentRoot -d domain.name.tld -d www.domain.name.tld

The --renew-by-default option tells letsencrypt that you want to renew your certificates, not create new ones. If you specify this, the old certificate will be revoked, and a new one will be generated, again with three months validity. The symlinks under /etc/letsencrypt/live will be updated, so if your webserver's configuration uses those as the path to your certificates, you don't need to update any configuration files (although you may need to restart your webserver). You'll also notice that the -m and --accept-tos options are not specified this time around; this is because those are only required when you want to create an account -- once that step has been taken, letsencrypt will read the necessary information from its configuration files (and creating a new account for every renewal is just silly).
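
A minimal sketch of what such a cronjob could look like (the monthly schedule, the file name, and the apache reload at the end are just assumptions on my part; adjust them to your own setup):

#!/bin/sh
# Example: save as /etc/cron.monthly/letsencrypt-renew and make it executable
letsencrypt certonly --renew-by-default --webroot -w \
    /path/to/DocumentRoot -d example.com -d www.example.com
# reload the webserver so it picks up the renewed certificate (apache shown here)
service apache2 reload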

With that, you now have a working, browser-trusted, certificate.

Update: fix mistake about installing on Jessie. It's possible (I've done so), but it's been a while and I forgot that I had to recompile several of letsencrypt's dependencies in order for it to be installable.

Posted Sun 24 Jan 2016 12:01:42 PM CET

I've been wanting for a very long time to have hackergochis on Planet Grep. The support is there, but if people don't submit them, that makes it so much harder to actually have hackergochis.

Experience as a reader of Planet Debian has taught me that the photos do add some value; they personalize the experience, and make a Planet more interesting to read -- at least in my personal opinion.

In an effort to get more hackergochis, I've added libravatar URLs for everyone on Planet Grep for whom I have at least one email address. With this, hopefully things will start looking somewhat better.

If you don't like the avatar that's being generated for your feed currently, you have two options:

  1. Configure a hackergochi using libravatar or gravatar. Planet Grep will pick that up.
  2. Send me a pull request which adds your photo to the Planet Grep configuration (safe-for-work photos only, please)

If you currently don't have a hackergochi on your feed, that means I have no information on you. In that case, please contact me. We can then figure out what the best way forward is.

Posted Tue 17 Nov 2015 02:44:04 PM CET