For a very long time now, I've wanted to have hackergochis on Planet Grep. The support is there, but if people don't submit them, there isn't much to display.

Experience as a reader of Planet Debian has taught me that the photos do add some value; they personalize the experience, and make a Planet more interesting to read -- at least in my personal opinion.

In an effort to get more hackergochis, I've added libravatar URLs for everyone on Planet Grep for whom I have at least one email address. With this, hopefully things will start looking somewhat better.

If you don't like the avatar that's being generated for your feed currently, you have two options:

  1. Configure a hackergochi using libravatar or gravatar. Planet Grep will pick that up.
  2. Send me a pull request which adds your photo to the Planet Grep configuration (safe-for-work photos only, please)
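For the curious: both services look up the image by a hash of your e-mail address, which is why an e-mail address is all Planet Grep needs. A sketch of the lookup with a made-up address (md5 is the gravatar-compatible form; the CDN hostname is libravatar's default):

```shell
# Trim whitespace, lowercase, then hash -- that hash is the avatar key.
email='  John.Doe@Example.COM '
hash=$(printf '%s' "$email" | tr -d '[:space:]' | tr '[:upper:]' '[:lower:]' | md5sum | cut -d' ' -f1)
echo "https://seccdn.libravatar.org/avatar/$hash"
```

If no avatar is registered for that hash, the service falls back to a generated placeholder, which is what you currently see on most feeds.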

If you currently don't have a hackergochi on your feed, that means I have no information on you. In that case, please contact me. We can then figure out what the best way forward is.

Posted Tue Nov 17 14:44:04 2015

noun | ter·ror·ism | \ˈter-ər-ˌi-zəm\ | no plural

The mistaken belief that it is possible to change the world through acts of cowardice.

First use in English 1795 in reference to the Jacobin rule in Paris, France.

ex.: They killed a lot of people, but their terrorism only intensified the people's resolve.

Posted Mon Nov 16 09:07:28 2015

My main reason for being here was to join the Debconf video team sprint. But hey, since I'm here anyway, why not join all of it, right?


Apart from the video team stuff that I've been involved in, I've been spending the last few days improving NBD:

  • Debian bug #803795: Dealt with the fact that recent Linux kernels handle AF_UNSPEC differently than earlier ones do, and made IPv6 be enabled by default again. While at it, made it possible to export on multiple listening addresses, too. Update: it isn't a kernel change, it's a change in the default setting of the net.ipv6.bindv6only sysctl; so current nbd can be made to work over v6 with current Debian, but doesn't do so by default. With this change, it does.
  • Since 3.10 it has no longer been possible to enable the oldstyle handshake, but the code still checked the various data structures to see if oldstyle was wanted (which it never was) so it could then maybe possibly do the oldstyle handshake. This has now been removed.
  • I received notification upstream that there were some issues with the TRIM handling, which hopefully should be resolved now. Added a test to the test suite to prevent this from happening again, too. That required some heavy restructuring of the code, but ah well.
  • Silenced various compiler warnings (no, not by telling the compiler to ignore the issues in question ;-)

There're a few more things that I'd like to see fixed, but after that I'll probably release 3.12 (upstream and in Debian) sometime later this week. With that, I'll be ready to start tackling Debian bug #796633, to provide proper systemd support in nbd-client.

Posted Sat Nov 7 16:21:23 2015

For the longest time, the Planet Grep configuration was stored in a subversion repository on one of my servers. This worked, until the server was moved around once too often, and I apparently killed it. As such, updating things there was... "complicated".

In addition, the Planet Grep configuration repository was the only non-git repository that I was still dealing with, and subversion is just... wrong.

So, in an effort to streamline things, I've just moved the configuration from subversion to git. If you want to add your blog to Planet Grep, or if you want to update the URL where you post your stuff, you can now simply send me a pull request.

Posted Sun Nov 1 18:10:17 2015

I got me a new television last Wednesday. The previous one was still functional, but with its 20 inch display it was (by today's standards) a very small one. In itself that wasn't an issue, but combined with my rather large movie collection and the 5.1 surround set that I bought a few years ago, my living room was starting to look slightly ridiculous. The new TV is a Panasonic 40" UHD ("4K") 3D-capable so-called smart thing. Unfortunately, all did not go well.

When I initially hooked it up, I found it confusingly difficult to figure out how to make it display something useful from the HDMI input rather than from the (not yet connected) broadcast cable. This eventually turned out to be due to the fact that the HDMI input was not selectable by a button marked "AUX" or similar, but by a button marked with some pictogram on a location near the teletext controls, which was pretty much the last place I'd look for such a thing.

After crossing that bridge, I popped in the correct film for that particular day and started watching it. The first thing I noticed, however, was that something was odd with the audio. It turned out that the TV as well as the 5.1 amplifier support the CEC protocol, which allows the TV to control some functionality of the A/V receiver. Unfortunately, the defaults were set in the TV to route audio to the TV's speakers, rather than to the 5.1 amp. This was obviously wrong, and it took me well over an hour to figure out why that was happening, and how I could fix it. My first solution at that time was to disable CEC on the amplifier, so that I could override where the audio would go there. Unfortunately, that caused the audio and video to go out of sync; not very pleasant. In addition, the audio would drop out every twenty seconds or so, which if you're trying to watch a movie is horribly annoying, and eventually I popped the DVD into a machine with analog 5.1 audio and component HD video outputs; not the best quality, but at least I could stop getting annoyed about it.

Over the next few days, I managed to get the setup working better and better:

  • The audio dropping was caused by an older HDMI cable being used. I didn't know this, but apparently there are several versions of HDMI wiring, and if an older cable is used then the amount of data that can be passed over the line is not as high. Since my older TV didn't do 1080p (only 1080i) I didn't notice this before getting the new set, but changing out some of the HDMI cables fixed that issue.
  • After searching the settings a bit, I found that the TV does have a setting for making it route audio to the 5.1 amp, so I'm back to using CEC, which has several advantages.
  • The hardcoding of one particular type of video to one particular input in the 5.1 amp that I complained about in that other post does seem to have at least some merit: it turns out that this is part of the CEC as well, and so when I change from HDMI input to broadcast data, the TV will automatically switch the amp's input to the "TV" input, too. That's pretty cool, even though I had to buy yet another cable (this time a TOSLINK one) to make it work well.

There's just one thing remaining: when I go into the channel list and try to move some channels around, the TV has the audacity to tell me that it's "not allowed". I mean, I paid for it, so I get to say what's allowed, not you, thankyouverymuch. Anyway, I'm sure I'll figure that out eventually.

The TV also has some 3D capability, but unfortunately it's one of those that require active 3D glasses, so the set that I bought at the movie theatre a while ago won't work. So after spending several tens of euros on extra cabling, I'll have to spend even more on a set of 3D glasses. They'll probably be brand-specific, too. Ah well.

It's a bit odd, in my opinion, that it took me almost a week to get all that stuff to work properly. Ten years ago, the old TV had some connections, a remote, and that was it; you hook it up and you're done. Not anymore.

Posted Tue Oct 27 11:25:38 2015

I am not going to talk about Norbert Preining's continuous ranting against Debian's Code of Conduct. Suffice to say that it annoys me, and that I think he is wrong.

I am, however, going to mention that contrary to his statements (penultimate paragraph), yes, the Code of Conduct is supposed to apply to Planet Debian as well. Its introductory paragraph states:

The Debian Project, the producers of the Debian system, have adopted a code of conduct for participants to its mailinglists, IRC channels and other modes of communication within the project.

Note the "other modes of communication within the project" bit. Planet Debian is planet.debian.org. If your blog is aggregated there, you are communicating within the project.

At least that's my interpretation of it.

Posted Thu Oct 15 23:16:42 2015

Transcoding video from one format to another seems to be a bit of a black art. There are many tools that allow doing this kind of stuff, but one issue that most seem to have is that they're not very well documented.

I ran into this a few years ago, when I was first doing video work for FOSDEM and did not yet have proper tools for the review and transcoding workflow.

At the time, I just used mplayer to look at the .dv files, and wrote a text file with a simple structure to remember exactly what to do with them. That file was then fed to a perl script, which wrote out a shell script that would use the avconv command to combine and extract the "interesting" data from the source DV files into a single DV file per talk, and which would then call a shell script that used gst-launch and sox to do a multi-pass transcode of those intermediate DV files into a WebM file.

While all that worked properly, it was a rather ugly hack, never cleaned up, and therefore I never really documented it properly either. Recently, however, someone asked me to do so anyway, so here goes. Before you want to complain about how this ate the videos of your firstborn child, however, note the above.

The perl script spends a fair amount of code reading the text file and parsing it into an array of hashes. I'm not going to reproduce that, since the actual format of the file isn't all that important anyway. However, here's the interesting bits:

foreach my $pfile(keys %parts) {
        my @files = @{$parts{$pfile}};

        say "#" x (length($pfile) + 4);
        say "# " . $pfile . " #";
        say "#" x (length($pfile) + 4);
        foreach my $file(@files) {
                my $start = "";
                my $stop = "";

                if(defined($file->{start})) {
                        $start = "-ss " . $file->{start};
                }
                if(defined($file->{stop})) {
                        $stop = "-t " . $file->{stop};
                }
                if(defined($file->{start}) && defined($file->{stop})) {
                        # avconv wants a duration for -t, not an end time;
                        # subtract start from stop, borrowing where needed
                        my @itime = split /:/, $file->{start};
                        my @otime = split /:/, $file->{stop};
                        $otime[2] -= $itime[2];
                        $otime[1] -= $itime[1];
                        $otime[0] -= $itime[0];
                        if($otime[2]<0) {
                                $otime[2] += 60;
                                $otime[1] -= 1;
                        }
                        if($otime[1]<0) {
                                $otime[1] += 60;
                                $otime[0] -= 1;
                        }
                        $stop = "-t " . $otime[0] . ":" . $otime[1] . ":" . $otime[2];
                }
                if(defined($file->{start}) || defined($file->{stop})) {
                        say "ln " . $file->{name} . ".dv part-pre.dv";
                        say "avconv -i part-pre.dv $start $stop -y -acodec copy -vcodec copy part.dv";
                        say "rm -f part-pre.dv";
                } else {
                        say "ln " . $file->{name} . ".dv part.dv";
                }
                say "cat part.dv >> /tmp/" . $pfile . ".dv";
                say "rm -f part.dv";
        }
        say "dv2webm /tmp/" . $pfile . ".dv";
        say "rm -f /tmp/" . $pfile . ".dv";
        say "scp /tmp/" . $pfile . ".webm video.fosdem.org:$uploadpath || true";
        say "mv /tmp/" . $pfile . ".webm .";
}

That script uses avconv to read one or more .dv files and transcode them into a single .dv file with all the start- or end-junk removed. It uses /tmp rather than the working directory, since the working directory was somewhere on the network, and if you're going to write several gigabytes of data to an intermediate file, it's usually a good idea to write them to a local filesystem rather than to a networked one.

Pretty boring.
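The only mildly interesting bit is the argument to avconv's -t option, which wants a duration rather than an end time, so the start timestamp has to be subtracted from the stop timestamp, borrowing across seconds and minutes. The same arithmetic can be sketched in plain shell (the timestamps here are made up):

```shell
# Subtract one H:M:S timestamp from another, borrowing where a
# component goes negative.
hms_diff() {
        start=$1 stop=$2
        oldIFS=$IFS; IFS=:
        set -- $start; ih=$1 im=$2 is=$3
        set -- $stop;  oh=$1 om=$2 os=$3
        IFS=$oldIFS
        os=$((os - is)); om=$((om - im)); oh=$((oh - ih))
        if [ $os -lt 0 ]; then os=$((os + 60)); om=$((om - 1)); fi
        if [ $om -lt 0 ]; then om=$((om + 60)); oh=$((oh - 1)); fi
        echo "$oh:$om:$os"
}
hms_diff 0:1:30 0:3:10    # prints 0:1:40
```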

It finally calls dv2webm on the resulting .dv file. That script looks like this:


#!/bin/bash
set -e

newfile=$(basename $1 .dv).webm
wavfile=$(basename $1 .dv).wav
wavfile=$(readlink -f $wavfile)
normalfile=$(basename $1 .dv)-normal.wav
normalfile=$(readlink -f $normalfile)
oldfile=$(readlink -f $1)

echo -e "\033]0;Pass 1: $newfile\007"
gst-launch-0.10 webmmux name=mux ! fakesink \
  uridecodebin uri=file://$oldfile name=demux \
  demux. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=1 threads=2 ! queue ! mux.video_0 \
  demux. ! progressreport ! audioconvert ! audiorate ! tee name=t ! queue ! vorbisenc ! queue ! mux.audio_0 \
  t. ! queue ! wavenc ! filesink location=$wavfile
echo -e "\033]0;Audio normalize: $newfile\007"
sox --norm $wavfile $normalfile
echo -e "\033]0;Pass 2: $newfile\007"
gst-launch-0.10 webmmux name=mux ! filesink location=$newfile \
  uridecodebin uri=file://$oldfile name=video \
  uridecodebin uri=file://$normalfile name=audio \
  video. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=2 threads=2 ! queue ! mux.video_0 \
  audio. ! progressreport ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0

rm $wavfile $normalfile

... and is a bit more involved.

Multi-pass encoding of video means that we ask the encoder to first encode the file but store some statistics into a temporary file (/tmp/vp8-multipass, in our script), which the second pass can then reuse to optimize the transcoding. Since DV uses different ways of encoding things than does VP8, we also need to do a color space conversion (ffmpegcolorspace) and deinterlacing (deinterlace), but beyond that the video line in the first gstreamer pipeline isn't very complicated.

Since we're going over the file anyway and we need the audio data for sox, we add a tee plugin at an appropriate place in the audio line of the first gstreamer pipeline, so that we can later pick up that same audio data and write it to a wav file containing linear PCM data. Beyond the tee, we go on and do a vorbis encoding, as is needed for the WebM format. This is not actually required for a first pass, but ah well. There are some more conversion plugins in the pipeline (specifically, audioconvert and audiorate), but those are not very important.

We next run sox --norm on the .wav file, which does a fully automated audio normalisation on the input. Audio normalisation is the process of adjusting volume levels so that the audio is not too loud, but also not too quiet. Sox has pretty good support for this; the default settings of its --norm parameter make it adjust the volume levels so that the highest peak will just about reach the highest value that the output format can express. As such, you have no clipping anywhere in the file, but also have an audio level that is actually useful.
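In other words, --norm computes a single gain factor from the loudest peak in the whole file and applies it uniformly. The arithmetic, sketched with a handful of made-up sample values:

```shell
# Peak normalisation: find the largest absolute sample value, then
# scale every sample by full-scale / peak. Sample values are made up.
awk 'BEGIN {
        n = split("0.10 -0.45 0.30 -0.20", s, " ")
        peak = 0
        for (i = 1; i <= n; i++) {
                a = (s[i] < 0) ? -s[i] : s[i]
                if (a > peak) peak = a
        }
        gain = 1.0 / peak           # sox derives this from the whole file
        for (i = 1; i <= n; i++) printf "%.2f ", s[i] * gain
        print ""
}'
```

Because every sample is multiplied by the same constant, the loudest peak lands exactly at full scale (-0.45 becomes -1.00 here) and the relative dynamics are untouched.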

Next, we run a second-pass encoding on the input file. This second pass uses the statistics gathered in the first pass to decide where to put its I- and P-frames so that they are placed at the most optimal position. In addition, rather than reading the audio from the original file, we now read the audio from the .wav file containing the normalized audio which we produced with sox, ensuring the audio can be understood.

Finally, we remove the intermediate audio files we created; and the shell script which was generated by perl also contained an rm command for the intermediate .dv file.

Some of this is pretty horrid, and I never managed to clean it up enough so it would be pretty (and now is not really the time). However, it Just Works(tm), and I am happy to report that it continues to work with gstreamer 1.0, provided you replace the ffmpegcolorspace by an equally simple videoconvert, which performs what ffmpegcolorspace used to perform in gstreamer 0.10.
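That port is mechanical enough to do with sed; demonstrated here on a single pipeline fragment (to port the actual script, you'd run the same expressions over the dv2webm file):

```shell
# gstreamer 0.10 -> 1.0: rename the launcher, and replace the
# ffmpegcolorspace element with videoconvert. One sample line:
line='gst-launch-0.10 webmmux name=mux ! fakesink uridecodebin ! ffmpegcolorspace ! deinterlace ! vp8enc'
echo "$line" | sed -e 's/gst-launch-0\.10/gst-launch-1.0/' -e 's/ffmpegcolorspace/videoconvert/'
```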

Posted Fri Aug 14 21:33:58 2015

The tape archiver, better known as tar, is one of the older backup programs in existence.

It's not very good at automated incremental backups (for which bacula is a good choice), but it can be useful for "let's take a quick snapshot of the current system" type of situations.

As I'm preparing to head off to Debconf tomorrow, I'm taking a backup of my n-1 laptop (which still contains some data that I don't want to lose) so it can be reinstalled and used by the Debconf video team. While I could use a "proper" backup system, running tar to a large hard disk is much easier.

By default, however, tar won't preserve everything, so it is usually a good idea to add some extra options. This is what I'm currently running:

sudo tar cvpaSf player.local:carillon.tgz --rmt-command=/usr/sbin/rmt --one-file-system /

which breaks down to: create a tar archive (c), verbose output (v), preserve permissions (p), automatically determine compression based on the file extension (a), handle sparse files efficiently (S), write to a file on a remote host (f) using /usr/sbin/rmt as the rmt program, don't descend into separate filesystems (since I don't want /proc and /sys etc. to be backed up), and back up my root partition.

Since I don't believe there's any value to separate file systems on a laptop, this will back up the entire contents of my n-1 laptop to carillon.tgz in my home directory on player.local.
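If you want to see what that flag set does before pointing it at /, you can exercise it on a throwaway directory; a sketch (all paths here are made up, and the remote/rmt part is dropped for the local test):

```shell
# Create a scratch tree, archive it with the same flags, restore it
# elsewhere, and check that content and permissions survive.
mkdir -p /tmp/tardemo/src
echo 'hello' > /tmp/tardemo/src/file
chmod 640 /tmp/tardemo/src/file
tar cvpaSf /tmp/tardemo/backup.tgz -C /tmp/tardemo/src .
mkdir -p /tmp/tardemo/restore
tar xvpf /tmp/tardemo/backup.tgz -C /tmp/tardemo/restore
```

On extraction the 'a' flag isn't needed: tar auto-detects the compression of an existing archive.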

Posted Sat Aug 8 10:45:54 2015

Because of CVE-2015-0847 and CVE-2013-7441, two security issues in nbd-server, I've had to prepare updates for nbd, for which there are various supported versions: upstream, unstable, stable, oldstable, oldoldstable, and oldoldstable-backports. I've just finished uploading security fixes for all of them. There are various relevant archives, and unfortunately it looks like they all have their own way of doing things regarding security:

  • For squeeze-lts (oldoldstable), you check out the secure-testing repository, run a script from that repository that generates a DLA number and email template, commit the result, and send a signed mail (whatever format) to the relevant mailinglist. Uploads go to ftp-master with squeeze-lts as target distribution.
  • For backports, you send a mail to the team alias requesting a BSA number, do the upload, and write the mail (based on a template that you need to modify yourself), which you then send (inline signed) to the relevant mailinglist. Uploads go to ftp-master with $dist-backports as target distribution, but you need to be in a particular ACL to be allowed to do so. However, due to backports policy, packages should never be in backports before they are in the distribution from which they are derived -- so I refrained from uploading to backports until the regular security update had been done. Not sure whether that's strictly required, but I didn't think it would do harm; even so, that did mean the procedure for backports was even more involved.
  • For the distributions supported by the security team (stable and oldstable, currently), you prepare the upload yourself, ask permission from the security team (by sending a debdiff), do the upload, and then ask the security team to send out the email. Uploads go to security-master, which implies that you may have to use dpkg-buildpackage's -sa parameter in order to make sure that the orig.tar.gz is actually in the security archive.
  • For unstable and upstream, you Just Upload(TM), because it's no different from a regular release.

While I understand how the differences between the various approaches have come to exist, I'm not sure I understand why they are necessary. Clearly, there's some room for improvement here.

As anyone who reads the above may see, doing an upload for squeeze-lts is in fact the easiest of the three "stable" approaches, since no intermediate steps are required. While I'm not about to advocate dropping all procedures everywhere, a streamlining of them might be appropriate.

Posted Sun May 24 21:18:22 2015

About a decade ago, I played in the (now defunct) "Jozef Pauly ensemble", a flute choir connected to the musical academy where I was taught to play the flute. At the time, this ensemble had the habit of going on summer trips every year; sometimes these trips were large international concert tours (like our 2001 trip to Australia), but that wasn't always the case; there have also been smaller trips, like the 2002 one to the French Ardennes.

While there, we went on a day trip to the city of Reims. As a city close to the front in the first world war, it has a museum dedicated to that subject that I remembered going to. But the fondest memory of that day was going to a park where a podium was set up, with a few stacks of fold-up chairs standing nearby. I took one and listened to the music.

That was the day when I realized that I kind of like jazz. I had come into contact with jazz before, but it had always been something to be used as a kind of musical wallpaper; something you put on, but don't consciously listen to. Watching this woman sing, however, was a different kind of experience altogether. I'm still very fond of her rendition of "Besame Mucho".

After having listened to the concert for about two hours, they called it quits, but did tell us that there was a record which you could buy. Of course, after having enjoyed the afternoon so much, I couldn't imagine not buying it, so that happened.

Fast forward several years, in the move from my apartment above my then-office to my current apartment (just around the corner), the record got put into the wrong box, and when I unpacked things again it got lost; permanently, I thought. Since I also hadn't digitized it yet at the time, I haven't listened to it anymore in quite a while.

But that time came to an end today. The record which I thought I'd lost wasn't, it was just in a weird place, and while cleaning yesterday, I found it sitting among a bunch of old stuff that I was going to throw out. Putting on the record today made me realize again how good it really is, and I thought that I might want to see if she was still active, and if she might perhaps have made another album.

It was great to find out that not only had she made six more albums since the one I bought, she'd also become a lot more known in the Jazz world (which I must admit I don't really follow all that well), and won a number of awards.

At the time, Youn Sun Nah was just a (fairly) recent graduate from a particular Jazz school in Paris. Today, she appears to be so much more...

Posted Sun Apr 19 11:25:55 2015