alias ls='ls --color=auto -N'

Unfortunately, it doesn't actually revert to the previous behaviour, but it's close enough.

Posted Wed Feb 3 14:54:00 2016

While I'm not convinced that encrypting everything by default is necessarily a good idea, it is certainly true that encryption has its uses. Unfortunately, for the longest time getting an SSL certificate from a CA was quite a hassle -- and that's before even mentioning the fact that it usually costs money, too. In that light, the letsencrypt project is a useful alternative: rather than having to dabble with emails or webforms, letsencrypt does everything by way of a few scripts. Also, the letsencrypt CA is free to use, in contrast to many other certificate authorities.

Of course, once you start scripting, it's easy to overdo it. When you go to the letsencrypt website and try to install their stuff, they point you to a "letsencrypt-auto" thing, which starts installing a python virtualenv and loads of other crap, just so we can talk to a server and get a certificate properly signed. I'm sure that makes sense for some people, and I'm also sure it makes some things significantly easier, but honestly, I think it's a bit overkill. Luckily, there is also a regular letsencrypt script that doesn't do any of the virtualenv stuff and which is already packaged for Debian, so installing that spares us the overkill bits. It's not in stable, but it is an arch:all package. Unfortunately, not all of its dependencies are available in Jessie, but backporting the missing bits turned out to be not too complicated.
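
For reference, backporting one such missing dependency boils down to something like the following. The package name here is purely illustrative, and it assumes you have a deb-src line for unstable in your sources.list:

apt-get source -t unstable python-acme      # illustrative package name
cd python-acme-*/
sudo apt-get build-dep python-acme          # may itself need a bit of backporting
dpkg-buildpackage -us -uc
sudo dpkg -i ../python-acme_*_all.deb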

Once the client is installed, you need to get it to obtain a certificate, and you need to activate that certificate in your webserver. By default, the letsencrypt client will update your server configuration for you, run a temporary webserver, and do a whole lot of other things that I would rather not see it do. The good news, though, is that you can switch all of that off. If you're willing to let it write a few files to your DocumentRoot, it can do the whole thing without any change to your system:

letsencrypt certonly -m <your mailaddress> --accept-tos --webroot -w \
    /path/to/DocumentRoot -d example.com -d www.example.com

In that command, the -m option (with your mail address) and the --accept-tos option are only required the first time you ever run letsencrypt. Doing so creates an account for you at letsencrypt.org, without which you cannot get any certificates. The --accept-tos option specifies that you accept their terms of service, which you might want to read before blindly accepting them.
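
The "few files" in question end up under .well-known/acme-challenge/ inside the DocumentRoot, so it can't hurt to verify beforehand that anything placed there is actually reachable over plain HTTP. Something along these lines will tell you (the test file name is obviously made up):

mkdir -p /path/to/DocumentRoot/.well-known/acme-challenge
echo hello > /path/to/DocumentRoot/.well-known/acme-challenge/test
curl http://example.com/.well-known/acme-challenge/test    # should print "hello"
rm /path/to/DocumentRoot/.well-known/acme-challenge/test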

If all goes well, letsencrypt will tell you that it wrote your account information and some certificates somewhere under /etc/letsencrypt. All that's left is to update your webserver configuration so that the certificates are actually used. No, I won't tell you how that's done -- if you need that information, it's probably better if you just use the apache or nginx plugin of letsencrypt, which updates your webserver's configuration automatically. But if you already know how to do so, and prefer to maintain your configuration by yourself, then it's good to know that it is at least possible.

Since we specified two -d parameters, the certificate you get from the letsencrypt CA will have subjectAlternativeName fields, allowing you to use it for more than one domain name. I specified example.com and www.example.com as the valid names, but there is nothing in letsencrypt requiring that the domain names you specify share a suffix; so yes, it is certainly possible to have multiple unrelated domains on a single certificate, if you want that.
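
If you're curious which names actually ended up in the certificate, openssl will happily show you the subjectAlternativeName field; the path below assumes the default layout under /etc/letsencrypt/live:

openssl x509 -noout -text -in /etc/letsencrypt/live/example.com/cert.pem \
    | grep -A1 'Subject Alternative Name'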

Finally, you need to ensure that your certificates are renewed before they expire. The easiest way to do that is to add a cronjob that runs letsencrypt again; only this time, you'll need slightly different options:

letsencrypt certonly --renew-by-default --webroot -w \
    /path/to/DocumentRoot -d example.com -d www.example.com

The --renew-by-default option tells letsencrypt that you want to renew your certificates, not create new ones. If you specify this, the old certificate will be revoked, and a new one will be generated, again with three months' validity. The symlinks under /etc/letsencrypt/live will be updated, so if your webserver's configuration uses those as the path to your certificates, you don't need to update any configuration files (although you may need to restart your webserver). You'll also notice that the -m and --accept-tos options are not specified this time around; that is because they are only required when you want to create an account. Once that step has been taken, letsencrypt will read the necessary information from its configuration files (and creating a new account for every renewal would just be silly).
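
As for the cronjob itself, a file in /etc/cron.d along these lines will do. The schedule and the apache reload at the end are just examples; adjust both to whatever your setup needs:

# /etc/cron.d/letsencrypt-renew: renew at 03:30 on the first of every month
30 3 1 * * root letsencrypt certonly --renew-by-default --webroot -w /path/to/DocumentRoot -d example.com -d www.example.com && service apache2 reload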

With that, you now have a working, browser-trusted certificate.

Update: fix mistake about installing on Jessie. It's possible (I've done so), but it's been a while and I forgot that I had to recompile several of letsencrypt's dependencies in order for it to be installable.

Posted Sun Jan 24 12:01:42 2016

I've been wanting for a very long time to have hackergochis on Planet Grep. The support is there, but if people don't submit them, it's that much harder to actually have any.

Experience as a reader of Planet Debian has taught me that the photos do add some value; they personalize the experience, and make a Planet more interesting to read -- at least in my personal opinion.

In an effort to get more hackergochis, I've added libravatar URLs for everyone on Planet Grep for whom I have at least one email address. With this, hopefully things will start looking somewhat better.
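
For the curious: a libravatar URL is nothing more than the avatar endpoint with an md5 hash of the (lowercased) email address appended, so constructing one by hand is easy enough. The address below is obviously just a placeholder:

hash=$(echo -n "someone@example.com" | tr 'A-Z' 'a-z' | md5sum | cut -d' ' -f1)
echo "https://cdn.libravatar.org/avatar/$hash"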

If you don't like the avatar that's being generated for your feed currently, you have two options:

  1. Configure a hackergochi using libravatar or gravatar. Planet Grep will pick that up.
  2. Send me a pull request which adds your photo to the Planet Grep configuration (safe-for-work photos only, please).

If you currently don't have a hackergochi on your feed, that means I have no information on you. In that case, please contact me. We can then figure out what the best way forward is.

Posted Tue Nov 17 14:44:04 2015

noun | ter·ror·ism | \ˈter-ər-ˌi-zəm\ | no plural

The mistaken belief that it is possible to change the world through acts of cowardice.

First use in English 1795 in reference to the Jacobin rule in Paris, France.

ex.: They killed a lot of people, but their terrorism only intensified the people's resolve.

Posted Mon Nov 16 09:07:28 2015

My main reason for being here was to join the Debconf video team sprint. But hey, since I'm here anyway, why not join all of it, right?

Right.

Apart from the video team stuff that I've been involved in, I've been spending the last few days improving NBD:

  • Debian bug #803795: Dealt with the fact that recent Linux kernels handle AF_UNSPEC differently than earlier ones do, and made IPv6 be enabled by default again. While at it, I also made it possible to export on multiple listening addresses. Update: it isn't a kernel change, it's a change in the default setting of the net.ipv6.bindv6only sysctl (a quick way to check yours is shown after this list); so current nbd can be made to work over v6 with current Debian, but doesn't do so by default. With this change, it does.
  • Since 3.10 it has no longer been possible to enable the oldstyle handshake, but the code still checked the various data structures to see whether oldstyle was wanted (which it never was), only to then maybe do the oldstyle handshake. This has now been removed.
  • I received notification from upstream that there were some issues with the TRIM handling, which should hopefully be resolved now. Added a test to the test suite to prevent this from happening again, too. That required some heavy restructuring of the code, but ah well.
  • Silenced various compiler warnings (no, not by telling the compiler to ignore the issues in question ;-)
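
As promised above: checking what your system currently does with bindv6only is a one-liner; 0 means an IPv6 listening socket also accepts IPv4 connections, 1 means it does not:

sysctl net.ipv6.bindv6only
# to temporarily try the other behaviour (as root):
sysctl -w net.ipv6.bindv6only=1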

There are a few more things that I'd like to see fixed, but after that I'll probably release 3.12 (upstream and in Debian) sometime later this week. With that, I'll be ready to start tackling Debian bug #796633, to provide proper systemd support in nbd-client.

Posted Sat Nov 7 16:21:23 2015

For the longest time, the Planet Grep configuration was stored in a subversion repository on one of my servers. This worked, until the server was moved around once too often, and I apparently killed it. As such, updating things there was... "complicated".

In addition, the Planet Grep configuration repository was the only non-git repository that I was still dealing with, and subversion is just... wrong.

So, in an effort to streamline things, I've just moved the Planet Grep configuration from subversion to git. If you want to add your blog to Planet Grep, or if you want to update the URL where you post your stuff, you can now simply send me a pull request.

Posted Sun Nov 1 18:10:17 2015

I got me a new television last Wednesday. The previous one was still functional, but with its 20-inch display it was (by today's standards) a very small one. In itself that wasn't an issue, but combined with my rather large movie collection and the 5.1 surround set that I bought a few years ago, my living room was starting to look slightly ridiculous. The new TV is a Panasonic 40" UHD ("4K") 3D-capable so-called smart thing. Unfortunately, all did not go well.

When I initially hooked it up, I found it confusingly difficult to figure out how to make it display something useful from the HDMI input rather than from the (not yet connected) broadcast cable. This eventually turned out to be due to the fact that the HDMI input was not selectable by a button marked "AUX" or similar, but by a button marked with some pictogram on a location near the teletext controls, which was pretty much the last place I'd look for such a thing.

After crossing that bridge, I popped in the correct film for that particular day and started watching it. The first thing I noticed, however, was that something was odd with the audio. It turned out that the TV as well as the 5.1 amplifier support the CEC protocol, which allows the TV to control some functionality of the A/V receiver. Unfortunately, the defaults were set in the TV to route audio to the TV's speakers, rather than to the 5.1 amp. This was obviously wrong, and it took me well over an hour to figure out why that was happening, and how I could fix it. My first solution at that time was to disable CEC on the amplifier, so that I could override where the audio would go there. Unfortunately, that caused the audio and video to go out of sync; not very pleasant. In addition, the audio would drop out every twenty seconds or so, which if you're trying to watch a movie is horribly annoying, and eventually I popped the DVD into a machine with analog 5.1 audio and component HD video outputs; not the best quality, but at least I could stop getting annoyed about it.

Over the next few days, I managed to get the setup working better and better:

  • The audio dropping was caused by an older HDMI cable being used. I didn't know this, but apparently there are several versions of HDMI wiring, and if an older cable is used then the amount of data that can be passed over the line is not as high. Since my older TV didn't do 1080p (only 1080i) I didn't notice this before getting the new set, but changing out some of the HDMI cables fixed that issue.
  • After searching the settings a bit, I found that the TV does have a setting for making it route audio to the 5.1 amp, so I'm back to using CEC, which has several advantages.
  • The hardcoding of one particular type of video to one particular input in the 5.1 amp that I complained about in that other post does seem to have at least some merit: it turns out that this is part of the CEC as well, and so when I change from HDMI input to broadcast data, the TV will automatically switch the amp's input to the "TV" input, too. That's pretty cool, even though I had to buy yet another cable (this time a TOSLINK one) to make it work well.

There's just one thing remaining: when I go into the channel list and try to move some channels around, the TV has the audacity to tell me that it's "not allowed". I mean, I paid for it, so I get to say what's allowed, not you, thankyouverymuch. Anyway, I'm sure I'll figure that out eventually.

The TV also has some 3D capability, but unfortunately it's one of those that require active 3D glasses, so the set that I bought at the movie theatre a while ago won't work. So after spending several tens of euros on extra cabling, I'll have to spend even more on a set of 3D glasses. They'll probably be brand-specific, too. Ah well.

It's a bit odd, in my opinion, that it took me almost a week to get all that stuff working properly. Ten years ago, the old TV had some connections, a remote, and that was it; you hooked it up and you were done. Not anymore.

Posted Tue Oct 27 11:25:38 2015

I am not going to talk about Norbert Preining's continuous ranting against Debian's Code of Conduct. Suffice to say that it annoys me, and that I think he is wrong.

I am, however, going to mention that contrary to his statements (penultimate paragraph), yes, the Code of Conduct is supposed to apply to Planet Debian as well. Its introductory paragraph states:

The Debian Project, the producers of the Debian system, have adopted a code of conduct for participants to its mailinglists, IRC channels and other modes of communication within the project.

Note the "other modes of communication within the project" bit. Planet Debian is planet.debian.org. If your blog is aggregated there, you are communicating within the project.

At least that's my interpretation of it.

Posted Thu Oct 15 23:16:42 2015

Transcoding video from one format to another seems to be a bit of a black art. There are many tools that allow doing this kind of stuff, but one issue that most seem to have is that they're not very well documented.

I ran into this a few years ago, when I was first doing video work for FOSDEM and did not yet have proper tools for the review and transcoding workflow.

At the time, I just used mplayer to look at the .dv files, and wrote a text file with a simple structure to remember exactly what to do with them. That file was then fed to a perl script, which wrote out a shell script that used the avconv command to combine and extract the "interesting" data from the source DV files into a single DV file per talk, and which would then call another shell script that used gst-launch and sox to do a multi-pass transcode of those intermediate DV files into a WebM file.

While all that worked properly, it was a rather ugly hack, never cleaned up, and therefore I never really documented it properly either. Recently, however, someone asked me to do so anyway, so here goes. Before you want to complain about how this ate the videos of your firstborn child, however, note the above.

The perl script spent a somewhat large amount of code on reading the text file and parsing it into an array of hashes. I'm not going to reproduce that here, since the actual format of the file isn't all that important anyway. However, here are the interesting bits:

foreach my $pfile(keys %parts) {
        my @files = @{$parts{$pfile}};

        say "#" x (length($pfile) + 4);
        say "# " . $pfile . " #";
        say "#" x (length($pfile) + 4);
        foreach my $file(@files) {
                my $start = "";
                my $stop = "";

                if(defined($file->{start})) {
                        $start = "-ss " . $file->{start};
                }
                if(defined($file->{stop})) {
                        $stop = "-t " . $file->{stop};
                }
                if(defined($file->{start}) && defined($file->{stop})) {
                        my @itime = split /:/, $file->{start};
                        my @otime = split /:/, $file->{stop};
                        $otime[0]-=$itime[0];
                        $otime[1]-=$itime[1];
                        if($otime[1]<0) {
                                $otime[0]-=1;
                                $otime[1]+=60;
                        }
                        $otime[2]-=$itime[2];
                        if($otime[2]<0) {
                                $otime[1]-=1;
                                $otime[2]+=60;
                        }
                        $stop = "-t " . $otime[0] . ":" . $otime[1] .  ":" . $otime[2];
                }
                if(defined($file->{start}) || defined($file->{stop})) {
                        say "ln " . $file->{name} . ".dv part-pre.dv";
                        say "avconv -i part-pre.dv $start $stop -y -acodec copy -vcodec copy part.dv";
                        say "rm -f part-pre.dv";
                } else {
                        say "ln " . $file->{name} . ".dv part.dv";
                }
                say "cat part.dv >> /tmp/" . $pfile . ".dv";
                say "rm -f part.dv";
        }
        say "dv2webm /tmp/" . $pfile . ".dv";
        say "rm -f /tmp/" . $pfile . ".dv";
        say "scp /tmp/" . $pfile . ".webm video.fosdem.org:$uploadpath || true";
        say "mv /tmp/" . $pfile . ".webm .";
}

The shell script generated by that perl code uses avconv to cut the start- and end-junk off one or more .dv files, and concatenates the results into a single .dv file per talk. It uses /tmp rather than the working directory, since the working directory was somewhere on the network, and if you're going to write several gigabytes of data to an intermediate file, it's usually a good idea to write them to a local filesystem rather than to a networked one.

Pretty boring.

It finally calls dv2webm on the resulting .dv file. That script looks like this:

#!/bin/bash

set -e

newfile=$(basename $1 .dv).webm
wavfile=$(basename $1 .dv).wav
wavfile=$(readlink -f $wavfile)
normalfile=$(basename $1 .dv)-normal.wav
normalfile=$(readlink -f $normalfile)
oldfile=$(readlink -f $1)

echo -e "\033]0;Pass 1: $newfile\007"
gst-launch-0.10 webmmux name=mux ! fakesink \
  uridecodebin uri=file://$oldfile name=demux \
  demux. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=1 threads=2 ! queue ! mux.video_0 \
  demux. ! progressreport ! audioconvert ! audiorate ! tee name=t ! queue ! vorbisenc ! queue ! mux.audio_0 \
  t. ! queue ! wavenc ! filesink location=$wavfile
echo -e "\033]0;Audio normalize: $newfile\007"
sox --norm $wavfile $normalfile
echo -e "\033]0;Pass 2: $newfile\007"
gst-launch-0.10 webmmux name=mux ! filesink location=$newfile \
  uridecodebin uri=file://$oldfile name=video \
  uridecodebin uri=file://$normalfile name=audio \
  video. ! ffmpegcolorspace ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=2 threads=2 ! queue ! mux.video_0 \
  audio. ! progressreport ! audioconvert ! audiorate ! vorbisenc ! queue ! mux.audio_0

rm $wavfile $normalfile

... and is a bit more involved.

Multi-pass encoding of video means that we ask the encoder to first encode the file but store some statistics into a temporary file (/tmp/vp8-multipass, in our script), which the second pass can then reuse to optimize the transcoding. Since DV uses different ways of encoding things than does VP8, we also need to do a color space conversion (ffmpegcolorspace) and deinterlacing (deinterlace), but beyond that the video line in the first gstreamer pipeline isn't very complicated.

Since we're going over the file anyway and we need the audio data for sox, we add a tee plugin at an appropriate place in the audio line of the first gstreamer pipeline, so that we can later pick up that same audio data and write it to a wav file containing linear PCM data. Beyond the tee, we go on and do a vorbis encoding, as is needed for the WebM format. This is not actually required for a first pass, but ah well. There are some more conversion plugins in the pipeline (specifically, audioconvert and audiorate), but those are not very important.

We next run sox --norm on the .wav file, which does a fully automated audio normalisation on the input. Audio normalisation is the process of adjusting volume levels so that the audio is not too loud, but also not too quiet. Sox has pretty good support for this; the default settings of its --norm parameter make it adjust the volume levels so that the highest peak will just about reach the highest value that the output format can express. As such, you have no clipping anywhere in the file, but also have an audio level that is actually useful.

Next, we run a second-pass encoding on the input file. This second pass uses the statistics gathered in the first pass to decide where to put its I- and P-frames so that they are placed at the most optimal position. In addition, rather than reading the audio from the original file, we now read the audio from the .wav file containing the normalized audio which we produced with sox, ensuring the audio can be understood.

Finally, we remove the intermediate audio files we created; and the shell script which was generated by perl also contained an rm command for the intermediate .dv file.

Some of this is pretty horrid, and I never managed to clean it up enough for it to be pretty (and now is not really the time). However, it Just Works(tm), and I am happy to report that it continues to work with gstreamer 1.0, provided you replace ffmpegcolorspace with the equally simple videoconvert, which does in gstreamer 1.0 what ffmpegcolorspace used to do in gstreamer 0.10.
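
Concretely, the first of the two pipelines then becomes something like this (the second pass changes in exactly the same way; apart from gst-launch-1.0 and videoconvert, nothing else needs touching):

gst-launch-1.0 webmmux name=mux ! fakesink \
  uridecodebin uri=file://$oldfile name=demux \
  demux. ! videoconvert ! deinterlace ! vp8enc multipass-cache-file=/tmp/vp8-multipass multipass-mode=1 threads=2 ! queue ! mux.video_0 \
  demux. ! progressreport ! audioconvert ! audiorate ! tee name=t ! queue ! vorbisenc ! queue ! mux.audio_0 \
  t. ! queue ! wavenc ! filesink location=$wavfile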

Posted Fri Aug 14 21:33:58 2015

The tape archiver, better known as tar, is one of the older backup programs in existence.

It's not very good at automated incremental backups (for which bacula is a good choice), but it can be useful for "let's take a quick snapshot of the current system" type of situations.

As I'm preparing to head off to debconf tomorrow, I'm taking a backup of my n-1 laptop (which still contains some data that I don't want to lose) so it can be reinstalled and used by the Debconf video team. While I could use a "proper" backup system, running tar to a large hard disk is much easier.

By default, however, tar won't preserve everything, so it is usually a good idea to add some extra options. This is what I'm currently running:

sudo tar cvpaSf player.local:carillon.tgz --rmt-command=/usr/sbin/rmt --one-file-system /

which breaks down to create tar archive, verbose output, preserve permissions, automatically determine compression based on file extension, handle Sparse files efficiently, write to a file on a remote host using /usr/sbin/rmt as the rmt program, don't descend into a separate filesystem (since I don't want /proc and /sys etc to be backed up), and back up my root partition.

Since I don't believe there's any value in separate file systems on a laptop, this will back up the entire contents of my n-1 laptop to carillon.tgz in my home directory on player.local.
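
For completeness: restoring such an archive onto a freshly partitioned and mounted disk looks roughly like this. The mountpoint is just an example, and --numeric-owner avoids surprises when the uid/gid mapping on the rescue system doesn't match the one in the archive:

# compression is detected automatically when extracting, so no 'a' or 'z' needed
sudo tar xvpf /path/to/carillon.tgz --numeric-owner -C /mnt/target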

Posted Sat Aug 8 10:45:54 2015