Extrepo GitLab update

Earlier this month, GitLab B.V.'s package signing key expired, requiring them to rotate their key. This means that anyone who uses one of their packages needs to jump through a number of hoops to update their apt key configuration -- an annoying manual process that also requires people to download random files from the Internet, something extrepo was written to prevent. At least those files are served over https, but still.

I only noticed today, but I have now updated the extrepo metadata to carry the new key. That means that if you enable one of the GitLab repositories through extrepo enable, you will get the new key rather than the old one. On top of that, if you had already enabled the repository through extrepo, all you need to do to pull in the new key is to run extrepo update.
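
In practice, and assuming you run these as root, that boils down to something like the following sketch; the repository name here is only a placeholder, the actual name is whatever the extrepo metadata calls the GitLab repository you want:

    # not enabled yet: enabling now pulls in the rotated key
    extrepo enable gitlab_ce    # repository name is illustrative
    # already enabled through extrepo before the key rotation:
    extrepo update gitlab_ce    # refreshes the repository configuration, including the key
    apt update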

While I do apologise for the late update, hopefully it makes some people's lives a bit easier.

And if GitLab B.V. reads this: please send me an MR against the repository next time, so that we can get this done in time ;-)

Married!

If you read this post through Planet Debian, you may already know this from "Johnathan"'s post on the subject: on the 29th of February this year, Tammy and I got married. Yes. Really. No, five years ago I wouldn't have expected this myself, but here we are.

Tammy and I met four years ago at DebConf16 in Cape Town, South Africa, where she was a local organizer. If you were at dc16, you may remember the beautifully designed conference stationery, T-shirts, and bag; this was all her work. In addition, the opening and closing credits on the videos of that conference were designed by her.

As it happens, that's how we met. I've been a member of the DebConf video team since about 2010, when I first volunteered to handle a few cameras. In 2015, since someone had to do it, I installed Carl's veyepar on Debian's on-site servers, configured it, and ran it. I've been in charge of the postprocessing infrastructure -- first using veyepar, later using my own SReview -- ever since. So when I went to the DebConf organizers in 2016 to ask for preroll and postroll templates for the videos, they pointed me to Tammy; and we haven't quite stopped talking since.

I can speak from experience now when I say that a long-distance relationship is difficult. My previous place of residence, Mechelen, is about 9600 km away from Cape Town, so just seeing Tammy required about half a month's pay -- not something I always had to spare. But after a number of back-and-forth visits and a lot of paperwork, I have now been living in Cape Town for just over a year. Obviously, this has simplified things a lot.

The only thing left for me to do now is to train my brain to stop thinking there's something stuck on my left ring finger. It's really meant to be there, after all...

SReview Kubernetes update

About a week and a half ago, I mentioned that I'd been working on making SReview, my AGPLv3 video review and transcode system, work from inside a Kubernetes cluster. I noted at the time that while I'd made it work inside minikube, it couldn't actually be run from within a real Kubernetes cluster yet, mostly because I misunderstood how Kubernetes works and assumed you could just mount the same Kubernetes volume from multiple pods and share data that way (answer: no, you can't).

The way to fix that is to share the data not through volumes, but through something else. That would require that the individual job containers download and upload files somehow.

I had a look at how the Net::Amazon::S3 Perl module works (answer: it's very simple, really) and whether it would be doable to add a transparent file access layer to SReview which would access files either on the local file system or on an S3 service (answer: yes).

So, as of yesterday or so, SReview supports interfacing with an S3 service (only tested with MinIO for now) rather than "just" files on the local file system. As part of that, I also updated the code so it would not re-scan all files every time the sreview-detect job for detecting new files runs, but only when the "last changed" time (or mtime for local file system access) has changed -- otherwise it would download far too many files every time.

This turned out to be a lot easier than I anticipated, and I have now successfully managed, using MinIO, to run a full review cycle inside Kubernetes, without using any volumes except for the "ReadWriteOnce" ones backing the database and MinIO containers.

Additionally, my Kubernetes configuration files are now split up a bit (so you can more easily apply just the parts that make sense for your setup), and are (somewhat) tested.
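
As an illustration (a sketch only; the file names follow the dockerfiles/kube layout from the minikube post below, and may have changed with the split), applying just the parts you need looks something like this:

    # from a checkout of the SReview repository
    cd dockerfiles/kube
    kubectl apply -f master.yaml             # the core SReview components
    kubectl apply -f storage-minikube.yaml   # or substitute your own storage definitions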

If you want to try out SReview and you've already got Kubernetes up and running, this may be for you! Please give it a try and don't forget to send some feedback my way.

Running SReview in minikube

I spent the last week or so building Docker images and a set of YAML files that allow one to run SReview, my 99%-automated video review and transcode system, inside minikube, a program that sets up a mini Kubernetes cluster inside a VM for development purposes.

I wish the above paragraph would say "inside Kubernetes", but alas, unless your Kubernetes implementation has a ReadWriteMany volume that can be used from multiple nodes, this is not quite the case yet. In order to fix that, I am working on adding an abstraction layer that will transparently download files from an S3-compatible object store; but until that is ready, this work is not yet useful for large installations.

But that's fine! If you want to run SReview for a small conference, you can do so with minikube. It won't have the redundancy and reliability that a proper Kubernetes cluster provides, but then you don't really need that for a conference of a few days.

Here's what you do (a condensed shell version of these steps follows the list):

  • Download minikube (see the link above)
  • Run minikube start, and wait for it to finish
  • Run minikube addons enable ingress
  • Clone the SReview git repository
  • From the toplevel of that repository, run perl -I lib scripts/sreview-config -a dump|sensible-pager to see an overview of the available configuration options.
  • Edit the file dockerfiles/kube/master.yaml to add your configuration variables, following the instructions near the top
  • Once the file is configured to your liking, run kubectl apply -f master.yaml -f storage-minikube.yaml
  • Add sreview.example.com to /etc/hosts, and have it point to the output of minikube ip.
  • Create preroll and postroll templates, and download them to minikube in the location that the example config file suggests. Hint: minikube ssh has wget.
  • Store your raw recorded assets under /mnt/vda1/inputdata, using the format you specified for the $inputglob and $parse_re configuration values.
  • Profit!
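
Condensed into a shell session (leaving out the preroll/postroll templates and the raw assets, which you still need to put in place as described above), that looks roughly like this; the clone directory name and exact file locations are assumptions based on the list:

    minikube start
    minikube addons enable ingress
    git clone <the SReview git repository> && cd sreview
    # see which configuration options exist, then fill them in in master.yaml
    perl -I lib scripts/sreview-config -a dump | sensible-pager
    $EDITOR dockerfiles/kube/master.yaml
    kubectl apply -f dockerfiles/kube/master.yaml -f dockerfiles/kube/storage-minikube.yaml
    # make sreview.example.com resolve to the minikube VM
    echo "$(minikube ip) sreview.example.com" | sudo tee -a /etc/hosts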

This doesn't explain how to add a schedule to the database. My next big project (which probably won't happen until after the next FOSDEM) is to add a more advanced administrator's interface, so that you can just log in and add things from there. For now, though, you have to run kubectl port-forward svc/sreview-database 5432, and then use psql to localhost to issue SQL commands. Yes, that sucks.
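
In practice, that looks something like the sketch below; the database name and user are whatever you set in your configuration, and "sreview" is just a guess here:

    # forward the database service to localhost in the background...
    kubectl port-forward svc/sreview-database 5432 &
    # ...then connect to it and add your schedule with manual INSERTs
    psql -h localhost -p 5432 -U sreview sreview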

Having said that, if you're interested in trying this out, give it a go. Feedback welcome!

(many thanks to the people on the #debian-devel IRC channel for helping me understand how Kubernetes is supposed to work -- this wouldn't have worked nearly as nicely without them)

GR 2019 002

Just sent in my vote. After carefully weighing what I consider to be important, and reading all the options, I ended up with 84312756.

There are two options that rule out any compromise position; choice 1, "Focus on systemd", essentially says that anything not systemd is unimportant and we should just drop it. At the same time, choice 6, "support for multiple init systems is required", essentially says that you have to keep supporting other systems no matter what the rest of the world is doing lalala I'm not listening mom he's stealing my candy again.

Debian has always been a very diverse community; as a result, we have historically provided a wide range of valid choices to our users. One consequence is that some of our derivatives have chosen to base their very non-standard distributions on Debian. It is therefore, to me, no surprise that Devuan, the very strongly no-systemd distribution, was based on Debian and not on Fedora or openSUSE.

Personally I consider this variety in our users to be a good thing, and something to treasure; so any option that essentially throws that away, like choice 1, is not something I could live with. At the same time, the world has evolved, and most of our users do use systemd these days, which does provide a number of features over and above the older sysvinit system. Ignoring that fact, like option 6 does, is unreasonable in today's world.

So neither of those two options are acceptable to me.

That just leaves the other available options. All of those accept the fact that systemd has become the primary option, but they differ in the priority they give to the alternatives. The one outlier is option 5; it tries to give general guidance on portability issues, rather than commenting on the systemd situation specifically. While I can understand that desire, I don't agree with it, so it definitely doesn't get first position for me. At the same time, I don't think it's a terrible option, in that it still expresses an opinion I agree with. All in all, it made the cutoff above further discussion -- but only barely.

How did you vote?

extrepo followup

My announcement the other day has already resulted in a small amount of feedback (through various channels), and a few extra repositories have been added. There was, however, enough feedback, and it came in unstructured enough a manner, that I think it's time for a bit of a follow-up:

  • If you have any ideas about future directions this should take, then thank you! Please file them as issues against the extrepo-data repository, using the wishlist tag (if appropriate).
  • If you have a repository that you'd like to see added, awesome! Ideally you would just file a merge request; Vincent Bernat already did so, and I merged it this morning. If you can't figure out how to do that, then please file a bug with the information needed for the repository (deb URI, gpg key signing the repository, description of what type of software the repository contains, supported suites and components), and preferably also point to the gap in the documentation that made it hard for you, so I can improve it ;-)
  • One questioner asked why we don't put third-party repository metadata into .deb packages, and ship them that way. The reason is that the Debian archive doesn't change after release, which is exactly the point at which most external repositories would be updated for the new release. As such, while it might be useful to have something like that in certain cases (e.g., the (now-defunct) pkg-mozilla-archive-keyring package followed this scheme), it can't work in all cases. In contrast, extrepo is designed to be updateable after the release of a particular Debian suite.
Announcing extrepo

Debian ships with a lot of packages. This allows our users to install software without too much effort -- just run apt-get install foo, and foo gets installed.

However, Debian does not ship with everything, and for that reason there sometimes are things that are not installable with just the Debian repositories. Examples include:

  • Software that is not (yet) packaged for Debian
  • A repository for software that is in Debian, but for the bleeding-edge version of that software (e.g., maintained by upstream, or maintained by the Debian packager for that software)
  • Software that is not DFSG-free, and cannot be included in Debian's non-free repository due to licensing issues, but that can freely be installed by Debian users.

In order to enable and use such repositories on a Debian system, a user currently has to perform some actions that may be insecure:

  • Some repositories provide a link to a repository and a key, and expect the user to perform all actions to enable that repository manually. This works, but mistakes are easy to make (especially for beginners), and therefore it is not ideal.
  • Some repositories provide a script to enable the repository, which must be run as root. Such scripts are not signed when run. In other words, we expect users to run untrusted code downloaded from the Internet, when everywhere else we tell people that doing so is a bad idea.
  • Some repositories provide a single .deb file that can be installed, and which enables the necessary repositories in the apt configuration. Since, in contrast to RPM files, Debian packages are not signed, the configuration is not enabled securely this way.

While there is a tool to add signatures to Debian packages, the dpkg tool does not enforce the existence of such signatures, and therefore it is possible for an attacker to replace the (signed) .deb file with an unsigned variant, bypassing the signature entirely.

In an effort to remedy this whole situation, I looked at creating extrepo, a package that would download repository metadata from a special-purpose repository, verify the signatures placed on that metadata, and if everything matches, enable the repository by creating the necessary apt configuration files.

This should allow users to enable external repository "foo" by running extrepo enable foo, rather than downloading a script from foo's website and executing it as root -- or other similarly insecure options.
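
Once the package is available (see below), the workflow for a user should end up looking something like this minimal sketch, run as root, with "foo" and the final package name standing in for real values:

    apt install extrepo
    extrepo enable foo      # fetches and verifies foo's metadata, writes the apt configuration
    apt update              # pick up the newly enabled repository
    apt install some-package-from-foo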

The extrepo package has been uploaded to Debian; once NEW processing has finished, it will be available in Debian unstable.

I might upload it to backports too if no issues are found, but that won't be an immediate thing.

Announcing policy-rcd-declarative

A while ago, Debian's technical committee considered a request to figure out what a package should do if a service that is provided by that package does not restart properly on upgrade.

Traditional behavior in Debian has been to restart a service on upgrade, and to cause postinst (and, thus, the packaging system) to break if the daemon fails to start. This has obvious disadvantages: when package installation is not initiated by a person running apt-get upgrade in a terminal, a failure to restart a service may cause unexpected downtime, and that is not something you want to see.

At the same time, when the upgrade is run from the command line, having the packaging system fail is a pretty good indicator that something is wrong, and it tells the system administrator so early on, soon after the problem was created -- which is helpful for diagnosing the issue and possibly fixing it.

Eventually, the bug was closed with the TC declining to take a decision (for good reasons; see the bug report for details), but one takeaway for me was that the current interface on Debian for telling the system whether or not to restart a service upon package installation or upgrade, known as policy-rc.d, is flawed. It has several issues:

  1. The interface is too powerful; it requires an executable, which will probably be written in a Turing-complete language, when all most people want is to change the default policy for pretty much every init script (see the example below this list).
  2. The interface is undetectable. That is, for someone who has never heard of it, it is actually very difficult to discover that it exists, since the default state ("allow everything") of the interface is defined by "there is no file on the filesystem that points to it".
  3. Although the design document states that policy-rc.d scripts MUST (in the RFC sense of that word) be installed through the alternatives system, in practice this is not done in most cases where a policy-rc.d script is used. Since only one policy-rc.d can exist at any one time, the result is that one policy script gets overwritten by another, or that the cleanup routine of one policy script in fact cleans up the other one. This has caused at least one Debian derivative to end up in a situation where they thought they were installing a policy-rc.d script when in fact they were not.
  4. In some cases, it might be useful to have partial policies installed through various packages that cooperate. Given point 3, this is currently not possible.
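
To illustrate point 1: a very common policy-rc.d in practice is a variant of the classic deny-everything script below; invoke-rc.d treats exit status 101 as "action forbidden by policy". Requiring a full executable just to express something like that is overkill:

    #!/bin/sh
    # /usr/sbin/policy-rc.d: forbid every init script action
    exit 101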

Because I believe the current situation leaves room for improvement, by way of experiment I wrote up a proof-of-concept script called policy-rcd-declarative, whose intent is to use the current framework to provide a declarative replacement for policy-rc.d. I uploaded it to experimental back in March (because the buster release was too close at the time for me to feel comfortable uploading it to unstable), but I have just uploaded a new version to unstable.

This is currently still an experiment, and I might decide in the end that it's a bad idea; but I do think that most things would be an improvement over a plain policy-rc.d interface, so let's see how this goes.

Comments are (most certainly) welcome.

DebConf Video player

Last weekend, I sat down to learn a bit more about Angular, a TypeScript-based programming environment for rich client webapps. According to their website, "TypeScript is a typed superset of JavaScript that compiles to plain JavaScript", which makes the programming environment slightly easier to work with. Additionally, since TypeScript compiles to whatever subset of JavaScript you want to target, it compiles to something that should work on almost every browser (and if it doesn't, in most cases the fix is just to tweak the compatibility settings a bit).

Since I think learning a new environment is best done by actually writing a project that uses it, and since I thought it was something we could really use, I wrote a video player for the DebConf video team. It makes use of the metadata archive that Stefano Rivera has been working on for the last few years (or so). It's not quite ready yet (notably, I still need to add routing so you can deep-link to a particular video), but I think it has gotten to a state where it is useful for more general consumption.

We'll see where this gets us...

Moved!

A few months ago, I moved from Mechelen, Belgium to Cape Town, South Africa.

Muizenberg Beach

No, that's not the view outside my window. But Muizenberg beach is just a few minutes away by car. And who can resist such a beach? Well, if you ignore the seaweed, that is.

Getting settled after a cross-continental move takes some time, especially since the stuff I decided I wanted to take with me had to be shipped by boat, and that took a while to arrive. Additionally, the point of the move was to move in with someone, and having two people live together for the first time requires some adjustment to their daily routines; that is no different for us.

After having been here for several months now, however, things are pulling together:

I had a job lined up before I moved to Cape Town, but there were a few administrative issues in getting started, which meant that I had to wait at home for a few months while my savings were being reduced to almost nothing. That is now mostly sorted out (except that there seem to be a few delays with payment, but nothing that can't be resolved).

My stuff arrived a few weeks ago. Unfortunately the shipment was not complete; the 26" 16:10 FullHD monitor that I've had for a decade or so somehow got lost. I contacted the shipping company and they'll be looking into it, but I'm not hopeful. Beyond that, there were only a few minor items of damage (one of the feet of the TV broke off, and a few glasses broke), but nothing of real consequence. At least I did decide to go for the insurance, which should cover loss and breakage, so worst case I'll just have to buy a new monitor (and throw away a few pieces of debris).

While getting settled, I found it harder to spend quality time on free software or Debian-related things, but I did manage to join Gerry and Mark for some FOSDEM-related video review a while ago (meaning all the 2019 videos that could be released have now been released), and I spent some time configuring the Debian SReview instance for the Marseille miniconf last weekend; I will do the same for the Hamburg one that will be happening soon. Additionally, there has been some discussion on the upstream nbd mailinglist that I took part in.

All in all, I guess it's safe to say that I'm slowly coming out of hibernation. Up next: once the payment issues have been fully resolved and I can spend money with a bit more impunity, join a local choir and/or orchestra and/or tennis club, and have some time off.
