At work, I've been maintaining a perl script that needs to run a number of steps as part of a release workflow.
Initially, that script was very simple, but over time it has grown to do a number of things. And then some of those things did not need to be run all the time. And then we wanted to do this one exceptional thing for this one case. And so on; eventually the script became a big mess of configuration options and unreadable flow, and so I decided that I wanted it to be more configurable. I sat down and spent some time on this, and eventually came up with what I now realize is a domain-specific language (DSL) in JSON, implemented by creating objects in Moose, extensible by writing more object classes.
Let me explain how it works.
In order to explain, however, I need to explain some perl and Moose basics first. If you already know all that, you can safely skip ahead past the "Preliminaries" section that's next.
Preliminaries
Moose object creation, references.
In Moose, creating a class is done something like this:
package Foo;
use v5.40;
use Moose;
has 'attribute' => (
    is => 'ro',
    isa => 'Str',
    required => 1
);
sub say_something {
    my $self = shift;
    say "Hello there, our attribute is " . $self->attribute;
}
The above is a class that has a single attribute called attribute.
To create an object, you use the Moose constructor on the class, and
pass it the attributes you want:
use v5.40;
use Foo;
my $foo = Foo->new(attribute => "foo");
$foo->say_something;
(output: Hello there, our attribute is foo)
This creates a new object with the attribute attribute set to foo.
The attribute
accessor is a method generated by Moose, which functions
both as a getter and a setter (though in this particular case we made
the attribute "ro", meaning read-only, so while it can be set at object
creation time it cannot be changed by the setter anymore). So yay, an
object.
And it has methods, things that we set ourselves. Basic OO, all that.
One of the peculiarities of perl is its concept of "lists". Not to be confused with the lists of python -- a concept that is called "arrays" in perl and is somewhat different -- in perl, lists are enumerations of values. They can be used as initializers for arrays or hashes, and they are used as arguments to subroutines. Lists cannot be nested; whenever a hash or array is passed in a list, the list is "flattened", that is, it becomes one big list.
This means that the below script is functionally equivalent to the above script that uses our "Foo" object:
use v5.40;
use Foo;
my %args;
$args{attribute} = "foo";
my $foo = Foo->new(%args);
$foo->say_something;
(output: Hello there, our attribute is foo)
This creates a hash %args wherein we set the attributes that we want to pass to our constructor. We set one attribute in %args, the one called attribute, and then use %args and rely on list flattening to create the object with the same attribute set (list flattening turns a hash into a list of key-value pairs).
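To make the flattening itself visible, here is a tiny standalone snippet (not part of any of the examples above, just an illustration):
use v5.40;
my %args;
$args{attribute} = "foo";
# Assigning a hash to an array flattens it into a list of key/value pairs;
# the order of the pairs is not guaranteed.
my @flattened = %args;
say scalar @flattened; # prints 2: one key plus one value
say "@flattened";      # prints "attribute foo"
A hash with one key flattens to a two-element list; passing %args to a subroutine (or a constructor) hands it that same flattened list.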
Perl also has a concept of "references". These are scalar values that point to other values; the other value can be a hash, a list, or another scalar. There is syntax to create a non-scalar value at assignment time, called anonymous references, which is useful when one wants to remember non-scoped values. By default, references are not flattened, and this is what allows you to create multidimensional values in perl; however, it is possible to request list flattening by dereferencing the reference. The below example, again functionally equivalent to the previous two examples, demonstrates this:
use v5.40;
use Foo;
my $args = {};
$args->{attribute} = "foo";
my $foo = Foo->new(%$args);
$foo->say_something;
(output: Hello there, our attribute is foo)
This creates a scalar $args, which is a reference to an anonymous hash. Then, we set the key attribute of that anonymous hash to foo (note the use of the arrow operator here, which indicates that we want to dereference a reference to a hash), and create the object using that reference, requesting hash dereferencing and flattening by using a double sigil, %$.
As a side note, objects in perl are references too, hence the fact that we have to use the dereferencing arrow to access the attributes and methods of Moose objects.
Moose attributes don't have to be strings or even simple scalars. They can also be references to hashes or arrays, or even other objects:
package Bar;
use v5.40;
use Moose;
extends 'Foo';
has 'hash_attribute' => (
    is => 'ro',
    isa => 'HashRef[Str]',
    predicate => 'has_hash_attribute',
);
has 'object_attribute' => (
    is => 'ro',
    isa => 'Foo',
    predicate => 'has_object_attribute',
);
sub say_something {
    my $self = shift;
    if($self->has_object_attribute) {
        $self->object_attribute->say_something;
    }
    $self->SUPER::say_something unless $self->has_hash_attribute;
    say "We have a hash attribute!" if $self->has_hash_attribute;
}
This creates a subclass of Foo called Bar that has a hash attribute called hash_attribute, and an object attribute called object_attribute. Both of them are references; one to a hash, the other to an object. The hash ref is further limited in that it requires that each value in the hash must be a string (this is optional but can occasionally be useful), and the object ref in that it must refer to an object of the class Foo, or any of its subclasses.
The predicates
used here are extra subroutines that Moose provides if
you ask for them, and which allow you to see if an object's attribute
has a value or not.
The example script would use an object like this:
use v5.40;
use Bar;
my $foo = Foo->new(attribute => "foo");
my $bar = Bar->new(object_attribute => $foo, attribute => "bar");
$bar->say_something;
(output:
Hello there, our attribute is foo
Hello there, our attribute is bar
)
This example also shows object inheritance, and methods implemented in child classes.
Okay, that's it for perl and Moose basics. On to...
Moose Coercion
Moose has a concept of "value coercion". Value coercion allows you to tell Moose that if it sees one thing but expects another, it should convert it using a passed subroutine before assigning the value.
That sounds a bit dense without an example, so let me show you how it works. Reimagining the Bar package, we could use coercion to eliminate one object creation step from the creation of a Bar object:
package "Bar";
use v5.40;
use Moose;
use Moose::Util::TypeConstraints;
extends "Foo";
coerce "Foo",
from "HashRef",
via { Foo->new(%$_) };
has 'hash_attribute' => (
is => 'ro',
isa => 'HashRef',
predicate => 'has_hash_attribute',
);
has 'object_attribute' => (
is => 'ro',
isa => 'Foo',
coerce => 1,
predicate => 'has_object_attribute',
);
sub say_something {
my $self = shift;
if($self->has_object_attribute) {
$self->object_attribute->say_something;
}
$self->SUPER::say_something unless $self->has_hash_attribute;
say "We have a hash attribute!"
}
Okay, let's unpack that a bit.
First, we add the Moose::Util::TypeConstraints
module to our package.
This is required to declare coercions.
Then, we declare a coercion to tell Moose how to convert a HashRef to a Foo object: by using the Foo constructor on a flattened list created from the hashref that it is given.
Then, we update the definition of the object_attribute
to say that it
should use coercions. This is not the default, because going through the
list of coercions to find the right one has a performance penalty, so if
the coercion is not requested then we do not do it.
This allows us to simplify declarations. With the updated Bar
class,
we can simplify our example script to this:
use v5.40;
use Bar;
my $bar = Bar->new(attribute => "bar", object_attribute => { attribute => "foo" });
$bar->say_something
(output:
Hello there, our attribute is foo
Hello there, our attribute is bar
)
Here, the coercion kicks in because the value of object_attribute, which is supposed to be an object of class Foo, is instead a hash ref. Without the coercion, this would produce an error message saying that the type of the object_attribute attribute is not a Foo object. With the coercion, however, the value that we pass to object_attribute is passed to a Foo constructor using list flattening, and the resulting Foo object is then assigned to the object_attribute attribute.
Coercion works for more complicated things, too; for instance, you can use coercion to coerce an array of hashes into an array of objects, by creating a subtype first:
package MyCoercions;
use v5.40;
use Moose;
use Moose::Util::TypeConstraints;
use Foo;
subtype "ArrayOfFoo", as "ArrayRef[Foo]";
subtype "ArrayOfHashes", as "ArrayRef[HashRef]";
coerce "ArrayOfFoo", from "ArrayOfHashes", via { [ map { Foo->create(%$_) } @{$_} ] };
Ick. That's a bit more complex.
What happens here is that we use the map function to iterate over a list of values. The given list of values is @{$_}, which is perl for "dereference the default value as an array reference, and flatten the list of values in that array reference".
So the ArrayRef of HashRefs is dereferenced and flattened, and each HashRef in the ArrayRef is passed to the map function.
The map function then takes each hash ref in turn and passes it to the block of code that it is also given. In this case, that block is { Foo->create(%$_) }. In other words, we invoke the create factory method with the flattened hashref as an argument. This returns an object of the correct implementation (assuming our hash ref has a type attribute set), with all of its attributes set to the correct values. That value is then returned from the block (this could be made more explicit with a return call, but that is optional; perl defaults the return value of a block to the value of its last expression).
The map function then returns a list of all the created objects, which we capture in an anonymous array ref (the [] square brackets), i.e., an ArrayRef of Foo objects, satisfying the Moose requirement of ArrayRef[Foo].
Usually, I tend to put my coercions in a special-purpose package. Although it is not strictly required by Moose, I find that it is useful to do this, because Moose does not allow a coercion to be declared if a coercion for the same type has already been declared in a different package. And while it is theoretically possible to make sure you only ever declare a coercion once in your entire codebase, I find that this is much easier to keep track of if you put all your coercions in one specific package.
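For illustration, a class that wants to use these coercions would then look something like this (the class and attribute names here are made up, not part of the actual DSL):
package MyThing;
use v5.40;
use Moose;
use MyCoercions;  # loads Foo and declares the ArrayOfFoo subtype plus its coercion
has 'foos' => (
    is => 'ro',
    isa => 'ArrayOfFoo',
    coerce => 1,    # opt in to the coercion declared in MyCoercions
    default => sub { [] },
);
1;
Since Moose type constraints live in a global registry, simply loading the coercion package once is enough for every class that wants to use the ArrayOfFoo type.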
Okay, now you understand Moose object coercion! On to...
Dynamic module loading
Perl allows loading modules at runtime. In the simplest case, you just use require inside a stringy eval:
my $module = "Foo";
eval "require $module";
This loads "Foo" at runtime. Obviously, the $module string could be a computed value, it does not have to be hardcoded.
There are some obvious downsides to doing things this way, mostly in the fact that a computed value can basically be anything, and so without proper checks this can quickly become an arbitrary code execution vulnerability. As such, there are a number of distributions on CPAN to help you with the low-level stuff of figuring out what the possible modules are, and how to load them.
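Module::Pluggable (used below) takes care of the discovery part; but as a minimal hand-rolled illustration of the kind of check you would want around a stringy eval, you could at least refuse anything that does not look like a plain module name (the load_module name and the regex are just this sketch's, not an API of any module):
use v5.40;
sub load_module {
    my $module = shift;
    # Only accept identifiers separated by '::', so that nothing like
    # "Foo; do_something_evil()" can end up inside the eval below.
    die "refusing to load suspicious module name '$module'"
        unless $module =~ /\A\w+(?:::\w+)*\z/;
    eval "require $module; 1" or die "could not load $module: $@";
    return $module;
}
my $class = load_module("Foo");
my $foo = $class->new(attribute => "foo");
$foo->say_something;
This assumes the Foo class from the earlier examples is somewhere in perl's library path.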
For the purposes of my script, I used Module::Pluggable. Its API is fairly simple and straightforward:
package Foo;
use v5.40;
use Moose;
use Module::Pluggable require => 1;
has 'attribute' => (
    is => 'ro',
    isa => 'Str',
);
has 'type' => (
    is => 'ro',
    isa => 'Str',
    required => 1,
);
sub handles_type {
    return 0;
}
sub create {
    my $class = shift;
    my %data = @_;
    foreach my $impl($class->plugins) {
        if($impl->can("handles_type") && $impl->handles_type($data{type})) {
            return $impl->new(%data);
        }
    }
    die "could not find a plugin for type " . $data{type};
}
sub say_something {
    my $self = shift;
    say "Hello there, I am a " . $self->type;
}
The new concept here is the plugins class method, which is added by Module::Pluggable, and which searches perl's library paths for all modules that are in our namespace. The namespace is configurable, but by default it is the name of our module; so in the above example, if there were a package Foo::Bar which has a subroutine handles_type that returns a truthy value when passed the value of the type key in the hash that is given to the create subroutine, then the create subroutine creates a new object of that package, with the passed key/value pairs used as attribute initializers.
Let's implement a Foo::Bar
package:
package Foo::Bar;
use v5.40;
use Moose;
extends 'Foo';
has 'type' => (
    is => 'ro',
    isa => 'Str',
    required => 1,
);
has 'serves_drinks' => (
    is => 'ro',
    isa => 'Bool',
    default => 0,
);
sub handles_type {
    my $class = shift;
    my $type = shift;
    return $type eq "bar";
}
sub say_something {
    my $self = shift;
    $self->SUPER::say_something;
    say "I serve drinks!" if $self->serves_drinks;
}
We can now indirectly use the Foo::Bar
package in our script:
use v5.40;
use Foo;
my $obj = Foo->create(type => "bar", serves_drinks => 1);
$obj->say_something;
output:
Hello there, I am a bar
I serve drinks!
Okay, now you understand all the bits and pieces that are needed to understand how I created the DSL engine. On to...
Putting it all together
We're actually quite close already. The create
factory method in the
last version of our Foo
package allows us to decide at run time which
module to instantiate an object of, and to load that module at run time.
We can use coercion and list flattening to turn a reference to a hash
into an object of the correct type.
We haven't looked yet at how to turn a JSON data structure into a hash, but that bit is actually ridiculously trivial:
use JSON::MaybeXS;
my $data = decode_json($json_string);
Tada, now $data is a reference to a deserialized version of the JSON string: if the JSON string contained an object, $data is a hashref; if the JSON string contained an array, $data is an arrayref, etc.
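If you want to be a bit defensive about what the JSON actually contained, a small illustrative check (not something the DSL below strictly needs) could look like this:
use v5.40;
use JSON::MaybeXS;
my $json_string = '{"description": "do stuff", "actions": []}';
my $data = decode_json($json_string);
if (ref $data eq 'HASH') {
    say "top-level JSON object with keys: " . join(", ", sort keys %$data);
} elsif (ref $data eq 'ARRAY') {
    say "top-level JSON array with " . scalar(@$data) . " elements";
} else {
    die "unexpected top-level JSON value";
}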
So, in other words, to create an extensible JSON-based DSL that is implemented by Moose objects, all we need to do is create a system that
- takes hash refs to set arguments,
- has factory methods to create objects, which
  - use Module::Pluggable to find the available object classes, and
  - use the type attribute to figure out which object class to use to create the object,
- uses coercion to convert hash refs into objects using these factory methods.
In practice, we could have a JSON file with the following structure:
{
    "description": "do stuff",
    "actions": [
        {
            "type": "bar",
            "serves_drinks": true
        },
        {
            "type": "bar",
            "serves_drinks": false
        }
    ]
}
... and then we could have a Moose object definition like this:
package MyDSL;
use v5.40;
use Moose;
use MyCoercions;
has "description" => (
    is => 'ro',
    isa => 'Str',
);
has 'actions' => (
    is => 'ro',
    isa => 'ArrayOfFoo',
    coerce => 1,
    required => 1,
);
sub say_something {
    my $self = shift;
    say "Hello there, I am described as " . $self->description . " and I am performing my actions: ";
    foreach my $action(@{$self->actions}) {
        $action->say_something;
    }
}
Now, we can write a script that loads this JSON file and creates a new object using the flattened arguments:
use v5.40;
use MyDSL;
use JSON::MaybeXS;
my $input_file_name = shift;
my $args = do {
    local $/ = undef;
    open my $input_fh, "<", $input_file_name or die "could not open file";
    <$input_fh>;
};
$args = decode_json($args);
my $dsl = MyDSL->new(%$args);
$dsl->say_something;
Output:
Hello there, I am described as do stuff and I am performing my actions:
Hello there, I am a bar
I serve drinks!
Hello there, I am a bar
In some more detail, this will:
- Read the JSON file and deserialize it;
- Pass the object keys in the JSON file as arguments to a constructor of the MyDSL class;
- The MyDSL class then uses those arguments to set its attributes, using Moose coercion to convert the "actions" array of hashes into an array of Foo::Bar objects;
- Perform the say_something method on the MyDSL object.
Once this is written, extending the scheme to also support a "quux" type
simply requires writing a Foo::Quux
class, making sure it has a method
handles_type
that returns a truthy value when called with quux
as
the argument, and installing it into the perl library path. This is
rather easy to do.
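As a sketch (this is not a class that exists in the actual script, just an illustration of the pattern), such a Foo::Quux plugin could look like this:
package Foo::Quux;
use v5.40;
use Moose;
extends 'Foo';
# Tell the Foo->create factory that we handle the "quux" type.
sub handles_type {
    my $class = shift;
    my $type = shift;
    return $type eq "quux";
}
sub say_something {
    my $self = shift;
    $self->SUPER::say_something;
    say "...and I am feeling quite quuxy today.";
}
1;
Drop that file into the Foo/ directory somewhere in perl's library path, and Foo->create(type => "quux") will find and instantiate it without any change to the existing code.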
It can even be extended deeper, too; if the quux
type requires a list
of arguments rather than just a single argument, it could itself also
have an array attribute with relevant coercions. These coercions could
then be used to convert the list of arguments into an array of objects
of the correct type, using the same schema as above.
The actual DSL is of course somewhat more complex, and also actually does something useful, in contrast to the DSL that we define here which just says things.
Creating an object that actually performs some action when required is left as an exercise to the reader.
The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.
I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.
I'd been off and on considering to work on the kernel driver so that I could implement these new features, but I never really got anywhere.
A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too, and when I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to vary the decision on whether a device is rotational depending on whether the NBD server signals, through the flag that exists for that very purpose, whether the device is rotational.
To which he replied "Can you send a patch".
That got me down the rabbit hole, and now, for the first time in the 20+ years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.
So, what do these things do?
The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.
It's unlikely that this will be of much benefit in most cases (most nbd-server installations export a file on a filesystem, in which case the elevator algorithm is already applied on the server side and it doesn't matter whether the exported device has the rotational flag set), but it's there in case you wish to use it.
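For completeness, the relevant part of an nbd-server config file could then look something like the following sketch; the export name and path are invented here, and only the rotational line is the new bit:
[generic]
# ... global options ...

[myexport]
exportname = /srv/nbd/myexport.img
rotational = true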
The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.
The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that it expects length values in bytes, whereas the kernel expresses them in blocks. It took me a few tries to get that conversion right -- and then I also fixed up the handling of discard messages, which required the same conversion.
Getting the Belgian eID to work on Linux systems should be fairly easy, although some people do struggle with it.
For that reason, there is a lot of third-party documentation out there in the form of blog posts, wiki pages, and other kinds of things. Unfortunately, some of this documentation is simply wrong. Much of it was written by people who played around with things until it kind of worked; sometimes that leads to a situation where something that used to work in the past (but was never really necessary) has since stopped working, yet it is still copied to a number of locations as though it were the gospel.
And then people follow these instructions and now things don't work anymore.
One of these revolves around OpenSC.
OpenSC is an open source smartcard library that has support for a pretty large number of smartcards, amongst which the Belgian eID. It provides a PKCS#11 module as well as a number of supporting tools.
For those not in the know, PKCS#11 is a standardized C API for offloading cryptographic operations. It is an API that can be used when talking to a hardware cryptographic module, in order to make that module perform some actions, and it is especially popular in the open source world, with support in NSS, amongst others. NSS is written and maintained by Mozilla, and is a low-level cryptographic library that is used by Firefox (on all platforms it supports) as well as by Google Chrome and other browsers based on it (but only on Linux, and, as I understand it, only for talking to smartcards; their BoringSSL library is used for other things).
The official eID software that we ship through eid.belgium.be, also known as "BeID", provides a PKCS#11 module for the Belgian eID, as well as a number of support tools to make interacting with the card easier, such as the "eID viewer", which provides the ability to read data from the card, and validate their signatures. While the very first public version of this eID PKCS#11 module was originally based on OpenSC, it has since been reimplemented as a PKCS#11 module in its own right, with no lineage to OpenSC whatsoever anymore.
About five years ago, the Belgian eID card was renewed. At the time, a new physical appearance was the most obvious difference from the old card, but there were also some technical, on-chip, differences that are not so apparent. The most important one here, although it is not the only one, is the fact that newer eID cards now use NIST P-384 elliptic curve-based private keys, rather than the RSA-based ones that were used in the past. This change required some changes to any PKCS#11 module that supports the eID; both the BeID one, as well as the OpenSC card-belpic driver that is written in support of the Belgian eID.
Obviously, the required changes were implemented for the BeID module; however, the OpenSC card-belpic driver was not updated. While I did do some preliminary work on the required changes, I was unable to get it to work, and eventually other things took up my time so I never finished the implementation. If someone would like to finish the work that I started, the preliminary patch that I wrote could be a good start -- but like I said, it doesn't yet work. Also, you'll probably be interested in the official documentation of the eID card.
Unfortunately, in the mean time someone added the Applet 1.8 ATR to the card-belpic.c file, without also implementing the driver changes that are required for the PKCS#11 module to actually support those newer eID cards. The result of this is that if you have the OpenSC PKCS#11 module registered in NSS for either Firefox or any Chromium-based browser, and it gets picked up before the BeID PKCS#11 module, then NSS will stop looking and pass all crypto operations to the OpenSC PKCS#11 module rather than to the official eID PKCS#11 module, and things will not work at all, causing a lot of confusion.
I have therefore taken the following two steps:
- The official eID packages now conflict with the OpenSC PKCS#11 module. Specifically only the PKCS#11 module, not the rest of OpenSC, so you can theoretically still use its tools. This means that once we release this new version of the eID software, when you do an upgrade and you have OpenSC installed, it will remove the PKCS#11 module and anything that depends on it. This is normal and expected.
- I have filed a pull request against OpenSC that removes the Applet 1.8 ATR from the driver, so that OpenSC will stop claiming that it supports the 1.8 applet.
When the pull request is accepted, we will update the official eID software to make the conflict versioned, so that as soon as it works again you will again be able to install the OpenSC and BeID packages at the same time.
In the mean time, if you have the OpenSC PKCS#11 module installed on your system, and your eID authentication does not work, try removing it.
A while ago, I saw Stefano's portable monitor, and thought it was very useful. Personally, I rent a desk at an office space where I have a 27" Dell monitor; but I do sometimes use my laptop away from that desk, and then I do sometimes miss the external monitor.
So a few weeks before DebConf, I bought one myself. The one I got is about a mid-range model; there are models that are less than half the price of the one that I bought, and there are models that are more than double its price, too. ASUS has a very wide range of these monitors; the cheapest model that I could find locally is a 720p monitor that only does USB-C and requires power from the connected device, which presumably, if I were to connect it to my laptop without external power, would halve its battery life. More expensive models have features such as wifi connectivity and miracast support, builtin batteries, more connection options, and touchscreen fanciness.
While I think some of these features are not worth the money, I do think that a builtin battery has its uses, and that I would want a decent resolution, so I got a FullHD model with builtin battery.
The device comes with a number of useful accessories: a USB-C to USB-C cable for the USB-C connectivity as well as to charge the battery; an HDMI-to-microHDMI cable for HDMI connectivity; a magnetic sleeve that doubles as a back stand; a beefy USB-A charger and USB-A-to-USB-C converter (yes, I know); and a... pen.
No, really, a pen. You can write with it. Yes, on paper. No, not a stylus. It's really a pen.
Sigh, OK. This one:
OK, believe me now?
Good.
Don't worry, I was as confused about this as you just were when I first found that pen. Why would anyone do that, I thought. So I read the manual. Not something I usually do with new hardware, but here you go.
It turns out that the pen doubles as a kickstand. If you look closely at the picture of the laptop and the monitor above, you may see a little hole at the bottom right of the monitor, just to the right of the power button/LED. The pen fits right there.
Now I don't know what the exact thought process was here, but I imagine it went something like this:
- ASUS wants to make money through selling monitors, but they don't want to spend too much money making them.
- A kickstand is expensive.
- So they choose not to make one, and add a little hole instead where you can put any little stick and make that function as a kickstand.
- They explain in the manual that you can use a pen with the hole as a kickstand. Problem solved, and money saved.
- Some paper pusher up the chain decides that if you mention a pen in the manual, you can't not ship a pen.
- Or perhaps some lawyer tells them that this is illegal to do in some jurisdictions.
- Or perhaps some large customer with a lot of clout is very annoying.
- So in a meeting, it is decided that the monitor will have a pen going along with it.
- So someone in ASUS then goes through the trouble of either designing and manufacturing a whole set of pens that use the same color scheme as the monitor itself, or just sourcing them from somewhere; and those pens are then branded (cheaply) and shipped with the monitors.
It's an interesting concept, especially given the fact that the magnetic sleeve works very well as a stand. But hey.
Anyway, the monitor is very nice; the battery lives longer than the battery of my laptop usually does, so that's good, and it allows me to have a dual-monitor setup when I'm on the road.
And when I'm at the office? Well, now I have a triple-monitor setup. That works well, too.
I've been maintaining a number of Perl software packages recently. There's SReview, my video review and transcoding system of which I split off Media::Convert a while back; and as of about a year ago, I've also added PtLink, an RSS aggregator (with future plans for more than just that).
All these come with extensive test suites which can help me ensure that things continue to work properly when I play with things; and all of these are hosted on salsa.debian.org, Debian's gitlab instance. Since we're there anyway, I configured GitLab CI/CD to run a full test suite of all the software, so that I can't forget, and also so that I know sooner rather than later when things start breaking.
GitLab has extensive support for various test-related reports, and while it took a while to be able to enable all of them, I'm happy to report that today, my perl test suites generate all three possible reports. They are:
- The coverage regex, which captures the total reported coverage for all modules of the software; it will show the test coverage on the right-hand side of the job page (as in this example), and it will show what the delta in that number is in merge request summaries (as in this example).
- The JUnit report, which tells GitLab in detail which tests were run, what their result was, and how long each test took (as in this example).
- The cobertura report, which tells GitLab which lines in the software were run by the test suite; it will show coverage of affected lines in merge requests, but nothing more. Unfortunately, I can't show an example here, as the information seems to be no longer available once the merge request has been merged.
Additionally, I also store the native perl Devel::Cover report as job artifacts, as they show some information that GitLab does not.
It's important to recognize that not all data is useful. For instance, the JUnit report allows for a test name and for details of the test. However, the module that generates the JUnit report from TAP test suites does not make a distinction here; both the test name and the test details are reported as the same. Additionally, the time a test took is measured as the time between the end of the previous test and the end of the current one; there is no "start" marker in the TAP protocol.
That being said, it's still useful to see all the available information in GitLab. And it's not even all that hard to do:
test:
  stage: test
  image: perl:latest
  coverage: '/^Total.* (\d+.\d+)$/'
  before_script:
    - cpanm ExtUtils::Depends Devel::Cover TAP::Harness::JUnit Devel::Cover::Report::Cobertura
    - cpanm --notest --installdeps .
    - perl Makefile.PL
  script:
    - cover -delete
    - HARNESS_PERL_SWITCHES='-MDevel::Cover' prove -v -l -s --harness TAP::Harness::JUnit
    - cover
    - cover -report cobertura
  artifacts:
    paths:
      - cover_db
    reports:
      junit: junit_output.xml
      coverage_report:
        path: cover_db/cobertura.xml
        coverage_format: cobertura
Let's expand on that a bit.
The first three lines should be clear for anyone who's used GitLab CI/CD in the past. We create a job called test; we start it in the test stage, and we run it in the perl:latest docker image. Nothing spectacular here.
The coverage line contains a regular expression. This is applied by GitLab to the output of the job; if it matches, then the first bracket match is extracted, and whatever that contains is assumed to be the code coverage percentage for the code; it will be reported as such in the GitLab UI for the job that was run, and graphs may be drawn to show how the coverage changes over time. Additionally, merge requests will show the delta in the code coverage, which may help in deciding whether to accept a merge request. This regular expression matches a line that the cover program writes to standard output.
The before_script section installs various perl modules we'll need later on. First, we install ExtUtils::Depends. My code uses ExtUtils::MakeMaker, which ExtUtils::Depends depends on (no pun intended); obviously, if your perl code doesn't use that, then you don't need to install it. The next three modules -- Devel::Cover, TAP::Harness::JUnit and Devel::Cover::Report::Cobertura -- are necessary for the reports, and you should include them if you want to copy what I'm doing.
Next, we install declared dependencies, which is probably a good idea
for you as well, and then we run perl Makefile.PL
, which will generate
the Makefile. If you don't use ExtUtils::MakeMaker, update that part to
do what your build system uses. That should be fairly straightforward.
You'll notice that we don't actually use the Makefile. This is because
we only want to run the test suite, which in our case (since these are
PurePerl modules) doesn't require us to build the software first. One
might consider that this makes the call of perl Makefile.PL
useless,
but I think it's a useful test regardless; if that fails, then obviously
we did something wrong and shouldn't even try to go further.
The actual tests are run inside a script
snippet, as is usual for
GitLab. However we do a bit more than you would normally expect; this is
required for the reports that we want to generate. Let's unpack what we
do there:
cover -delete
This deletes any coverage database that might exist (e.g., due to caching or some such). We don't actually expect any coverage database, but it doesn't hurt.
HARNESS_PERL_SWITCHES='-MDevel::Cover'
This tells the TAP harness that we want it to load the Devel::Cover
addon, which can generate code coverage statistics. It stores that in
the cover_db
directory, and allows you to generate all kinds of
reports on the code coverage later (but we don't do that here, yet).
prove -v -l -s
Runs the actual test suite, with verbose output (-v), shuffling (i.e., randomizing) the test suite (-s), and adding the lib directory to perl's include path (-l). This works for us, again, because we don't actually need to compile anything; if you do, then -b (for blib) may be required.
ExtUtils::MakeMaker creates a test target in its Makefile, and usually this is how you invoke the test suite. However, it's not the only way to do so, and indeed if you want to generate a JUnit XML report then you can't use that target. Instead, in that case, you need to use prove, so that you can tell it to load the TAP::Harness::JUnit module by way of the --harness option, which will then generate the JUnit XML report.
By default, the JUnit XML report is generated in a file junit_output.xml. It's possible to customize the filename for this report, but GitLab doesn't care and neither do I, so I don't. Uploading the JUnit XML report tells GitLab which tests were run and what their results were.
Finally, we invoke the cover script twice to generate two coverage reports: once to generate the default report (which produces HTML files with detailed information on all the code that was triggered by your test suite), and once with the -report cobertura parameter, which generates the cobertura XML format.
Once we've generated all our reports, we then need to upload them to
GitLab in the right way. The native perl report, which is in the
cover_db
directory, is uploaded as a regular job artifact, which we
can then look at through a web browser, and the two XML reports are
uploaded in the correct way for their respective formats.
All in all, I find that doing this makes it easier to understand how my code is tested, and why things go wrong when they do.
The DebConf video team has been sprinting in preparation for DebConf 23 which will happen in Kochi, India, in September of this year.
Present were Nicolas "olasd" Dandrimont, Stefano "tumbleweed" Rivera, and yours truly. Additionally, Louis-Philippe "pollo" Véronneau and Carl "CarlFK" Karsten joined the sprint remotely from across the pond.
Thank you to the DPL for agreeing to fund flights, food, and accommodation for the team members. We would also like to extend a special thanks to the Association April for hosting our sprint at their offices.
We made a lot of progress:
- Now that Debian Bookworm has been released, we updated our ansible repository to work with Debian Bookworm. This encountered some issues, but nothing earth-shattering, and almost all of them are handled. The one thing that is still outstanding is that jibri requires OpenJDK 11, which is no longer in bookworm; a solution for that will need to be found in the longer term, but as jibri is only needed for online conferences, it is not quite as urgent (Stefano, Louis-Philippe).
- In past years, we used open "opsis" hardware to do screen grabbing. While these work, upstream development has stalled, and their intended functionality is also somewhat more limited than we would like. As such, we experimented with a USB-based HDMI capture device, and after playing with it for some time, decided that it is a good option and that we would like to switch to it. Support for the specific capture device that we played with has now also been added to all the relevant places. (Stefano, Carl)
- Another open tool that we have been using is voctomix, a software video mixer. Its upstream development has also stalled somewhat. While we managed to make it work correctly on Bookworm, we decided that to ensure long-term viability for the team, it would be preferable if we had an alternative. As such, we quickly investigated Nageru, Sesse's software video mixer, and decided that it can do everything we need (and, probably, more). As such, we worked on implementing a user interface theme that would work with our specific requirements. Work on this is still ongoing, and we may decide that we are not ready yet for the switch by the time DebConf23 comes along, but we do believe that the switch is at least feasible. While working on the theme, we found a bug which Sesse quickly fixed for us after a short amount of remote debugging, so, thanks for that! (Stefano, Nicolas, Sesse)
- Our current streaming architecture uses HLS, which requires MPEG-4-based codecs. While fully functional, MPEG-4 is not the most modern of codecs anymore, not to mention the fact that it is somewhat patent-encumbered (even though some of these patents are expired by now). As such, we investigated switching to the AV1 codec for live streaming. Our ansible repository has been updated to support live streaming using that codec; the post-event transcoding part will follow soon enough. Special thanks, again, to Sesse, for pointing out a few months ago on Planet Debian that this is, in fact, possible to do. (Wouter)
- Apart from these big-ticket items, we also worked on various small maintenance things: upgrading, fixing, and reinstalling hosts and services, filing budget requests, and requesting role emails. (all of the team, really).
It is now Sunday the 23rd at 14:15, and while the sprint is coming to an end, we haven't quite finished yet, so some more progress can still be made. Let's see what happens by tonight.
All in all, though, we believe that the progress we made will make the DebConf Videoteam's work a bit easier in some areas, and will make things work better in the future.
See you in Kochi!
Since before I got involved in the eID back in 2014, we have provided official packages of the eID for Red Hat Enterprise Linux. Since RHEL itself requires a license, we did this, first, by using buildbot and mock on a Fedora VM to set up a CentOS chroot in which to build the RPM package. Later this was migrated to using GitLab CI and to using docker rather than VMs, in an effort to save some resources. Even later still, when Red Hat made CentOS no longer be a downstream of RHEL, we migrated from building in a CentOS chroot to building in a Rocky chroot, so that we could continue providing RHEL-compatible packages. Now, as it seems that Red Hat is determined to make that impossible too, I investigated switching to actually building inside a RHEL chroot rather than a derivative one. Let's just say that might be a challenge...
[root@b09b7eb7821d ~]# mock --dnf --isolation=simple --verbose -r rhel-9-x86_64 --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm --resultdir /root --define "revision v5.1.11"
ERROR: /etc/pki/entitlement is not a directory is subscription-manager installed?
Okay, so let's fix that.
[root@b09b7eb7821d ~]# dnf install -y subscription-manager
(...)
Complete!
[root@b09b7eb7821d ~]# mock --dnf --isolation=simple --verbose -r rhel-9-x86_64 --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm --resultdir /root --define "revision v5.1.11"
ERROR: No key found in /etc/pki/entitlement directory. It means this machine is not subscribed. Please use
1. subscription-manager register
2. subscription-manager list --all --available (available pool IDs)
3. subscription-manager attach --pool <POOL_ID>
If you don't have Red Hat subscription yet, consider getting subscription:
https://access.redhat.com/solutions/253273
You can have a free developer subscription:
https://developers.redhat.com/faq/
Okay... let's fix that too, then.
[root@b09b7eb7821d ~]# subscription-manager register
subscription-manager is disabled when running inside a container. Please refer to your host system for subscription management.
Wut.
[root@b09b7eb7821d ~]# exit
wouter@pc220518:~$ apt-cache search subscription-manager
wouter@pc220518:~$
As I thought, yes.
Having to reinstall the docker host machine with Fedora just so I can build Red Hat chroots seems like a somewhat excessive requirement, so I don't think we'll be doing that any time soon.
We'll see what the future brings, I guess.
As I blogged before, I've been working on a Planet Venus replacement. This is necessary, because Planet Venus, unfortunately, has not been maintained for a long time, and is a Python 2 (only) application which has never been updated to Python 3.
Python not being my language of choice, and my having plans to do far more than just the "render RSS streams" functionality that Planet Venus does, meant that I preferred to write "something else" (in Perl) rather than updating Planet Venus to modern Python.
Planet Grep has been running PtLink for over a year now, and my plan had been to update the code so that Planet Debian could run it too, but that has been taking a bit longer.
This month, I have finally been able to work on this, however. This screenshot shows two versions of Planet Debian:
The rendering on the left is by Planet Venus, the one on the right is by PtLink.
It's not quite ready yet, but getting there.
Stay tuned.
The Debian Videoteam has been sprinting in Cape Town, South Africa -- mostly because with Stefano here for a few months, four of us (Jonathan, Kyle, Stefano, and myself) actually are in the country on a regular basis. In addition to that, two more members of the team (Nicolas and Louis-Philippe) are joining the sprint remotely (from Paris and Montreal).
(Kyle and Stefano working on things, with me behind the camera and Jonathan busy elsewhere.)
We've made loads of progress! Some highlights:
- We did a lot of triaging of outstanding bugs and merge requests against our ansible repository. Stale issues were closed, merge requests have been merged (or closed when they weren't relevant anymore), and new issues that we found while working on them were fixed. We also improved our test coverage for some of our ansible roles, and modernized as well as improved the way our documentation is built. (Louis-Philippe, Stefano, Kyle, Wouter, Nicolas)
- Some work was done on SReview, our video review and transcode tool: I fixed up the metadata export code and did some other backend work, while Stefano worked a bit on the frontend, bringing it up to date to use bootstrap 4, and adding client-side filtering using vue. Future work on this will allow editing various things from the webinterface -- currently that requires issuing SQL commands directly. (Wouter and Stefano)
- Jonathan explored new features in OBS. We've been using OBS for our "loopy" setup since DebConf20, which is used for the slightly more interactive sponsor loop that is shown in between talks. The result is that we'll be able to simplify and improve that setup in future (mini)DebConf instances. (Jonathan)
- Kyle had a look at options for capturing hardware. We currently use Opsis boards, but they are not an ideal solution, and we are exploring alternatives. (Kyle)
- Some package uploads happened! libmedia-convert-perl will now (hopefully) migrate to testing; and if all goes well, a new version of SReview will be available in unstable soon.
The sprint isn't over yet (we're continuing until Sunday), but loads of things have already happened. Stay tuned!
A notorious ex-DD decided to post garbage on his site in which he links my name to the suicide of Frans Pop, and mentions that my GPG key is currently disabled in the Debian keyring, along with some manufactured screenshots of the Debian NM site that allegedly show I'm no longer a DD. I'm not going to link to the post -- he deserves to be ridiculed, not given attention.
Just to set the record straight, however:
Frans Pop was my friend. I never treated him with anything but respect. I do not know why he chose to take his own life, but I grieved for him for a long time. It saddens me that Mr. Notorious believes it a good idea to drag Frans' name through the mud like this, but then, one can hardly expect anything else from him by this point.
Although his post is mostly garbage, there is one bit of information that is correct, and that is that my GPG key is currently no longer in the Debian keyring. Nothing sinister is going on here, however; the simple fact of the matter is that I misplaced my OpenPGP key card, which means there is a (very very slight) chance that a malicious actor (like, perhaps, Mr. Notorious) would get access to my GPG key and abuse that to upload packages to Debian. Obviously we can't have that -- certainly not from him -- so for that reason, I asked the Debian keyring maintainers to please disable my key in the Debian keyring.
I've ordered new cards; as soon as they arrive I'll generate a new key and perform the necessary steps to get my new key into the Debian keyring again. Given that shipping key cards to South Africa takes a while, this has taken longer than I would have initially hoped, but I'm hoping at this point that by about halfway September this hurdle will have been taken, meaning, I will be able to exercise my rights as a Debian Developer again.
As for Mr. Notorious, one can only hope he will get the psychiatric help he very obviously needs, sooner rather than later, because right now he appears to be more like a goat yelling in the desert.
Ah well.