<h1>WEBlog -- Wouter's Eclectic Blog</h1>
<h2><a href="https://grep.be/blog//en/computer/hardware/New_toy:_ASUS_ZenScreen_Go_MB16AHP/">New toy: ASUS ZenScreen Go MB16AHP</a></h2>
<p><em>2023-10-16</em></p>
<p>A while ago, I saw <a href="https://people.debian.org/~stefanor">Stefano</a>'s
portable monitor, and thought it was very useful. Personally, I rent a
desk at an office space where I have a 27" Dell monitor; but I do
sometimes use my laptop away from that desk, and then I do sometimes
miss the external monitor.</p>
<p>So a few weeks before DebConf, I bought one myself. The <a href="https://www.asus.com/za/displays-desktops/monitors/zenscreen/zenscreen-go-mb16ahp/">one I
got</a>
is a mid-range model; there are models that are less than half the
price of the one that I bought, and there are models that are more than
double its price, too. ASUS has a very wide range of these monitors; the
cheapest model that I could find locally is a 720p monitor that only
does USB-C and requires power from the connected device, which would
presumably halve my laptop's battery life if I connected it without
external power. More expensive models have features such as
wifi connectivity and miracast support, builtin batteries, more
connection options, and touchscreen fanciness.</p>
<p>While I think some of these features are not worth the money, I do think
that a builtin battery has its uses, and that I would want a decent
resolution, so I got a FullHD model with builtin battery.</p>
<p><a href="https://www.flickr.com/photos/wouterverhelst/53263267486/in/datetaken-public/" title="20231016_215332"><img alt="20231016_215332" height="225" src="https://live.staticflickr.com/65535/53263267486_5a3e70707f.jpg" width="500" /></a></p>
<p>The device comes with a number of useful accessories: a USB-C to USB-C
cable for the USB-C connectivity as well as to charge the battery; an
HDMI-to-microHDMI cable for HDMI connectivity; a magnetic sleeve that
doubles as a back stand; a beefy USB-A charger and USB-A-to-USB-C
converter (yes, I know); and a... pen.</p>
<p>No, really, a pen. You can write with it. Yes, on paper. No, not a
stylus. It's really a pen.</p>
<p>Sigh, OK. This one:</p>
<p><a href="https://www.flickr.com/photos/wouterverhelst/53263755900/in/datetaken-public/" title="20231016_222024"><img alt="20231016_222024" height="225" src="https://live.staticflickr.com/65535/53263755900_e3ac7146fc.jpg" width="500" /></a></p>
<p>OK, believe me now?</p>
<p>Good.</p>
<p>Don't worry, I was as confused about this as you just were when I first
found that pen. Why would anyone do that, I thought. So I read the
manual. Not something I usually do with new hardware, but here you go.</p>
<p>It turns out that the pen doubles as a kickstand. If you look closely at
the picture of the laptop and the monitor above, you may see a little
hole at the bottom right of the monitor, just to the right of the power
button/LED. The pen fits right there.</p>
<p>Now I don't know what the exact thought process was here, but I imagine
it went something like this:</p>
<ul>
<li>ASUS wants to make money through selling monitors, but they don't want
to spend too much money making them.</li>
<li>A kickstand is expensive.</li>
<li>So they choose not to make one, and add a little hole instead where
you can put any little stick and make that function as a kickstand.</li>
<li>They explain in the manual that you can use a pen with the hole as a
kickstand. Problem solved, and money saved.</li>
<li><p>Some paper pusher up the chain decides that if you mention a pen in
the manual, you can't not ship a pen</p>
<ul>
<li>Or perhaps some lawyer tells them that this is illegal to do in
some jurisdictions</li>
<li>Or perhaps some large customer with a lot of clout is very
annoying</li>
</ul></li>
<li><p>So in a meeting, it is decided that the monitor will have a pen going
along with it</p></li>
<li>So someone in ASUS then goes through the trouble of either designing
and manufacturing a whole set of pens that use the same color scheme
as the monitor itself, or just sourcing them from somewhere; and those pens
are then branded (cheaply) and shipped with the monitors.</li>
</ul>
<p>It's an interesting concept, especially given the fact that the magnetic
sleeve works very well as a stand. But hey.</p>
<p>Anyway, the monitor is very nice; the battery lives longer than the
battery of my laptop usually does, so that's good, and it allows me to
have a dual-monitor setup when I'm on the road.</p>
<p>And when I'm at the office? Well, now I have a triple-monitor setup.
That works well, too.</p>
<h2><a href="https://grep.be/blog//en/computer/Perl_test_suites_in_GitLab/">Perl test suites in GitLab</a></h2>
<p><em>2023-08-16</em></p>
<p>I've been maintaining a number of Perl software packages recently.
There's <a href="https://salsa.debian.org/debconf-video-team/sreview">SReview</a>,
my video review and transcoding system of which I split off
<a href="https://salsa.debian.org/wouter/media-convert">Media::Convert</a> a while
back; and as of about a year ago, I've also added
<a href="https://salsa.debian.org/wouter/ptlink">PtLink</a>, an RSS aggregator
(with future plans for more than just that).</p>
<p>All these come with extensive test suites which can help me ensure that
things continue to work properly when I play with things; and all of
these are hosted on salsa.debian.org, Debian's gitlab instance. Since
we're there anyway, I configured GitLab CI/CD to run a full test suite
of all the software, so that I can't forget, and also so that I know
sooner rather than later when things start breaking.</p>
<p>GitLab has extensive support for various test-related reports, and while
it took a while to be able to enable all of them, I'm happy to report
that today, my perl test suites generate all three possible reports.
They are:</p>
<ul>
<li>The <code>coverage</code> regex, which captures the total reported coverage for
all modules of the software; it will show the test coverage on the
right-hand side of the job page (as in <a href="https://salsa.debian.org/wouter/ptlink/-/jobs/4412258">this
example</a>), and
it will show what the delta in that number is in merge request
summaries (as in <a href="https://salsa.debian.org/wouter/ptlink/-/merge_requests/20">this
example</a>)</li>
<li>The <em>JUnit</em> report, which tells GitLab in detail which tests were run,
what their result was, and how long the test took (as in <a href="https://salsa.debian.org/wouter/ptlink/-/pipelines/549663/test_report">this
example</a>)</li>
<li>The <em>cobertura</em> report, which tells GitLab which lines in the software
were run in the test suite; it will show coverage of affected lines
in merge requests, but nothing more. Unfortunately, I can't show an
example here, as the information seems to be no longer available once
the merge request has been merged.</li>
</ul>
<p>Additionally, I also store the native perl Devel::Cover report as job
artifacts, as they show some information that GitLab does not.</p>
<p>It's important to recognize that not all data is useful. For instance,
the JUnit report allows for a test name and for details of the test.
However, the module that generates the JUnit report from <abbr title="Test Anything Protocol">TAP</abbr> test suites does not make a
distinction here; both the test name and the test details are reported
as the same. Additionally, the time a test took is measured as the time
between the end of the previous test and the end of the current one;
there is no "start" marker in the TAP protocol.</p>
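<p>To illustrate, here is what a minimal TAP stream looks like; a harness
only ever sees the plan and result lines, so the best it can do for
timing is to take deltas between consecutive results:</p>

```
1..2
ok 1 - database connection works
ok 2 - query returns expected rows
```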
<p>That being said, it's still useful to see all the available information
in GitLab. And it's not even all that hard to do:</p>
<pre><code>test:
  stage: test
  image: perl:latest
  coverage: '/^Total.* (\d+.\d+)$/'
  before_script:
    - cpanm ExtUtils::Depends Devel::Cover TAP::Harness::JUnit Devel::Cover::Report::Cobertura
    - cpanm --notest --installdeps .
    - perl Makefile.PL
  script:
    - cover -delete
    - HARNESS_PERL_SWITCHES='-MDevel::Cover' prove -v -l -s --harness TAP::Harness::JUnit
    - cover
    - cover -report cobertura
  artifacts:
    paths:
      - cover_db
    reports:
      junit: junit_output.xml
      coverage_report:
        path: cover_db/cobertura.xml
        coverage_format: cobertura
</code></pre>
<p>Let's expand on that a bit.</p>
<p>The first three lines should be clear for anyone who's used GitLab CI/CD
in the past. We create a job called <code>test</code>; we start it in the <code>test</code>
stage, and we run it in the <code>perl:latest</code> docker image. Nothing
spectacular here.</p>
<p>The <code>coverage</code> line contains a regular expression. This is applied by
GitLab to the output of the job; if it matches, then the first bracket
match is extracted, and whatever that contains is assumed to contain the
code coverage percentage for the code; it will be reported as such in
the GitLab UI for the job that was run, and graphs may be drawn to show
how the coverage changes over time. Additionally, merge requests will
show the delta in the code coverage, which may help in deciding whether
to accept a merge request. This regular expression will match a line
that the <code>cover</code> program generates on standard output.</p>
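<p>As a quick sanity check of that regular expression (the summary line
below is made up; the exact columns that <code>cover</code> prints vary by
version), the extraction GitLab performs can be reproduced with <code>sed</code>:</p>

```shell
# Hypothetical "Total" summary line, as printed at the end of `cover`'s output:
line='Total                         95.2   88.9  100.0   92.4'
# GitLab keeps the first capture group of /^Total.* (\d+.\d+)$/;
# the equivalent extraction with sed:
coverage=$(printf '%s\n' "$line" | sed -nE 's/^Total.* ([0-9]+\.[0-9]+)$/\1/p')
echo "$coverage"
```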
<p>The <code>before_script</code> section installs various perl modules we'll need
later on. First, we install
<a href="https://metacpan.org/pod/ExtUtils::Depends">ExtUtils::Depends</a>. My code
uses
<a href="https://metacpan.org/pod/ExtUtils::MakeMaker">ExtUtils::MakeMaker</a>,
which ExtUtils::Depends depends on (no pun intended); obviously, if your
perl code doesn't use that, then you don't need to install it. The next
three modules -- <a href="https://metacpan.org/pod/Devel::Cover">Devel::Cover</a>,
<a href="https://metacpan.org/pod/TAP::Harness::JUnit">TAP::Harness::JUnit</a> and
<a href="https://metacpan.org/pod/Devel::Cover::Report::Cobertura">Devel::Cover::Report::Cobertura</a> --
are necessary for the reports, and you should include them if you want
to copy what I'm doing.</p>
<p>Next, we install declared dependencies, which is probably a good idea
for you as well, and then we run <code>perl Makefile.PL</code>, which will generate
the Makefile. If you don't use ExtUtils::MakeMaker, update that part to
do what your build system uses. That should be fairly straightforward.</p>
<p>You'll notice that we don't actually <em>use</em> the Makefile. This is because
we only want to run the test suite, which in our case (since these are
PurePerl modules) doesn't require us to build the software first. One
might consider that this makes the call of <code>perl Makefile.PL</code> useless,
but I think it's a useful test regardless; if that fails, then obviously
we did something wrong and shouldn't even try to go further.</p>
<p>The actual tests are run inside a <code>script</code> snippet, as is usual for
GitLab. However, we do a bit more than you would normally expect; this is
required for the reports that we want to generate. Let's unpack what we
do there:</p>
<pre><code>cover -delete
</code></pre>
<p>This deletes any coverage database that might exist (e.g., due to
caching or some such). We don't actually expect any coverage database,
but it doesn't hurt.</p>
<pre><code>HARNESS_PERL_SWITCHES='-MDevel::Cover'
</code></pre>
<p>This tells the TAP harness that we want it to load the Devel::Cover
addon, which can generate code coverage statistics. It stores that in
the <code>cover_db</code> directory, and allows you to generate all kinds of
reports on the code coverage later (but we don't do that here, yet).</p>
<pre><code>prove -v -l -s
</code></pre>
<p>Runs the actual test suite, with <code>v</code>erbose output, <code>s</code>huffling (aka,
randomizing) the test suite, and adding the <code>l</code>ib directory to perl's
include path. This works for us, again, because we don't actually need
to compile anything; if you do, then <code>-b</code> (for <code>blib</code>) may be required.</p>
<p>ExtUtils::MakeMaker creates a <code>test</code> target in its Makefile, and usually
this is how you invoke the test suite. However, it's not the only way to
do so; and indeed, if you want to generate a JUnit XML report, you
can't use it. Instead, in that case, you need to use <code>prove</code> directly, so
that you can tell it to load the TAP::Harness::JUnit module by way of
the <code>--harness</code> option, which will then generate the JUnit XML report.
By default, the JUnit XML report is generated in a file
<code>junit_output.xml</code>. It's possible to customize the filename for this
report, but GitLab doesn't care and neither do I, so I don't. Uploading
the JUnit XML report tells GitLab which tests were run and what their
results were.</p>
<p>Finally, we invoke the <code>cover</code> script twice to generate two coverage
reports; once we generate the default report (which generates HTML files
with detailed information on all the code that was triggered in your
test suite), and once with the <code>-report cobertura</code> parameter, which
generates the cobertura XML format.</p>
<p>Once we've generated all our reports, we then need to upload them to
GitLab in the right way. The native perl report, which is in the
<code>cover_db</code> directory, is uploaded as a regular job artifact, which we
can then look at through a web browser, and the two XML reports are
uploaded in the correct way for their respective formats.</p>
<p>All in all, I find that doing this makes it easier to understand how my
code is tested, and why things go wrong when they do.</p>
<h2><a href="https://grep.be/blog//en/computer/debian/DebConf_Videoteam_Sprint_in_Paris/">Debconf Videoteam sprint in Paris, France, 2023-07-20 - 2023-07-23</a></h2>
<p><em>2023-07-23</em></p>
<p>The DebConf video team has been
<a href="https://wiki.debian.org/Sprints/2023/DebConfVideoteam">sprinting</a> in
preparation for <a href="https://debconf23.debconf.org">DebConf 23</a> which will
happen in Kochi, India, in September of this year.</p>
<p><a href="https://www.flickr.com/photos/wouterverhelst/53065754365/" title="Video team sprint"><img alt="Video team sprint" height="576" src="https://live.staticflickr.com/65535/53065754365_e532d5405d_b.jpg" width="1024" /></a></p>
<p>Present were Nicolas "olasd" Dandrimont, Stefano "tumbleweed" Rivera,
and yours truly. Additionally, Louis-Philippe "pollo" Véronneau and Carl
"CarlFK" Karsten joined the sprint remotely from across the pond.</p>
<p>Thank you to the DPL for agreeing to fund flights, food, and
accommodation for the team members. We would also like to extend a
special thanks to the <a href="https://april.org/">Association April</a> for
hosting our sprint at their offices.</p>
<p>We made a lot of progress:</p>
<ul>
<li>Now that Debian Bookworm <a href="https://www.debian.org/News/2023/20230610">has been
released</a>, we updated our
<a href="https://salsa.debian.org/debconf-video-team/ansible">ansible
repository</a> to
work with Debian Bookworm. This encountered some issues, but nothing
earth-shattering, and almost all of them are handled. The one thing
that is still outstanding is that
<a href="https://github.com/jitsi/jibri">jibri</a> requires OpenJDK 11, which is
no longer in bookworm; a solution for that will need to be found in
the longer term, but as jibri is only needed for online conferences,
it is not quite as urgent (Stefano, Louis-Philippe).</li>
<li>In past years, we used open "opsis" hardware to do screen grabbing.
While these work, upstream development has stalled, and their
intended functionality is also somewhat more limited than we would
like. As such, we experimented with a USB-based HDMI capture device,
and after playing with it for some time, decided that it is a good
option and that we would like to switch to it. Support for the
specific capture device that we played with has now also been added to
all the relevant places. (Stefano, Carl)</li>
<li>Another open tool that we have been using is voctomix, a software
video mixer. Its upstream development has also stalled somewhat. While
we managed to make it work correctly on Bookworm, we decided that to
ensure long-term viability for the team, it would be preferable if we
had an alternative. As such, we quickly investigated
<a href="https://packages.debian.org/sid/nageru">Nageru</a>,
<a href="https://sesse.net/">Sesse's</a> software video mixer, and decided that
it can do everything we need (and, probably, more). As such, we worked on
implementing a <a href="https://nageru.sesse.net/doc/theme.html">user interface
theme</a> that would work with
our specific requirements. Work on this is still ongoing, and we may
decide that we are not ready yet for the switch by the time DebConf23
comes along, but we do believe that the switch is at least feasible.
While working on the theme, we found a bug which Sesse <a href="https://git.sesse.net/?p=movit;a=commitdiff;h=61799c1d01e79b7d203cf2c89798aa567a341aba">quickly
fixed for
us</a>
after a short amount of remote debugging, so, thanks for that!
(Stefano, Nicolas, Sesse)</li>
<li>Our current streaming architecture uses
<a href="https://en.wikipedia.org/wiki/HTTP_Live_Streaming">HLS</a>, which
requires MPEG-4-based codecs. While fully functional, MPEG-4 is not
the most modern of codecs anymore, not to mention the fact that it is
somewhat patent-encumbered (even though some of these patents have
expired by now). As such, we investigated switching to the
<a href="https://en.wikipedia.org/wiki/AV1">AV1</a> codec for live streaming. Our
ansible repository has been updated to support live streaming using
that codec; the post-event transcoding part will follow soon enough.
Special thanks, again, to Sesse, for pointing out a few months ago on
<a href="https://planet.debian.org">Planet Debian</a> that this is, in fact,
possible to do. (Wouter)</li>
<li>Apart from these big-ticket items, we also worked on various small
maintenance things: upgrading, fixing, and reinstalling hosts and
services, filing budget requests, and requesting role emails. (all of
the team, really).</li>
</ul>
<p>It is now Sunday the 23rd at 14:15, and while the sprint is coming to an
end, we haven't <em>quite</em> finished yet, so some more progress can still be
made. Let's see what happens by tonight.</p>
<p>All in all, though, we believe that the progress we made will make the
DebConf Videoteam's work a bit easier in some areas, and will make
things work better in the future.</p>
<p>See you in Kochi!</p>
<h2><a href="https://grep.be/blog//en/work/The_future_of_the_eID_on_RHEL/">The future of the eID on RHEL</a></h2>
<p><em>2023-06-27</em></p>
<p>Since before I got involved in the eID <a href="https://grep.be/blog/en/work/Supporting_the_eID">back in
2014</a>, we have
provided official packages of the eID for Red Hat Enterprise Linux.
Since RHEL itself requires a license, we did this, first, by using
<a href="https://www.buildbot.net">buildbot</a> and
<a href="https://fedoraproject.org/wiki/Using_Mock_to_test_package_builds">mock</a>
on a Fedora VM to set up a <a href="https://www.centos.org/">CentOS</a> chroot in
which to build the RPM package. Later this was migrated to using <a href="https://docs.gitlab.com/ee/ci/">GitLab
CI</a> and to using
<a href="https://www.docker.com/">docker</a> rather than VMs, in an effort to save
some resources. Even later still, when Red Hat made CentOS no longer be
a downstream of RHEL, we migrated from building in a CentOS chroot to
building in a <a href="https://rockylinux.org/">Rocky</a> chroot, so that we could
continue providing RHEL-compatible packages. Now, as it seems that Red
Hat is <a href="https://www.redhat.com/en/blog/red-hats-commitment-open-source-response-gitcentosorg-changes">determined to make that impossible
too</a>,
I investigated switching to actually building inside a RHEL chroot
rather than a derivative one. Let's just say that might be a
challenge...</p>
<pre><code>[root@b09b7eb7821d ~]# mock --dnf --isolation=simple --verbose -r rhel-9-x86_64 --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm --resultdir /root --define "revision v5.1.11"
ERROR: /etc/pki/entitlement is not a directory is subscription-manager installed?
</code></pre>
<p>Okay, so let's fix that.</p>
<pre><code>[root@b09b7eb7821d ~]# dnf install -y subscription-manager
</code></pre>
<p>(...)</p>
<pre><code>Complete!
[root@b09b7eb7821d ~]# mock --dnf --isolation=simple --verbose -r rhel-9-x86_64 --rebuild eid-mw-5.1.11-0.v5.1.11.fc38.src.rpm --resultdir /root --define "revision v5.1.11"
ERROR: No key found in /etc/pki/entitlement directory. It means this machine is not subscribed. Please use
1. subscription-manager register
2. subscription-manager list --all --available (available pool IDs)
3. subscription-manager attach --pool <POOL_ID>
If you don't have Red Hat subscription yet, consider getting subscription:
https://access.redhat.com/solutions/253273
You can have a free developer subscription:
https://developers.redhat.com/faq/
</code></pre>
<p>Okay... let's fix that too, then.</p>
<pre><code>[root@b09b7eb7821d ~]# subscription-manager register
subscription-manager is disabled when running inside a container. Please refer to your host system for subscription management.
</code></pre>
<p>Wut.</p>
<pre><code>[root@b09b7eb7821d ~]# exit
wouter@pc220518:~$ apt-cache search subscription-manager
wouter@pc220518:~$
</code></pre>
<p>As I thought, yes.</p>
<p>Having to reinstall the docker host machine with Fedora just so I can
build Red Hat chroots seems like a somewhat excessive requirement, so
I don't think we'll be doing that any time soon.</p>
<p>We'll see what the future brings, I guess.</p>
<h2><a href="https://grep.be/blog//en/computer/ptlink/Planet_Debian_rendered_with_PtLink/">Planet Debian rendered with PtLink</a></h2>
<p><em>2023-06-09</em></p>
<p>As I blogged
<a href="https://grep.be/blog/en/computer/Planet_Grep_now_running_PtLink/">before</a>,
I've been working on a <a href="https://intertwingly.net/code/venus/">Planet
Venus</a> replacement. This is
necessary, because Planet Venus, unfortunately, has not been maintained
for a long time, and is a Python 2 (only) application which has never
been updated to Python 3.</p>
<p>Python not being my language of choice, and my having plans to do far
more than just the "render RSS streams" functionality that Planet Venus
does, meant that I preferred to write "something else" (in Perl) rather
than updating Planet Venus to modern Python.</p>
<p>Planet Grep has been running PtLink for over a year now, and my plan had
been to update the code so that Planet Debian could run it too, but that
has been taking a bit longer.</p>
<p>This month, I have finally been able to work on this, however. This
screenshot shows two versions of Planet Debian:</p>
<p><a href="https://grep.be/blog//en/computer/ptlink/pd.png"><img class="img" height="290" src="https://grep.be/blog//en/computer/ptlink/Planet_Debian_rendered_with_PtLink/600x-pd.png" width="600" /></a></p>
<p>The rendering on the left is by Planet Venus, the one on the right is by
PtLink.</p>
<p>It's not <em>quite</em> ready yet, but getting there.</p>
<p>Stay tuned.</p>
<h2><a href="https://grep.be/blog//en/computer/debian/Day_3_of_the_Debian_Videoteam_Sprint_in_Cape_Town/">Day 3 of the Debian Videoteam Sprint in Cape Town</a></h2>
<p><em>2022-11-12</em></p>
<p>The Debian Videoteam has been
<a href="https://wiki.debian.org/Sprints/2022/DebConfVideoteam">sprinting</a> in
Cape Town, South Africa -- mostly because with Stefano here for a few
months, four of us (Jonathan, Kyle, Stefano, and myself) actually are in
the country on a regular basis. In addition to that, two more members of
the team (Nicolas and Louis-Philippe) are joining the sprint remotely
(from Paris and Montreal).</p>
<p><a href="https://www.flickr.com/photos/wouterverhelst/52493504512/in/datetaken-public/" title="Videoteam sprint"><img alt="Videoteam sprint" height="533" src="https://live.staticflickr.com/65535/52493504512_d9c667bb3c_c.jpg" width="800" /></a></p>
<p><em>(Kyle and Stefano working on things, with me behind the camera and
Jonathan busy elsewhere.)</em></p>
<p>We've made loads of
<a href="https://wiki.debian.org/Sprints/2022/DebConfVideoteam/Work">progress</a>!
Some highlights:</p>
<ul>
<li>We did a lot of triaging of outstanding bugs and merge requests
against our <a href="https://salsa.debian.org/debconf-video-team/ansible">ansible
repository</a>.
Stale issues were closed, merge requests have been merged (or closed
when they weren't relevant anymore), and new issues that we found
while working on them were fixed. We also improved our test coverage
for some of our ansible roles, and modernized as well as improved the
way our
<a href="https://debconf-video-team.pages.debian.net/docs/">documentation</a>
is built. (Louis-Philippe, Stefano, Kyle, Wouter, Nicolas)</li>
<li>Some work was done on SReview, our video review and transcode tool:
I fixed up the metadata export code and did some other backend work,
while Stefano worked a bit on the frontend, bringing it up to date to
use bootstrap 4, and adding client-side filtering using vue. Future
work on this will allow editing various things from the webinterface
-- currently that requires issuing SQL commands directly. (Wouter and
Stefano)</li>
<li>Jonathan explored new features in OBS. We've been using OBS for our
"loopy" setup since DebConf20, which is used for the slightly more
interactive sponsor loop that is shown in between talks. The result is
that we'll be able to simplify and improve that setup in future
(mini)DebConf instances. (Jonathan)</li>
<li>Kyle had a look at options for capturing hardware. We currently use
Opsis boards, but they are not an ideal solution, and we are exploring
alternatives. (Kyle)</li>
<li>Some package uploads happened! libmedia-convert-perl will now
(hopefully) migrate to testing; and if all goes well, a new version of
SReview will be available in unstable soon.</li>
</ul>
<p>The sprint isn't over yet (we're continuing until Sunday), but loads of
things have already happened. Stay tuned!</p>
<h2><a href="https://grep.be/blog//en/life/debian/Not_currently_uploading/">Not currently uploading</a></h2>
<p><em>2022-08-30</em></p>
<p>A <a href="https://www.debian.org/News/2021/20211117">notorious ex-DD</a> decided
to post garbage on his site in which he links my name to the <a href="https://www.debian.org/News/2010/20100831">suicide of
Frans Pop</a>, and mentions that
my GPG key is currently disabled in the Debian keyring, along with some
manufactured screenshots of the Debian NM site that allegedly show I'm
no longer a DD. I'm not going to link to the post -- he deserves to be
ridiculed, not given attention.</p>
<p>Just to set the record straight, however:</p>
<p>Frans Pop was my friend. I never treated him with anything but respect.
I do not know <em>why</em> he chose to take his own life, but I grieved for him
for a long time. It saddens me that Mr. Notorious believes it a good
idea to drag Frans' name through the mud like this, but then, one can
hardly expect anything else from him by this point.</p>
<p>Although his post is mostly garbage, there is <em>one</em> bit of information
that is correct, and that is that my GPG key is currently no longer in
the Debian keyring. Nothing sinister is going on here, however; the
simple fact of the matter is that I misplaced my <a href="https://www.floss-shop.de/en/security-privacy/smartcards/13/openpgp-smart-card-v3.4">OpenPGP key
card</a>,
which means there is a (very very slight) chance that a malicious actor
(like, perhaps, Mr. Notorious) would get access to my GPG key and abuse
that to upload packages to Debian. Obviously we can't have that --
certainly not from him -- so for that reason, I asked the Debian keyring
maintainers to please disable my key in the Debian keyring. </p>
<p>I've ordered new cards; as soon as they arrive I'll generate a new key
and perform the necessary steps to get my new key into the Debian
keyring again. Given that shipping key cards to South Africa takes a
while, this has taken longer than I would have initially hoped, but I'm
hoping at this point that by about halfway September this hurdle will
have been taken, meaning, I will be able to exercise my rights as a
Debian Developer again.</p>
<p>As for Mr. Notorious, one can only hope he will get the psychiatric help
he very obviously needs, sooner rather than later, because right now he
appears to be more like a goat yelling in the desert.</p>
<p>Ah well.</p>
<h2><a href="https://grep.be/blog//en/computer/play/Remote_notification/">Remote notification</a></h2>
<p><em>2022-08-22</em></p>
<p>Sometimes, it's useful to get a notification that a command has finished
doing something you were waiting for:</p>
<pre><code>make my-large-program && notify-send "compile finished" "success" || notify-send "compile finished" "failure"
</code></pre>
<p>This will send a notification message with the title "compile finished",
and a body of "success" or "failure" depending on whether the command
completed successfully, and allows you to minimize (or otherwise hide)
the terminal window while you do something else, which can be a very
useful thing to do.</p>
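<p>If you find yourself typing that pattern a lot, it can be wrapped in a
small shell function. A sketch: the notifier command is parameterized here
(defaulting to <code>notify-send</code>) purely so the function is easy to swap
out or test:</p>

```shell
# Report a command's exit status via a desktop notification.
# NOTIFY defaults to notify-send, but can be overridden.
NOTIFY="${NOTIFY:-notify-send}"

notify_when_done() {
  # Run the given command; report success or failure in the notification body.
  if "$@"; then
    "$NOTIFY" "finished: $1" "success"
  else
    "$NOTIFY" "finished: $1" "failure"
  fi
}
```

<p>After which <code>notify_when_done make my-large-program</code> does the same
as the one-liner above.</p>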
<p>It works great when you're running something on your own machine, but
what if you're running it remotely?</p>
<p>There might be something easy to do, but I whipped up a bit of Perl
instead:</p>
<pre><code>#!/usr/bin/perl -w
use strict;
use warnings;

use Glib::Object::Introspection;
Glib::Object::Introspection->setup(
    basename => "Notify",
    version => "0.7",
    package => "Gtk3::Notify",
);
use Mojolicious::Lite -signatures;

Gtk3::Notify->init();

get '/notify' => sub ($c) {
    my $msg = $c->param("message");
    if(!defined($msg)) {
        $msg = "message";
    }
    my $title = $c->param("title");
    if(!defined($title)) {
        $title = "title";
    }
    app->log->debug("Sending notification '$msg' with title '$title'");
    my $n = Gtk3::Notify::Notification->new($title, $msg, "");
    $n->show;
    $c->render(text => "OK");
};

app->start;
</code></pre>
<p>This requires the packages <code>libglib-object-introspection-perl</code>,
<code>gir1.2-notify-0.7</code>, and <code>libmojolicious-perl</code> to be installed, and can
then be started like so:</p>
<pre><code>./remote-notify daemon -l http://0.0.0.0:3000/
</code></pre>
<p>(assuming you did what I did and saved the above as "remote-notify")</p>
<p>Once you've done that, you can just curl a notification message to yourself:</p>
<pre><code>curl 'http://localhost:3000/notify?title=test&message=test+body'
</code></pre>
<p>Doing this via localhost is rather silly (much better to use notify-send
for that), but it becomes much more interesting when you send notifications
to your laptop from a remote system.</p>
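<p>When the title or message contains spaces or other special characters,
hand-encoding the query string gets tedious; a hypothetical wrapper around
the curl call above, using curl's <code>--data-urlencode</code>, takes care of
that (the <code>RNOTIFY_URL</code> variable is my own invention):</p>

```shell
# Send a notification to the remote-notify daemon, URL-encoding as needed.
rnotify() {
  curl -sG "${RNOTIFY_URL:-http://localhost:3000/notify}" \
    --data-urlencode "title=$1" \
    --data-urlencode "message=$2"
}
```

<p>So that <code>rnotify "compile finished" "all tests pass & artifacts built"</code>
just works.</p>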
<p>An obvious TODO would be to add in some form of security, but that's
left as an exercise to the reader...</p>
<h2><a href="https://grep.be/blog//en/computer/Upgrading_a_Windows_10_VM_to_Windows_11/">Upgrading a Windows 10 VM to Windows 11</a></h2>
<p><em>2022-08-12</em></p>
<p>I run Debian on my laptop (obviously); but occasionally, for $DAYJOB, I
have some work to do on Windows. In order to do so, I have had a Windows
10 VM in my <a href="https://libvirt.org">libvirt</a> configuration that I can use.</p>
<p>A while ago, Microsoft released Windows 11. I recently found out that all
the components for running Windows 11 inside a libvirt VM are available,
and so I set out to upgrade my VM from Windows 10 to Windows 11. This
wasn't as easy as I thought, so here's a bit of a writeup of all the
things I ran against, and how I fixed them.</p>
<p>Windows 11 has a number of hardware requirements that aren't necessary
for Windows 10. There are a number of them, but the most important three
are:</p>
<ul>
<li>Secure Boot is required (Windows 10 would still boot on a machine
without Secure Boot, although <em>buying</em> hardware without at least
support for that hasn't been possible for several years now)</li>
<li>A v2.0 TPM module (Windows 10 didn't need <em>any</em> TPM)</li>
<li>A modern enough processor.</li>
</ul>
<p>So let's see about all three.</p>
<h2 id="amodernenoughprocessor">A modern enough processor</h2>
<p>If your processor isn't modern enough to run Windows 11, then you can
probably forget about it (unless you want to use qemu JIT compilation --
I dunno, probably not going to work, and also not worth it if it were).
If it is, all you need is the "host-passthrough" setting in libvirt,
which I've been using for a long time now. Since my laptop is <a href="https://grep.be/blog/en/computer/play/Faster_tar">less than
two months
old</a>, that's not a
problem for me.</p>
<h2 id="atpm2.0module">A TPM 2.0 module</h2>
<p>My Windows 10 VM did not have a TPM configured, because it wasn't
needed. Luckily, a quick web search told me that enabling that is <a href="https://www.smoothnet.org/qemu-tpm/">not
hard</a>. All you need to do is:</p>
<ul>
<li>Install the <code>swtpm</code> and <code>swtpm-tools</code> packages</li>
<li><p>Add the TPM module, by adding the following XML snippet to your VM
configuration:</p>
<pre><code><devices>
<tpm model='tpm-tis'>
<backend type='emulator' version='2.0'/>
</tpm>
</devices>
</code></pre>
<p>Alternatively, if you prefer the graphical interface, click on the
"Add hardware" button in the VM properties, choose the TPM, set it
to Emulated, model TIS, and set its version to 2.0.</p></li>
</ul>
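<p>If you prefer the command line over both of those, something like the
following should do the same (I believe, untested; it assumes a reasonably
recent virt-install, and "win10" is a placeholder for your VM's name):</p>
<pre><code># add an emulated TPM 2.0 device to the "win10" VM using virt-install's virt-xml tool
virt-xml win10 --add-device --tpm model=tpm-tis,backend.type=emulator,backend.version=2.0
</code></pre>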
<p>You're done!</p>
<p>Well, with this part, anyway. Read on.</p>
<h2 id="secureboot">Secure boot</h2>
<p>Here is where it gets interesting.</p>
<p>My Windows 10 VM was old enough that it was configured for the older
<code>i440fx</code> chipset. This one is limited to PCI and IDE, unlike the more
modern <code>q35</code> chipset (which supports PCIe and SATA, and supports
neither IDE nor SATA in IDE mode).</p>
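<p>If you're not sure which chipset your own VM uses, a quick check (again
with "win10" standing in for your VM's name):</p>
<pre><code># prints something like machine='pc-i440fx-5.2' or machine='pc-q35-7.0'
virsh dumpxml win10 | grep -o "machine='[^']*'"
</code></pre>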
<p>There is a UEFI/Secure Boot-capable firmware for qemu (OVMF), but it
apparently requires the <code>q35</code> chipset.</p>
<p>Fun fact (which I found out the hard way): Windows stores, somewhere,
which controller its boot partition lives on. If you change the hard drive
controller from an IDE one to a SATA one, you will therefore get a BSOD at
startup. In order to fix</p>
that, you need a <a href="https://support.microsoft.com/en-us/windows/create-a-recovery-drive-abb4691b-5324-6d4a-8766-73fab304c246#WindowsVersion=Windows_10">recovery
drive</a>.
To create the virtual USB disk, go to the VM properties, click "Add
hardware", choose "Storage", choose the USB bus, and then under
"Advanced options", select the "Removable" option, so it shows up as a
USB stick in the VM. Note: this takes a while to do (took about an hour
on my system), and your virtual USB drive needs to be 16G or larger (I
used the libvirt default of 20G).</p>
<p>There is no way, using the buttons in the <code>virt-manager</code> GUI, to
convert the machine from <code>i440fx</code> to <code>q35</code>. However, that doesn't mean
it's not possible to do so. I found that the easiest way is to use the
direct XML editing capabilities in the <code>virt-manager</code> interface: if you
edit the XML in an external editor, you just get error messages telling you
to go and fix things yourself, whereas the <code>virt-manager</code> GUI will
actually fix some things itself (and produces helpful error messages for
the rest).</p>
<p>What I did was:</p>
<ul>
<li>Take backups of <em>everything</em>. No, really. If you fuck up, you'll have
to start from scratch. I'm not responsible if you do.</li>
<li>Go to the Edit->Preferences option in the VM manager, then on the
"General" tab, choose "Enable XML editing"</li>
<li>Open the Windows VM properties, and in the "Overview" section, go to
the "XML" tab.</li>
<li>Change the value of the <code>machine</code> attribute of the <code>domain.os.type</code>
element, so that it says <code>pc-q35-7.0</code>.</li>
<li>Search for the <code>domain.devices.controller</code> element that has <code>pci</code> in
its <code>type</code> attribute and <code>pci-root</code> in its <code>model</code> one, and set the
<code>model</code> attribute to <code>pcie-root</code> instead.</li>
<li>Find all <code>domain.devices.disk.target</code> elements, setting their
<code>dev=hdX</code> to <code>dev=sdX</code>, and <code>bus="ide"</code> to <code>bus="sata"</code></li>
<li>Find the USB controller (<code>domain.devices.controller</code> with
<code>type="usb"</code>), and set its <code>model</code> to <code>qemu-xhci</code>. You may also want to
add <code>ports="15"</code> if you didn't have that yet.</li>
<li><p>Perhaps also add a few PCIe root ports:</p>
<pre><code><controller type="pci" index="1" model="pcie-root-port"/>
<controller type="pci" index="2" model="pcie-root-port"/>
<controller type="pci" index="3" model="pcie-root-port"/>
</code></pre></li>
</ul>
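<p>For reference, after all of the above the relevant parts of the XML should
look roughly like this (indices, target names, and the qemu version in the
machine type will differ on your system; unrelated attributes omitted):</p>
<pre><code><os>
  <type arch='x86_64' machine='pc-q35-7.0'>hvm</type>
</os>
<devices>
  <controller type='pci' index='0' model='pcie-root'/>
  <controller type='pci' index='1' model='pcie-root-port'/>
  <controller type='usb' model='qemu-xhci' ports='15'/>
  <disk type='file' device='disk'>
    <target dev='sda' bus='sata'/>
  </disk>
</devices>
</code></pre>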
<p>I figured out most of this by starting the process for creating a new
VM, on the last page of the wizard that pops up selecting the "Modify
configuration before installation" option, going to the "XML" tab on the
"Overview" section of the new window that shows up, and then comparing
that against what my current VM had.</p>
<p>Also, it took me a while to get this right, so I might have forgotten
something. If <code>virt-manager</code> gives you an error when you hit the <code>Apply</code>
button, compare notes against the VM that you're in the process of
creating, and copy/paste things from there to the old VM to make the
errors go away. As long as you don't remove configuration that is
critical for things to start, this <em>shouldn't</em> break matters permanently
(but hey, use your backups if you do break -- you have backups, right?)</p>
<p>OK, cool, so now we have a Windows VM that is... unable to boot.
Remember what I said about Windows storing where the controller is?
Yeah, there you go. Boot from the virtual USB disk that you created
above, and select the "Fix the boot" option in the menu. That will fix
it.</p>
<p>Ha ha, only kidding. Of course it doesn't.</p>
<p>I honestly can't tell you <em>everything</em> that I fiddled with, but I
<em>think</em> the bit that eventually fixed it was where I chose "safe mode",
which caused the system to do a hiccup and a regular reboot, after which
suddenly everything was working again. Meh.</p>
<p>Don't throw the virtual USB disk away yet, you'll still need it.</p>
<p>Anyway, once you have it booting again, you will now have a machine that
theoretically <em>supports</em> Secure Boot, but you're still running off an
MBR partition. I found a
<a href="https://social.technet.microsoft.com/wiki/contents/articles/14286.converting-windows-bios-installation-to-uefi.aspx">procedure</a>
on how to convert things from MBR to GPT that was written almost 10
years ago, but surprisingly it still works, except for the bit where the
procedure suggests you use diskmgmt.msc (for one thing, that was
renamed; and for another, it can't touch the partition table of the
system disk either).</p>
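<p>As an aside: Windows 10 releases from 1703 onwards also ship a dedicated
tool for this conversion, <code>mbr2gpt.exe</code>. I didn't go that route, so I
can't vouch for it in this setup, but the basic invocation would be:</p>
<pre><code>REM run from an elevated command prompt; validate first, then convert
mbr2gpt /validate /allowFullOS
mbr2gpt /convert /allowFullOS
</code></pre>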
<p>The last step in that procedure says to <strong>restart your computer!</strong>,
which is fine, except at this point you obviously need to switch over to
the TianoCore firmware, otherwise you're trying to read a UEFI boot
configuration on a system that only supports MBR booting, which
obviously won't work. In order to do that, you need to add a <code>loader</code>
element to the <code>domain.os</code> element of your libvirt configuration:</p>
<pre><code><loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
</code></pre>
<p>When you do this, you'll note that <code>virt-manager</code> automatically adds an
<code>nvram</code> element. That's fine, let it.</p>
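<p>The resulting <code>domain.os</code> element should then look roughly like this
(the firmware paths are the Debian ones; the nvram path is whatever
<code>virt-manager</code> generates for you):</p>
<pre><code><os>
  <type arch='x86_64' machine='pc-q35-7.0'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE_4M.ms.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS_4M.ms.fd'>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
</os>
</code></pre>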
<p>I figured this out by looking at the <a href="https://wiki.debian.org/SecureBoot/VirtualMachine">documentation for enabling Secure
Boot in a VM</a> on the
Debian wiki, and using the same trick as for how to switch chipsets that
I explained above.</p>
<p>Okay, yay, so now secure boot is enabled, and we can install Windows 11!
All good? Well, almost.</p>
<p>I found that once I enabled secure boot, my display reverted to a
1024x768 screen. This turned out to be because I was using older
unsigned drivers, and since we're using Secure Boot, that's no longer
allowed, which means Windows reverts to the default VGA driver, and that
<em>only</em> supports the 1024x768 resolution. Yeah, I know. The solution is
to download the virtio-win ISO from one of the links in the <a href="https://github.com/virtio-win/virtio-win-pkg-scripts/">virtio-win
github project</a>,
connecting it to the VM, going to Device manager, selecting the display
controller, clicking on the "Update driver" button, telling the system
that you have the driver on your computer, browsing to the CD-ROM drive,
clicking the "include subdirectories" option, and then telling Windows to
do its thing. While you're there, it might be good to do the same for any
unrecognized devices in the device manager.</p>
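<p>Connecting the ISO can be done from the GUI as usual, or (a sketch on my
part; the path is illustrative, and "win10" is again a placeholder VM name)
from the command line:</p>
<pre><code># attach the virtio-win ISO to the VM as a CD-ROM device
virt-xml win10 --add-device --disk path=/var/lib/libvirt/images/virtio-win.iso,device=cdrom
</code></pre>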
<p>So, all I have to do next is to get used to the <em>completely different</em>
user interface of Windows 11. Sigh.</p>
<p>Oh, and to rename the "w10" VM to "w11", or some such. Maybe.</p>
Planet Grep now running PtLinkhttps://grep.be/blog//en/computer/Planet_Grep_now_running_PtLink/2022-07-26T10:15:48Z2022-07-23T18:48:27Z
<p>Almost 2 decades ago, <a href="https://planet.debian.org">Planet Debian</a> was
created using the "planetplanet" RSS aggregator. A short while later, I
created <a href="https://planet.grep.be">Planet Grep</a> using the same software.</p>
<p>Over the years, the blog aggregator landscape has changed a bit. First
of all, planetplanet was abandoned, forked into <a href="https://intertwingly.net/code/venus/">Planet
Venus</a>, and then abandoned again.
Second, the world of blogging (aka the "blogosphere") has largely
disappeared; the more modern world uses things like "Social Networks",
which makes blogs less relevant these days.</p>
<p>A blog aggregator community site is still useful, however, and so I've
never taken Planet Grep down, even though the number of blogs carried on
Planet Grep has been dwindling over the years. For almost 20 years now,
I've just run Planet Grep on my personal server, upgrading its Debian
release from whichever was the most recent stable release in 2005 all the
way to buster, never encountering any problems.</p>
<p>That all changed when I did the upgrade to Debian bullseye, however.
Planet Venus is a Python 2 application that was never updated to Python 3.
Since Debian bullseye drops support for much of Python 2, focusing only on
Python 3 (in accordance with Python upstream's policy on the matter), I
have had to run Planet Venus from inside a VM for a while now, which works
as a short-term solution but not as a long-term one.</p>
<p>Although there are other implementations of blog aggregation
software out there, I wanted to stick with something (mostly) similar.
Additionally, I have been wanting to add functionality to it to also
pull stuff from Social Networks, where possible (and legal, since some
of these have... scary Terms Of Use documents).</p>
<p>So, as of today, Planet Grep is no longer powered by Planet Venus, but
instead by <a href="https://salsa.debian.org/wouter/ptlink">PtLink</a>. Rather than
in Python, it is written in Perl (a language with which I am more
familiar), and I have plans to extend it in ways that have little to do
with blog aggregation anymore...</p>
<p>There are a few other Planets out there that also use Planet Venus at
this point -- Planet Debian and Planet FSFE are the two that I'm currently
aware of, but I'm sure there are more.</p>
<p>At this point, PtLink does not yet have feature parity with Planet
Venus -- as shown by the fact that it can't yet build either Planet Debian
or Planet FSFE successfully. But I'm not stopping my development here, and
hopefully I'll have something that successfully builds both of those soon,
too.</p>
<p>As a side note, PtLink is not intended to be bug compatible with
Planet Venus. For one example, the configuration for Planet Grep
contains an entry for <a href="https://lefred.be/">Frederic Descamps</a>, but
somehow Planet Venus failed to fetch his feed. With the switch to
PtLink, that seems fixed, and now some entries from Frederic seem to
appear. I'm not going to be "fixing" that feature... but of course there
might be other issues that will appear. If that's the case, let me know.</p>
<p>If you're reading this post through Planet Grep, consider this a public
service announcement for the possibility (hopefully a remote one) of
minor issues.</p>