
Cleanup: clarification

If I may believe my mailbox and blog comment system, there seems to be some confusion over my cleanup post of a few days ago, so allow me to clarify.

  • It is not up to me (or, for that matter, up to anyone but the author) to decide what anyone should write on their blog. However, it is important that the content of Planet Grep remains interesting, and the only way for me to do that is to remove feeds that provide content which, for the most part, is not that interesting for people who read Planet Grep. This is not a personal attack on anyone.
  • While I have removed blog feeds that for the most part consisted of nontechnical content, this is in no way meant to imply that people should only send technical content to Planet Grep; on the contrary. This is just a belated follow-up on the results of the survey, where it was clearly indicated that while an occasional nontechnical post is welcomed by most people, such content should definitely not be a feed's main topic.

Hope that explains.


Debconf9

I'm a bit late in announcing this, I guess, but...

I'm leaving tomorrow evening, in fact. Still shitloads to do, but I'm sure I'll get there.

See you all there!


Serial API

I've previously expressed my "love" (ahem) for the serial port protocol. There are just too many options.

Now, while I dislike the hardware, I had never had the "joy" of working with the POSIX serial API. Until now, that is.

If I were nine, I'd say mommy, make it go away!

Whoever came up with the idea of the CREAD control flag (which enables or disables reading from the serial port), and made disabling reading from said port the default, should face criminal charges for making life unnecessarily miserable.
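For what it's worth, the workaround is trivial once you know about it. A quick sanity check from the shell (the device path is an assumption):

stty -F /dev/ttyS0 -a            # look for "cread" vs "-cread" in the output
stty -F /dev/ttyS0 cread clocal
# the termios equivalent, before calling tcsetattr():
#     options.c_cflag |= (CREAD | CLOCAL);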

That is all.


RepRap

Stumbled upon RepRap. Seemed interesting. Before trying to build my own, however, I thought I'd have a look at their software.

So, I downloaded the thing. Then let out a deep sigh, as the bloody thing uses Java. And they still live in the middle ages, since the 'Linux version' ships with some .so files for i386 only. Fidgeting around until I find the right amd64 version of the .so file seems somewhat hard.

I think I'll pass on that one, for now.

Whenever someone tells you that 'java' is the right solution if you're looking for 'portability', they're on crack. Seriously. I'll have C and POSIX any day, kthxbye.


RepRap: followup.

I received some comments on my previous blog post; one of them, I guess, was from one of the people who help develop RepRap, who pointed out that the Java software is not the only option: there are apparently alternatives in Python, and a C-based one in development. Goodie.

Another managed to put out this gibberish:

I'd just like to point out that C and POSIX isn't very easily portable to the majority of computer systems that users use (ie. Windows), not to mention that C doesn't have a whole lot of abstractions away from the OS (eg. building up path names is not easily portable in C).

My friend, that is why they have these things called 'libraries' and 'Cygwin', or even 'MinGW'. There are many programs that were written in C and that are portable across Windows and POSIX; Apache and OpenSC, to name but a few. I'll grant you that it is more work to develop cross-platform applications in C, but that doesn't mean it can't be done. At least they'll be easier to install.
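To illustrate, a minimal sketch; the cross-compiler name is the one Debian's mingw32 package shipped at the time, so treat it as an assumption:

gcc -o hello hello.c                        # native POSIX build
i586-mingw32msvc-gcc -o hello.exe hello.c   # same source, Windows binary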


Long Live Mass Travel (not)

I blogged a few days ago that I'd be leaving on the 14th for Cáceres, with a slated arrival on the 15th. However, as it turned out, I had misread my tickets; apparently they were valid for leaving on the 15th, arriving on the 16th.

So, my trip to DebConf was supposed to be:

  • 2009-07-15 15:00 Mechelen - Brussels ~15:30
  • 2009-07-15 16:13 Brussels - Paris Nord 17:36
  • 2009-07-15 19:45 Paris Austerlitz - Madrid Chamartin 2009-07-16 09:10
  • 2009-07-16 some time Madrid - Cáceres

Unfortunately, though, something in the neighbourhood of the Paris Nord train station caught fire, and the fire department required that, for safety reasons, the electricity in the station be shut down (which is understandable). As such, my train was delayed by approximately two hours, and I missed my connecting train to Madrid. So they offered me another ticket to Irún, where I was to take the 8:25 (or some such) train to Madrid:

  • 2009-07-15 15:00 Mechelen - Brussels ~15:30
  • 2009-07-15 16:13 Brussels - Paris Nord 19:30
  • 2009-07-15 19:50ish Paris Nord (subway) - Paris Austerlitz 20:05
  • 2009-07-15 23:10 Paris Austerlitz - Irún 2009-07-16 07:32
  • 2009-07-16 8:25 Irún - Madrid Chamartin

But then in Paris Austerlitz, the 23:10 train to Irún was delayed too, by about an hour, which meant that I just missed the train to Madrid as well. Additionally, I was smart enough to actually forget my Paris-Madrid ticket on the train at Irún station, so I had to go back to Hendaye, right across the French/Spanish border (a four-minute tram ride), to fetch it. When I got back to Irún, I had 15 minutes left before the train to Pamplona:

  • 2009-07-15 15:00 Mechelen - Brussels ~15:30
  • 2009-07-15 16:13 Brussels - Paris Nord 19:30
  • 2009-07-15 19:50ish Paris Nord - Paris Austerlitz 20:05
  • 2009-07-16 00:10 Paris Austerlitz - Irún 08:30something
  • 2009-07-16 09:17 Irún Colón - Hendaye 09:23
  • 2009-07-16 10:03 Hendaye - Irún Colón 10:07
  • 2009-07-16 10:30 Irún - Pamplona 13:...
  • 2009-07-16 14:35 Pamplona - Madrid Puerta de Atocha 17:38

Originally, my arrival was to be at 09:10 in Madrid Chamartin. With the Madrid-Cáceres train leaving at 16:something, I had not bothered to buy a ticket yet, instead planning to see whether a bus left sooner than that (and if not, I was going to buy a ticket in Madrid, rather than at home). But now, with all these delays, I couldn't even take that 16:something train. Goody.

Anyway, it turned out that there was still a train from Puerta de Atocha at 19:09, which got me into Cáceres at 23:00. Plenty late, but at least I'm here now.

Whee!


Death

GNOME-KEYRING MUST DIE

that is all.


I love Vista.

No, I don't use it, and I wouldn't want to. However, its hardware requirements are so ridiculously high that hardware manufacturers these days have no option but to seriously bump the performance of their systems. As a result, they ship with ridiculous amounts of RAM and CPU power while still keeping prices reasonable.

Of course performance still sucks your pants off if you do indeed use Vista, but not if you use an actual operating system.

I love Vista.


Security modules in firefox

One bug I have open against the belpic packages is that upon installation, the security module should be enabled automatically, so that the user doesn't have to do this anymore. While I tend to agree that this might make sense under certain conditions, the problem, of course, is making it actually work that way.

The SuSE folks have written a Mono utility to easily enable and disable modules in Firefox, Thunderbird, and OpenOffice.org. While that's fine, I'd like to be able to enable or disable this security module on a system-wide basis.

Unfortunately, unless I'm missing something, that doesn't seem possible. That is, I can use modutil from the libnss3-tools package to create a security module database in /etc/iceweasel/profile (which gets symlinked to from /usr/share/iceweasel/defaults/profile), but Firefox then conveniently ignores it.
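For reference, creating and populating such a database looks roughly like this; the module name is arbitrary, and the library path is an assumption based on the belpic packages:

modutil -dbdir /etc/iceweasel/profile -create
modutil -dbdir /etc/iceweasel/profile -add "Belgium eID" \
        -libfile /usr/lib/libbeidpkcs11.so
modutil -dbdir /etc/iceweasel/profile -list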

I'm guessing the reason for that is the fact that usually, such security modules want to have certificates to go with them. In that case, you wouldn't necessarily want to share them with other users.

But in this case—a security module that enables the use of a smartcard reader to read a key from a smartcard—enabling the module system-wide should not be a security risk.


Microsoft and the GPL

For some strange reason, people all over the net are oohing and aahing over Microsoft releasing some drivers to run Linux on their proprietary virtualization software. I'm oohing too; not because of the drivers, but because of all the buzz that surrounds them.

Ten years ago, I would have oohed and aahed, too. At that time, Microsoft was fighting open source and free software like a cancer. Today, they're not; they provide open source software for Windows themselves (such as an installer framework, which they distribute through SourceForge), and actively cooperate with many open source and free software projects through their open source labs. They even have a section of their website dedicated to open source software.

A large company like Microsoft can't survive if it tries to actively work against what the marketplace wants. The fact that they were indeed so actively fighting open source software is quite likely why the first decade of the 21st century has seen such a huge loss of market share for them; like any large company, they needed to adapt or lose. They've chosen the former; good for them, and it might be good for us too.

Over the past decade, I've seen Microsoft warming up to open source, to some extent. This is why I don't understand much of the 'Boycott Novell' lunatics; sure, I don't trust Microsoft enough yet to say that they have no plans that will negatively affect us; but that doesn't mean I will assume that evil plans are all they have. Unless proven otherwise, I will assume they have the best interests of their customers and/or shareholders as their main goal.

Which is why I was totally unsurprised to see a GPL patch from Microsoft at this point in time. Rather, I find it normal and expected behaviour: a continuation of an evolution that has been going on for the better part of a decade now.


Buildd maintenance

In Debian, I've been a buildd maintainer since 2001; most of that time was for the m68k port (where I am still active, though not as much as I used to be), but there's also been a short stint with armeb, and for a while now I've also been a PowerPC buildd maintainer. I used to do just one powerpc host at first, but now I maintain both malo and voltaire, with Philipp Kern doing praetorius.

This probably makes me one of the more experienced buildd maintainers in Debian today, together with the likes of LaMont Jones and Ryan Murray. I did do a talk about how this is all supposed to work at FOSDEM 2004, but that's five years ago now, and some things have changed since. Also, a talk that wasn't videotaped isn't very helpful if you weren't there.

So I thought I'd write up what it means to be a buildd maintainer. There's of course the documentation on the Debian.org website, but that only explains how the system works in theory; it does not explain what we buildd maintainers tend to do on a daily basis.

So let's have a look at that, shall we?

Basically, the work of a buildd maintainer is pretty monotonous, and an experienced buildd maintainer will usually have a set of scripts to help them. The work falls into three main categories. In order of frequency, these are:

  1. Log handling;
  2. State handling;
  3. Host and chroot maintenance

Log handling

The first is the most obvious one. Every time the buildd builds a package, it will send a full log of the build to buildd.debian.org and to me. The successful ones are signed with a simple script:

#!/bin/bash
# Fish the embedded .changes file out of a buildd success mail: delete
# everything up to the ".changes:" line and everything from the first
# blank line after it, then strip any 8-bit characters.
tmpfile=$(mktemp)
sed -e '1,/\.changes:$/d;/^[[:space:]]*$/,$d' "$1" | tr -d '\200-\377' > "$tmpfile"
cat "$tmpfile" > "$1"
rm "$tmpfile"

Easy: use a sed command to fish out the embedded .changes file, and write that back to the original file. I use a folder-hook to set this script as my 'editor' in mutt when I'm in my buildd mail directory; the result is thereby mailed off to the buildd. In that same folder-hook, mutt is also configured to send the reply gpg-signed in the 'traditional' format, without confirmation, and with just one keystroke, so that (after I have entered my GPG key passphrase) I can send off all the signed changes files in one go. A possible improvement would be to change the macro so that it works with mutt's 'tag' feature (it doesn't, currently), but that's not a big issue (currently, doing 100 mails takes a few seconds and some careful counting).
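The relevant bit of muttrc looks roughly like this. This is a sketch from memory: the script name is hypothetical, and the exact signing option names vary between mutt versions, so treat them as assumptions:

# sane defaults for all other folders
folder-hook . 'set editor=vim; set crypt_autosign=no'
# in the buildd folder: use the stripping script as editor, sign inline
folder-hook buildd 'set editor="~/bin/buildd-sign"; set crypt_autosign=yes; set pgp_autoinline=yes'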

Note the 'tr': this is to prevent 8-bit characters from appearing in the mail, which might otherwise be converted to their quoted-printable form in transit to the buildd; and since buildd-mail (the part that receives that mail) does not understand MIME, that would corrupt the GPG signature. This way we do lose a few characters from the changelog, but that doesn't really matter -- the source still contains the unmodified changelog entry.

With this script, I often handle my 'success' folder several times a day. It's no effort, anyway.

The somewhat harder but much more fun part of log handling is the handling of failure mails. Since there are loads and loads of possible failures, the scripts to handle them are somewhat more involved. I did receive a script from LaMont at some point, a few years ago, which I have since built on and improved. It's not perfect, but it does handle a few common cases with no extra input from me. Some of the others are not so easy, however.

One of the more common cases that cannot easily be automated is the buildd failing to install a certain package because 'foo depends on bar, but it will not be installed'. This is apt's way of telling you that bar depends on foobar, which depends on quux (>= 1:2.3.4-5), which depends on libfrobnitz2, but that has now been replaced by libfrobnitz3. Or some such. The only way to figure out what the hell the problem is, is to walk the dependency tree and figure things out from there.

There is an 'edos-debcheck' package that reportedly can help with this; personally, I wrote a set of Perl scripts that cache a Packages file into DBM files, and then allow you to walk over them to help figure out what's wrong. They're not perfect, but if I use the '-v' option to check-dep-waits and verify the output when it tells me about missing libraries, they can figure out the whole dependency tree I described above, allowing me to write a proper dep-wait response so that the buildd host will automatically retry the package when the missing dependency becomes available.
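Walking such a tree by hand, using the hypothetical package names from above, looks something like this:

apt-cache depends bar            # bar depends on foobar
apt-cache depends quux           # quux depends on libfrobnitz2
apt-cache policy libfrobnitz2    # "Candidate: (none)"? there's your culprit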

Also somewhat common and routine are things like:

  • transient network failures, in which case we use either 'retry' or 'give-back' if the buildd hasn't figured that out by itself and done the latter;
  • the maintainer uploading a new version of the package while the previous version is building, resulting in wanna-build firing off an email to the buildd host, which in turn results in buildd killing the build by removing the build directory; this is not always easily distinguishable from a regular failure, so I commonly respond to that mail with a failure message; if it did indeed fail because of a newer version, then buildd will notice that and ignore my mail;
  • the incoming.d.o Packages file (which is only available to buildd hosts, so don't ask) being out of sync with reality, which happens 4 times a day for about an hour; in this case build-deps will fail to install, requiring a retry or give-back;

and similar things.

Other things are less common; but because of that, they are not routine and require an in-depth investigation. Sometimes the fix is to just file a bug report and/or to mark the package as 'failed' (and let the maintainer or a porter handle the problem); sometimes the failures are due to a maintainer script in a package being utterly broken, resulting in either some build-deps being uninstallable or (worse) the buildd chroot being fucked up. Sometimes a build is interrupted halfway through, leaving the chroot in an unclean state (sbuild is not pbuilder, and does not remove and recreate its chroot between builds). This pushes us into category 3 of our work.

Basically, however, figuring out which is which takes some experience. Not all compilers are based on gcc (there are some really weird languages in Debian), and thus not all of their error output is the same; learning their different error modes can help quite a lot. Additionally, by continually compiling 10G worth of software, you'll be stress-testing your toolchain. If you've never seen an 'Internal Compiler Error' before, you will once you become a buildd maintainer, and it helps if you know what they are and how to deal with them (even if there isn't much one can do beyond filing bugs).

Obviously, handling failures takes more time than handling success mails does, and it's not something I do quite as often. The exact interval varies, but it's usually somewhere between a few days and one or two weeks—unless I suddenly stop receiving success mails from one of my buildd hosts, in which case I know something is utterly wrong and will usually investigate immediately.

State handling

With 'state handling', I mean managing the state of a package in the wanna-build database. There's help with this from the people on the debian-wb-team mailing list; call me old-fashioned, but I still consider this to be the final responsibility of the buildd maintainers. After all, the routine state changes are a result of decisions that I make; as such, if I fuck up, it should be me who fixes the fuckup. Also, if I mark a package as 'failed' because I believe the maintainer fucked up, then the debian-wb-team people may not know about my reasoning, and might give the package back only for it to fail again (although I would consider the latter pretty rare).

These requests are pretty common. Quite often, they're unnecessary—many maintainers are unaware of the intricacies of the wanna-build system, and may not realize that when a build is in dep-wait state, it will automatically migrate to needs-build once its dependencies are available. About as often, however, they are very much necessary, and, since regular Debian package maintainers do not have access to the wanna-build database, they require someone who does have access to said database to update it for them.

Having said that, there are cases where I will preemptively edit the wanna-build database. Usually this is to do something useful with packages in 'Building' state that have been in that state for far too long: either upload the package if its signature mail got lost (which happens once in a blue moon), or give the package back if its build was not attempted even though it is marked as such (this should not happen, but the system is not perfect and it does). Sometimes it is because I figured out that some common build-dependency (say, the GTK or Qt libraries) is in a transitional state and currently not installable; rather than having a build daemon try a bunch of packages and fail them all, I may want to note in the wanna-build database that it should not bother attempting those 75 packages until the GTK package is done. This isn't done as often on the official Debian machines (since the release managers will do it for me there), but on m68k we do need to do it ourselves.

These kinds of requests come in anywhere from once every few days to once every few weeks, and take little time to deal with.

Host and chroot maintenance

This is the hardest and least fun part of buildd maintenance, but it is just as necessary. Luckily, it is not as often needed.

Because Debian Unstable is a system in a constant state of flux, things will often break. This is even more of a problem in a buildd chroot, since it builds out of incoming; a maintainer may upload a package with a fucked-up postinst script, have its build succeed, but then see it fail spectacularly to install. The maintainer may notice that, and may upload a new package half an hour later. As such, the broken package will never end up on the system of a user of Debian Unstable; but between the upload of the broken package and that of the fixed one, the broken package will be available to buildd hosts, which may use it to completely and utterly destroy their build chroot. The joys of having a high turnaround time.

Luckily, Debian package maintainers are not stupid, and this kind of fuckup does not happen every other day. It does happen, however, and when it does, it often means manual work for the buildd maintainer. In the best case, it's a matter of syntactically fixing a postinst script and calling 'apt-get -f install' or 'dpkg --configure -a'. In the worst case (which is almost, but not quite entirely, unheard of), it's a matter of rebuilding the buildd chroot. In addition to that, a machine which runs 24/7 for the sole purpose of building packages tends to generate quite a lot of disk activity, which in turn tends to be detrimental to the disk in the long run. If not looked after properly, disks will die, taking the entire buildd chroot with them. That requires rebuilding it. Obviously, this last issue is dealt with by the Debian System Administration team in the case of official Debian hosts, but the same is not true for the m68k port.
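In that best case, the repair amounts to something like the following (the chroot path is an assumption):

chroot /srv/chroots/sid dpkg --configure -a
chroot /srv/chroots/sid apt-get -f install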

A somewhat more common thing that needs to be taken care of is the fact that buildd does not in all cases clean up after itself. For instance, when a new version of a package is uploaded to the archive between the time the buildd host built it and the time the buildd maintainer sent back the signed .changes file, buildd will say "I haven't got that package taken as Building" and refuse to upload it. This makes sense (you can't upload an old version of a package, since there wouldn't be any source for it, and dak would refuse the upload), but it does mean that the built packages aren't cleaned out. This is arguably a bug in buildd-mail; over time, it results in the disk filling up with outdated packages, which require manual work from the admin to remove. I recently (as in, a few hours ago) finished a script to check each .changes file in the "build result" directory against the wanna-build database, and list those that are no longer necessary. I already had a script that, given a list of .changes files, removes every .deb file listed in them, and then removes the .changes files themselves. Combined, these make that kind of work somewhat less of a burden.
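The second of those scripts could look roughly like this; a sketch, not the actual script:

#!/bin/bash
# For every .changes file given on the command line, delete the files
# listed in its Files: section, then delete the .changes file itself.
for changes in "$@"; do
    dir=$(dirname "$changes")
    # lines in the Files: section look like
    # " <md5sum> <size> <section> <priority> <filename>"
    awk '/^Files:/ { infiles = 1; next }
         /^[^ ]/   { infiles = 0 }
         infiles && NF == 5 { print $5 }' "$changes" |
    while read -r file; do
        rm -f "$dir/$file"
    done
    rm -f "$changes"
done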

As said, this kind of work does not need to be done all that often; for instance, I just cleaned the build result directories on voltaire and malo, my two powerpc buildd hosts, and found old files from late 2008...

And that's it, I guess. It may seem like a lot, but in reality it isn't; the thing I've always liked about buildd maintenance is that you do something little for Debian every day, and it ends up being something big and helpful after a while.

Of course, the little things are the cherry on the cake. By looking at a lot of build logs, one eventually learns a thing or two about build systems, which is valuable knowledge. Getting build logs from the whole of Debian allows one to learn things about the archive that many people don't know about—for instance, did you know that we had a package called trousers? I didn't, until I signed the buildd log...

Update: changed the URL of this post to be under the buildd/ directory, rather than having it conflict with that and thus killing its permalink and making it impossible to comment on this post. Oops.


DebCamp 9: stuff

This has, by far, been the most productive DebConf ever, for me.

Not that this means all that much—mostly that previous DebCamps haven't been productive at all—but I still got a few things done.

There've basically been three things that I worked on: d-i support for the Intel SS4000-E; a belpic/beid upstream update; and a minor incremental NBD update.

The latter was simple, and was basically the first thing I did. I had a chat with Vagrant Cascadian, who does a lot of LTSP work in Debian, and added a few things to the package that will make his life a bit easier. Not a lot of work; as I'm also upstream for NBD, and as it's one of those packages that I've maintained for an eternity, I know the code pretty well. Half a day later, all the code was there.

D-i support for the SS4000-E wasn't that hard (most of the hard parts had already been done by Martin Michlmayr), but unfortunately some bits are not yet completely in order—mostly having to do with the fact that the original firmware has a kernel command line embedded in the kernel. As such, for now, you'll have to connect to the serial line in order to fix the RedBoot config; maybe we'll come up with a sane way to fix that in the future, but as long as we don't, that does mean you need a serial null modem cable.

Not that you need to solder anything (the main board has a connector for a regular serial port); you just need to plug the right cable into the right connector, so it's not that bad.
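A sketch of what that session looks like; the device name and speed are assumptions for a typical USB-serial adapter:

cu -l /dev/ttyUSB0 -s 115200
# interrupt the boot, then edit the settings at the RedBoot> prompt
# with its "fconfig" command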

The final thing was horrible: a piece of software that presumably works well, but that initially wouldn't even compile on my laptop because of pointer/int confusion; a build system made of shell scripts and qmake; and other similar things.

Eventually, I just gave up and uploaded what I had to experimental. It works, to some extent, but should be improved over the next few weeks. That's not for today, however.


Bison

The Bison parser expects to report the error by calling an error reporting function named "yyerror", which you must supply. It is called by "yyparse" whenever a syntax error is found, and it receives one argument. For a syntax error, the string is normally "syntax error".
If you invoke the directive "%error-verbose" in the Bison declarations section, then Bison provides a more verbose and specific error message string instead of just plain "syntax error".

Sounds good, right?

Well, no, not entirely.

syntax error, unexpected $undefined

Well, goody. Now I know what's going on.

note: yes, I do know that there are other ways to debug a Bison parser than just the parser error string. It's just that this string could have been more useful; like, say, by providing the line on which the error was found? The file I'm trying to parse here is pretty large, thank you very much.
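One of those other ways, for the record (a sketch):

bison --debug -d parser.y   # --debug compiles YYDEBUG trace support into the parser
# in the C driver, set "yydebug = 1;" before calling yyparse() to get a
# token-by-token trace on stderr; adding %locations to the grammar makes
# line/column information available to yyerror as well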


RC NMUs

A few days ago, during DebCamp9, someone NMU'ed belpic to close #525593, a 'failed to build from source' bug that had been filed against it on April 25th, 2009.

Since such bugs are (for good reason) deemed release critical, my package was facing the prospect of being thrown out of testing, and this person (who shall remain nameless in this blog post, since it is not about fingerpointing; but if you must know, check the bug report to find out who) did an upload to prevent that from happening.

In and of itself, this is a good thing. I don't like it when my packages have bugs, and I generally try to keep their number as low as possible. There used to be a time when my maintainer bug listing had zero open bugs most of the time, and though I long ago gave up trying to keep it that way, it is still a goal I would like to reach at some undefined point in the future. Yet with this particular upload I was quite unhappy, even going so far as to cancel it, preventing it from entering the archive; and when, over lunch, I discussed the situation with Adeodato Simó, he seemed to feel that I had improperly blocked the upload, and that I should instead have allowed it to proceed. He seemed almost hostile to my notion that the uploader should have coordinated with me more than he did. I tried to explain why I felt the uploader had done something wrong, but I do not think I convinced him.

After having thought about it for a few days now, I still don't feel I was wrong in my actions; and I don't like the idea of possibly being a bad maintainer. So here's my position on the whole thing:

First of all, the bug was indeed open for a long time. The reason is that I didn't have much spare time in May to look at it. I also had no clue as to what the problem was, which makes it kind of hard to fix. I did have some spare time in June, but since Belgian citizens have to file their tax returns in that month, and since you may need the software in these packages to be able to do so, I did not think it proper to do an upload before the deadline of June 30th; I did not want to have to scramble to fix a botched upload at exactly the wrong time.

On the 11th of this month, a patch that would fix the bug was sent to the bug report. This was the weekend before I left for DebCamp/DebConf, so I postponed working on it until I arrived here in Cáceres.

Secondly, what I'm most upset about is the fact that the upload wasn't tested. It couldn't have been: the only way to test this package is with a smartcard reader and a Belgian ID card, and since the uploader is not a Belgian national, I doubt he has such a card. I don't personally do NMUs that often; but when I do, I consider it my duty to perform extensive testing, even more so than with my own packages. In this particular case, such testing is quite important, as I've had issues with this software in the past, where new builds wouldn't work properly for reasons I haven't fully been able to pin down.

Of course the uploader couldn't have known all that, but this is exactly why he should have talked to me before doing the upload. In theory, I could have documented all my knowledge about the package; but in practice, it's pretty hard to do that well (i.e., many of the things I know about my packages involve 'feeling', 'instinct', and 'experience', which do not easily translate into words).

For clarity: I'm not upset about the fact that I've been NMU'ed. That's happened in the past, even for this particular package, and that's been perfectly fine. What I am unhappy about is that the NMU went to DELAYED/1 (which gave me very little time to intervene) and that it happened without prior coordination with me.

So, was I really wrong in preventing the upload?

note: since the new upstream release that was available was eventually uploaded to experimental rather than unstable, I just uploaded 2.6.0-7, which also fixes the bug in question.


Extremadura: gnuLinEx and NBD

So obviously I already knew that the region of Extremadura uses a version of Debian they call gnuLinEx, but I didn't know the specifics. As such, it was nice and interesting that they offered us the option of visiting a local school, where we could see an installation of gnuLinEx in action. Obviously, I went.

This was an interesting experience, for sure. When I arrived in Cáceres, I learned from Vagrant Cascadian that the school installations make extensive use of LTSP. This, in turn, uses NBD. Since they run this on 80,000 computers, it's quite likely that they're the largest NBD installation in the world. I had no clue.

So, today, noticing that nbd-client was used on the local machines, I had a short chat with José, one of the guys from gnuLinEx who is a Debian Developer, about the fact that I've been wanting to do some work on NBD's performance (mostly profiling runs and the like), but don't have the setup to do so efficiently. To cut a long story short: I now have a test lab several times the size of the country I live in. Whee.

I had my camera with me, and took some pictures. I'll upload them soon, but have some other, more urgent, matters to attend to now (such as 'eat').

Later.


The lying will stop

A few hours from now, this site will stop lying in its section of past events of the same type.

One might think I'd be happy about the end of lies; but given what it implies, not quite so.

I guess I can't wait until some other future site starts lying about past events.

We'll see.


Passwords

The Debian System Administrators decided, apparently, that disabling password logons is a good thing that warrants a 'Good News' post.

Allow me to politely disagree, for two reasons:

First, an SSH key is a password that is stored on a hard disk, while a 'regular' password is stored only inside someone's brain. While torturing someone to get at their password is arguably possible, it is not possible to do so without that person noticing. The same cannot be said about someone secretly stealing a file from someone else's hard disk; and while it is certainly possible to protect an SSH key with a passphrase, it is not at all required in order to use such keys. As such, on the server end you have no way of knowing whether a remote client is in fact the person they claim to be, just because they happen to have an SSH key that matches the original.
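(For what it's worth, protecting an existing key with a passphrase is a one-liner; the key path here is just the default:)

ssh-keygen -p -f ~/.ssh/id_rsa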

Second, security is not accomplished by forcing people to use things they do not want to use. If you do that, they will find ways to work around your security—leaving you with no security at all.

But oh well, it's not my call to make, so whatever.
