eIDconfig-belgium
Someone over at Novell wrote an application that enables eID in various applications with a simple click: you can enable web authentication in Firefox, and email signing in Thunderbird and Evolution. It also does stuff I didn't even know was possible under Linux—enabling eID card use under OpenOffice.org.
So I'm now a bit in doubt as to what I should do with this. I have an open bug report against libbeidlibopensc2 claiming that the mozilla/firefox plugin should be registered automatically when you install the package, rather than having to go through a bit of javascript in some HTML file, and I kind of agree with that. I could analyze the C# code to see how the Novell people do it, translate that to C (since C# doesn't work on every architecture Debian supports, and besides, I don't want to pull in yet another huge list of dependencies on top of wxWidgets and Qt), and call the relevant code from postinst to enable the plugins system-wide. OTOH, allowing every user to make the choice for themselves could be a good idea as well. Then again, that's not really the Debian way (if installed, it should just work). Then again, I don't think enabling these plugins system-wide still allows disabling them on a per-user basis.
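For the per-profile registration, I suspect something along these lines would do for Mozilla products, using NSS's modutil; the module name, profile directory, and library path below are guesses on my part, not necessarily what libbeid actually ships:

```shell
# Register the eID PKCS#11 module in a Firefox profile's security
# module database. The profile directory, module name, and library
# path are placeholders; adjust them to the actual installation.
modutil -dbdir ~/.mozilla/firefox/default.xyz \
        -add "Belgian eID" \
        -libfile /usr/lib/libbeidpkcs11.so
```

Doing this from postinst would mean iterating over every user's profile directory, which is part of why the per-user versus system-wide question isn't trivial.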
Guess I'll have to give it some thought—other people's insights are appreciated.
Planet.grep.be: about
I created an 'about' page over on planet.grep.be, which, apart from listing who's behind Planet Grep, also outlines the guidelines I've been following in (not) allowing people on Planet Grep.
In case anyone is interested...
Update: now with actually working link. Whee.
Reverse hard disk failure
There are two ways of telling a story:
The first way is just reciting what happens: I did foo. Then I did bar. That didn't work, so I did quux. The second way is reciting the mistakes you shouldn't have made. Lessons learned: don't do foo, because that will result in bar, and a visit to the hospital. No, I won't be updating the package, as I promised.
In the interest of, well, keeping things interesting, here's a tale of what I did last week, in reverse mode:
- It's not a good idea to forget to close your laptop bag before you throw it on your back, since that might catapult out the laptop and break your hard disk.
- When ordering a replacement hard disk, it usually helps to call the supplier to confirm the payment so that you can actually go and get the disk, rather than waste several days waiting.
- When installing MacOS on a secondary partition after installing Debian and spending several hours fine-tuning the Debian installation, do not run the MacOS partitioner. It will reformat your Linux partitions as HFS+ ones.
- Remember that it takes several hours to download the DVD image to install Debian. Downloading that one rather than the CD image, or (better yet) going to the office where you have such a DVD lying around, in order to "save time", is silly.
- If you restore from backups, and your backup program in its default configuration writes to /tmp, do not forget that Debian in its default configuration clears out /tmp on boot. This is especially important if, due to the relative speed of tape streamers and network hardware, restoring from backups takes several hours and you shut down the laptop after completing the restore so that you can go home. Sigh.
The good news is that I now have a 160G hard disk rather than a 60G one. Whee.
Whereami or guessnet?
Before my hard disk adventure, I used to use whereami to manage my laptop network setup. Whereami is pretty good. It allows me to detect whether a cable is connected to my NIC, and not even bother wasting several minutes in trying to get a DHCP lease. It can run tests on several interfaces, and decide that I am "home" if it finds a known network on either the wireless or the wired interface. Most importantly, it allows me to build a script based on where I actually am, so that I can have my laptop modify several configuration settings that depend on the actual network I'm on -- say, whether or not to use a proxy, or whether or not to use an SMTP smarthost.
I was having issues with whereami lately, however, and thought I'd look for something else. After all, it's true what they say: whereami does not properly integrate with ifupdown. If you use whereami, modifying the network configuration involves "/etc/init.d/whereami start" or some such; just "ifup eth0" will break things horribly. Additionally, I was having issues with my WPA setup; there is a testwpa, but it does not seem to work for me for some reason—and I can't seem to figure out what's going on in order to file a bug.
Apart from the two issues above, I was having some minor annoyances with my setup; enough so that after reinstalling my system, I thought to try something else. The guessnet package is designed to properly integrate with ifupdown, so I thought I'd give it a try.
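For reference, the way guessnet hooks into ifupdown is through a mapping stanza in /etc/network/interfaces; here's a sketch from memory (the profile names, addresses, and MAC are made up, and the exact test syntax should be checked against guessnet's documentation):

```
mapping eth0
    script guessnet-ifupdown
    map timeout: 5
    map default: home

iface home inet dhcp
    test peer address 192.168.1.1 mac 00:11:22:33:44:55

iface work inet static
    address 10.1.2.3
    netmask 255.255.255.0
    test peer address 10.1.2.1
```

With that in place, a plain "ifup eth0" runs the tests and picks the matching logical interface, which is exactly the ifupdown integration whereami lacks.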
I don't think it is what I'm looking for, though.
# route_data = ${lookup{
#     ${readfile{/etc/whereami/topmost_location}}
#   }lsearch{/etc/exim4/smarthosts}}
route_data = ${lookup{
    ${extract{eth0}{
      ${readfile{/etc/network/run/ifstate}}
    }}
  }lsearch{/etc/exim4/smarthosts}}
  ${lookup{
    ${extract{wlan0}{
      ${readfile{/etc/network/run/ifstate}}
    }}
  }lsearch{/etc/exim4/smarthosts}}
The upper version is what I used with whereami (but not active, see the comment marks :). My script would write something to /etc/whereami/topmost_location, and several other things (including my exim configuration, as shown above) would use that to automagically modify their configuration. Which is pretty cool.
The lower version is the equivalent with guessnet. How ugly. And then I didn't even see a way to keep ifup from trying to 'up' eth0 when the MII tells me there is no cable in the NIC. Oh my.
Real Men don't take backups
They figure out how the filesystem works, and start hacking. Heh.
Actually, I did the same thing once myself. The difference between Carlo and myself was that in my case, only a few files that were not in the backup had been deleted, rather than 3G worth of data; hence, a hex editor, a cursory glance at the Linux source code, and a few hours to kill were all that was needed to recover.
(via)
(With apologies to Linus, and any Real Women reading this)
Public service announcement.
The "Data Display Debugger" is sometimes a great tool for, well, debugging stuff. Those who have never met it before, please take a look—it has some nice qualities not found in other debuggers (or gdb wrappers).
However, here is a public service announcement to the ddd developers:
STOP MESSING WITH MY LAYOUT, DAMMIT!
Here ends this public service announcement.
I mean, seriously. Why the fuck do these widgets need to dance around on my screen?
Generating ogg theora from individual frames
Hello World,
When I have absolutely nothing useful to do (which hasn't happened in quite a while now), I sometimes like to play with povray. Not that I'm very good at it—I'm a programmer, not an artist—but it still is good fun.
What I usually prefer doing is creating an animation. This is rather easy with povray; you can have it modify certain values in your scene based on some "clock" variable, and then generate a series of different frames based on that one scene description. This creates a large number of image files in the format that you specified (say, .png) in your output directory.
Of course a number of frames is not an animation yet. If you want that, you need to do some postprocessing on these images. ImageMagick to the rescue.
convert foo*.png foo.mpg
This works, but I don't like the low quality which the MPEG2 format gives me; and it also requires me to install the non-DFSG-free and slightly buggy MPEG2 reference implementation before it'll work. That's not very nice.
What I'd really like is something similar that works with Ogg Theora. So, dear lazyweb, is something like that even possible? Preferably it would not require me to know too much about Ogg Theora, but just take a bunch of files and output a .ogg file—much like the above convert command line.
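In the meantime, one approach that might work, assuming an ffmpeg built with Theora (libtheora) support, which I haven't verified on my own machine, would be something like:

```shell
# Encode a numbered frame sequence (foo001.png, foo002.png, ...) to
# Ogg Theora at 25 fps. Assumes ffmpeg was built with libtheora;
# the filename pattern and quality setting are just examples.
ffmpeg -r 25 -i foo%03d.png -vcodec libtheora -qscale 7 foo.ogg
```

If anyone knows whether the Debian ffmpeg packages include the Theora encoder, that would settle it.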
Thanks,
Perl
Martin blogs about how the Camel book mentions that scalars, arrays, and hashes have completely separate namespaces, as if this were a problem. Well, I disagree.
As with anything in Perl: yes, obviously it's possible to confuse yourself with this feature, and yes, obviously it's true that if you do this, you will get to keep both pieces of your program. So don't do that then.
However, when used properly, I find that it actually helps me improve readability of my programs.
Let's say we have a subroutine which returns a reference to a filehandle to a bunch of data in a format that we know how to parse. We don't know what the filehandle points to (and we don't care, either), but we do know what the data is. So we store the filehandle in some scalar, and then proceed to parse the data into a hash.
Now we could use variable names such as $fh and %hash, but I hope everyone agrees with me that this is very bad practice. So, instead, we should name the variables after what they contain. Say our data file contains books:
$books = get_books_fh();
while (<$books>) {
    (... parse $author out of $_ ...)
    push @{$books{$author}}, $_;
}
The filehandle to which $books is a reference points to some useful data; and the %books hash table contains the exact same data, only structured in a different, more useful way. So why give them different names? They're the same thing; call them by the same name. Personally, I find this makes a whole lot of sense. And you can do the same thing with arrays vs hashes, or scalars vs arrays.
Of course you can also write code that uses $somevar and @somevar, with no semantic connection between the two. That is stupid, so don't do it. The fact that you can do stupid things with Perl, however, should not mean the language sucks.
You see, unlike some other programming languages I could mention, Perl does not assume its users are stupid, or inexperienced, or dummies, or whatever. Yes, that does mean that new users will get confused; but it also allows experienced users to Get Stuff Done in a proper way. And, well, I'm sure you can do stupid things with python, too.
Dear lazyweb,
In an effort to improve the performance of nbd-server, I wrote a patch to make it use some common sendfile() implementations (specifically, Linux- and FreeBSD-style sendfile calls). Unfortunately, when I test the Linux version (I haven't tested the FreeBSD version yet), the server outputs some garbage in front of the actual data that it needs to send; as a result, the client obviously can't make heads or tails of it, and the connection is dropped.
However, when I run it inside gdb, everything is fine. When I run strace on the server, I don't see any obvious errors. I tried the DODBG version of the server (see the code for details), but that didn't help me.
At this point, I'm pretty clueless as to what is going on. If anyone were to give me some hints or pointers, I would be eternally grateful.
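One classic cause of mangled output that I still need to rule out is sendfile()'s partial-write behaviour: on Linux it may transfer fewer bytes than requested, so the call has to be wrapped in a loop. A minimal sketch of what I have in mind (sendfile_all is an illustrative name, not actual nbd-server code):

```c
#include <errno.h>
#include <sys/sendfile.h>
#include <sys/types.h>

/* Send exactly len bytes from in_fd, starting at offset, to out_fd.
 * sendfile() may transfer fewer bytes than requested, so loop until
 * everything is out. Returns 0 on success, -1 on error or early EOF. */
static int sendfile_all(int out_fd, int in_fd, off_t offset, size_t len)
{
    while (len > 0) {
        ssize_t n = sendfile(out_fd, in_fd, &offset, len);
        if (n < 0) {
            if (errno == EINTR)
                continue;       /* interrupted; retry */
            return -1;
        }
        if (n == 0)             /* unexpected end of file */
            return -1;
        len -= (size_t)n;
    }
    return 0;
}
```

Another thing to watch would be mixing sendfile() with buffered writes: if the reply header goes out through stdio and is only flushed after the sendfile() call, the bytes arrive on the wire in the wrong order.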