New toy
A while back, it occurred to me that playing the flute and whistling on a bottle use the exact same technique; but, in my opinion, the sound that comes from a bottle has some interesting overtones, making for a nicer sound. Since the material an instrument is made of is in large part responsible for the overtones it produces (apart from the technique used by the musician), I wondered whether anyone had ever come up with the idea of making a flute out of glass.
Turns out that, of course, this has already happened. In fact, the modern concert flute wouldn't have been possible without one Claude Laurent, a Parisian instrument maker who received a patent for "a new [method of] making flutes from crystal"; part of his patent described a method of attaching the keys to the instrument. It was this method that Theobald Boehm used when he invented the fingering system that defines the modern concert flute.
The price of an intact Laurent flute today is, of course, beyond my budget; and it doesn't look like there are any glass flutes being made that use the Boehm system. But that doesn't mean there aren't any glass flutes being made anymore; indeed, a short look around the Internet quickly turned up Hall Crystal Flutes, a family-owned business from Rochester in the US state of Washington. As the price was not too insane (49 USD for the instrument, plus almost the same amount for shipping it to Belgium), I ordered one of their piccolos in C.
Last Wednesday, a woman from the post office rang my doorbell with a package containing the new instrument. Unfortunately, however, it came with a customs charge of slightly more than what was in my wallet at the time, which meant I could only fetch it from the post office the next day. At least I could pay with plastic there.
After having had it for a few days, I can make a few observations:
- For someone who's been used to the Boehm system for as long as I have, I expected the fingering system to be very confusing, but it's not too bad.
- As I bought a piccolo and not a flute, the sound is fairly sharp, and not entirely what I expected of it. Having said that, it's certainly not too bad, and I expect I'll be able to improve with time.
- Playing the highest notes on the fingering chart is challenging, and I've thus far not been able to produce the very highest note on that chart. Since playing a piccolo (any piccolo) is always slightly more difficult than playing a regular flute, however, especially in the higher octaves, that's not too unexpected.
- Playing sharp or flat notes requires me to open one of the holes slightly. Getting a correct sharp or flat obviously requires opening the hole by exactly the right amount, which will take quite some practice to do well. But I'm sure I'll get there, eventually.
I've been playing a few diddlies on it for the past few days, and it's been fun so far. We'll see where this leads us.
On PHP
This dude nails it. Well, almost; I can't say I agree with the Python bit. But other than that, yeah, that's pretty much what's wrong with PHP.
(Add that to what Google seems to consider my most popular bit of code, ever, and, well, hmpf).
LOAD: tutorial on Debian Packaging
Two weeks ago, I was at LOAD, where I did a tutorial on Debian packaging. Unfortunately, I wasn't doing very well physically, and as a result my preparation wasn't what I had hoped for. It ended up being not so much a tutorial as me walking the audience through packaging a piece of software (in this case, NBD) with debhelper in dh mode. Not very difficult, but since there's nothing for people to fall back on afterwards, it may help if they have a write-up of what I said during the session; and I promised to put it on my blog. So, here goes. However, note that the canonical written tutorial on Debian packaging is here.
With apologies to readers of Planet Debian, for most of whom all this is probably old hat.
Unlike RPM packaging, in Debian the data that tells the Debian packaging system how to package a piece of software is functionally split among a number of files. All these files go in a top-level directory in the source package called (unsurprisingly) 'debian'.
There are four files that are required for every Debian package: the control, changelog, copyright, and rules files. Without these files, dpkg-dev will fail to produce a package—any package.
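For a hypothetical source package, then, the layout looks something like this (the names are illustrative):

mypackage-1.0/
  debian/
    changelog
    control
    copyright
    rules
  (upstream source files)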
The first, the control file, contains metadata on packages: the name of the package, its description, the dependencies, etc. Basically, almost all the information you see when you run 'apt-cache show <package>' (with one exception) is in the control file. If you have a source package that builds multiple binary packages, then you should have multiple 'Package' stanzas in the control file.
In my opinion, creating a control file is most easily done by copying a control file from another (similar) package and modifying it to suit the software you're dealing with.
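To give an idea, here's roughly what a control file might look like for a hypothetical package called 'hello'; all the names and values are made up, but the structure is what matters:

Source: hello
Section: utils
Priority: optional
Maintainer: Jane Doe <jane@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.3

Package: hello
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: short one-line description goes here
 The long description goes here; every continuation line
 starts with a single space.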
The one exception, the one bit of metadata that is not found in the control file, is the version number: that data is contained in the changelog file. This file looks like a free-form file for the most part, but it really is a machine-parsable format; as such, it's best edited with specialized tools, such as debian-changelog-mode in the emacs editor, or the debchange script (also available as dch) from the devscripts package. In the changelog, you should document any changes you make to the package, making sure the version number (top line), distribution (unstable, experimental, stable, ...; top line), urgency (top line; used mainly when testing needs to be updated urgently for things like security updates), author (bottom line) and date and time (also bottom line) are correct.
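For the same hypothetical 'hello' package, an entry might look like this (note the exact spacing, which is part of the format; the tools mentioned above produce it for you):

hello (1.0-1) unstable; urgency=low

  * Initial release.

 -- Jane Doe <jane@example.com>  Tue, 17 Apr 2012 20:13:02 +0200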
The copyright file, unsurprisingly, should contain the copyright statements of the original software, and the license (or a reference to a copy of the same license text under /usr/share/common-licenses). This file is still free-form in most packages today, although a machine-readable format for it has recently been defined. Of the four required files, this is probably the most boring one, but hey, we can't like everything we do.
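As a sketch, a minimal machine-readable copyright file for our hypothetical 'hello' package, assuming it's licensed GPL-2+, could look something like this:

Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: hello

Files: *
Copyright: 2012 Jane Doe <jane@example.com>
License: GPL-2+
 This program is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation; either version 2 of the License, or (at
 your option) any later version.
 .
 On Debian systems, the complete text of the GNU General Public License
 version 2 can be found in /usr/share/common-licenses/GPL-2.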
The last file, the rules file, is where all the action happens. This file is a Makefile, whose targets are called by various parts of the build system.
The rules file has a number of required targets (such as build, clean, and others), but if you're using debhelper in dh mode, you don't need to worry about those; instead, you can use the following simple rules file:
#!/usr/bin/make -f
%:
	dh $@
With the first line, the shebang, we make clear that this is a makefile, and that it should be called by make. The next two lines form a generic target (the % is a wildcard for make), with just one command: 'call dh with the name of the target being called'. Since dh implements all required rules targets, that immediately gets you a working package. Go ahead, try! Run 'dpkg-buildpackage -rfakeroot', and see what happens.
Did that work? Maybe, maybe not. First, you'll have seen loads of warnings about something involving a "compatibility level". This is from debhelper; this compatibility level, or 'API level' if you wish, allows debhelper to move forward and make incompatible changes without breaking all the existing packages out there; whenever the compat level is bumped, some new functionality will only be made available if you also raise the compat level in your source package (and then you'll know you may have some changes to make in your package). Since the original debhelper, over a decade ago, did not have a compat level, the absence of a compat level signals compat level 1, which is far outdated now, and about to be unsupported. Hence the warning. The fix is simple: create a file debian/compat containing a single number: the compatibility level you're working with. It's best to use the most recent level which debhelper supports when you create your package, which as of this writing is level 9.
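In other words, something as simple as this, run from the top of the source tree, makes the warnings go away:

echo 9 > debian/compat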
Ignoring the compatibility level, if you're building a software package which uses a well-established build system, and you only build one binary package out of that, chances are pretty high that everything worked as expected. If not, you'll have more work.
Building multiple packages requires that you tell debhelper somehow which file goes in which package. You can do this with a file debian/package.install, where package is the name of the binary package you're building.
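For instance, if our hypothetical 'hello' source package also built a binary package 'hello-utils', a file debian/hello-utils.install might contain something like the following (the paths are made up for the example):

usr/bin/hello-extra
usr/share/man/man1/hello-extra.1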
The install file is read by dh_install, one of many tools in the debhelper suite. You see, in the old days (before debhelper 7), debhelper was just a suite of tools, which required you to write a debian/rules file listing all the individual tools to be called in the correct order. This mode is still supported; in fact, even if you use dh, you're still using it, since all that tool really does is call the right tools in the right order; the real work is done by the individual tools. They all have their own man page; to understand how dh_install chooses which file to put in which package, go read man dh_install. Go ahead, do it now; I'll wait.
Back? Good. You've now learned that dh_install can install files either from the source directory (useful for packages containing only scripts and the like), or from a directory debian/tmp. If you use an autotools-based software package, the latter is what dh will do for you.
When you called dpkg-buildpackage above, you may also have noticed that the output contained many lines starting with dh_, one of which said dh_install. As you may have guessed, dh echoes every debhelper command just before executing it. This allows you to look at the output and see what's happening. For more detail, set the environment variable DH_VERBOSE to a non-zero value.
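For example:

DH_VERBOSE=1 dpkg-buildpackage -rfakeroot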
I'm sure you'll have seen one or two debhelper tools that could make your package better. Go ahead, go and read their man pages to see what they do, and how they do it. Most of these will require you to create a file debian/package.toolname to specify details.
In some cases, you may need to specify command-line arguments to the tool to get it to do what you want. In yet other cases, the tool won't support what you need it to do, and you'll have to do something by hand. What to do now? The dh command line doesn't support adding extra commands easily. Does this mean you'd have to revert to old-style long, non-dh debian/rules files?
Luckily, no. You can create an override target. If dh detects that you have such a target for a particular tool, it will call that target instead of the tool. This target's rules can then call the tool in question (or not), and add any command line arguments, or extra commands, as needed.
An override target is a normal Makefile target with a name of the form override_dh_something, where dh_something is the name of the tool you wish to override.
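For example, to pass an extra (made-up) flag to an autotools configure script, you could add something like this to debian/rules; dh will then run this target instead of calling dh_auto_configure directly, and dh_auto_configure passes anything after the -- on to configure:

override_dh_auto_configure:
	dh_auto_configure -- --with-some-feature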
At this point, I'd reached almost the end of my two allotted hours, and a member of the audience asked how to make dpkg deal with configuration files.
The answer is: you don't need to do anything! Not if you use debhelper, anyway; all you need to do is install the file in /etc. Since Debian policy specifies that all files in /etc need to be configuration files, and that no conffiles may be placed anywhere but in /etc, debhelper will automatically mark any file installed in /etc as a conffile, which will cause dpkg to ask the user what to do with changes to such files.
Note that in Debian parlance, a conffile is not the same thing as a configuration file: a conffile is a configuration file that is part of the binary package (i.e., if you call 'dpkg --contents' on the .deb file, you'll see it in the output), whereas a configuration file is any file used for configuration, including files generated during or after installation of the .deb file on a system.
Incidentally, this is also why some changes to configuration files are managed through debconf, while others aren't: the ones managed through debconf are non-conffile configuration files managed through ucf, whereas the others are conffiles.
So in the simplest of cases, if you want to install a configuration file and you want to make sure it's protected against accidental overwrites by package upgrades, all you need to do is make sure it's installed to /etc; debhelper will do the rest.
And that concludes this introduction. Please note it's only an introduction, not a full-blown tutorial; while this will allow you to get started, you may have to learn a bit more if you wish to eventually upload a package to Debian.
DPL vote, 2012
So, the vote is over, and Stefano won.
During many past DPL elections, I've made my vote public, and this one is no different:
V: 1223 597c362e6156ec7e37b334837161da26
That's me, in this list. Obviously I wouldn't have run if I weren't serious about it, so I voted for myself first. As to the other part: I thought long and hard about that, but eventually came to the conclusion that both Stefano and Gergely had properties as a candidate that I liked, and properties that I didn't like, and that therefore I couldn't prefer either of them over the other. I found Gergely's platform to be fairly similar to my own, which is a good thing; but there were a few details that gave me some pause about his candidacy. And while I stand by the things I said during campaigning about Stefano as a DPL, the truth is that the project could do far worse than to have him re-elected.
As to the outcome... I can't say it's entirely unexpected. I knew it was a long shot even before I started, and then campaigning didn't exactly go as I would have hoped. I expected to lose, but not by such a margin: what Stefano did wasn't winning, it's called 'trashing the opposition'. Congratulations, zack, for a truly exceptional performance; and thanks, also, to Gergely, for being a worthy opponent.
In closing, I'll say that I don't think I'll run again. I've gone through the process three times now, and have never gotten very close to winning; this probably means that what I feel about the position of DPL is somewhat removed from what the project as a whole thinks about it. So, absent some radical changes in either the project itself or in the way I look upon it, another candidacy from me is highly unlikely.
I guess I'll have to find other ways to spend my time...
Screen scraping sucks
Not so long ago, at a customer, I migrated a number of manually-maintained servers to being maintained through puppet. Since then, some more machines have been added, and getting them up and running properly was a breeze: do a base install, install puppet, sign the certificate, restart puppet, and then wait and twiddle thumbs while puppet did its magic. Easy as pie.
Now, a few months later, we needed to install a number of Windows machines for a lab (not my choice), and the person involved asked me to find some disk space so we could start creating images for those.
Not a chance.
Instead, I suggested looking for a configuration management system, similar to puppet. Since we're using Samba 3 to run the Windows network here, dropping everything in Active Directory was not an option. But a short while later, he came back with the note that puppet, in its 2.7 version, actually does support Windows as a platform for the managed machines.
Interesting.
The unfortunate bit was that puppet supports creating files, and installing software when that software is distributed as an MSI file, but not when it's distributed as a .exe file. This is not unexpected: MSI files can be installed noninteractively, but when something is distributed as a .exe file, it generally needs to be installed interactively, and puppet does not have the ability to interact with GUI software.
The workaround: use something that does have that ability (in my case, autoit), and use an exec block in puppet to make it call those scripts. In effect, that's a bit like screen scraping. Add a creates stanza to the block, so that the installer isn't started again if the software at hand has already been installed. This 'autoit' thing also comes with a recording utility, allowing one to create an initial script by just doing the installation and having the tool record everything.
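As a rough sketch (all the paths and names here are made up), such an exec block might look like this:

exec { 'install-someapp':
  command => '"C:\Program Files\AutoIt3\AutoIt3.exe" C:\scripts\install-someapp.au3',
  creates => 'C:\Program Files\SomeApp\someapp.exe',
}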
With that, the machines are installed 99% automated. I say 99%, because there are still some issues:
- Some software ships with an embedded Google Toolbar and an embedded Google Chrome. Not only do we not want that, it also makes automatic installation a Very Hard problem: if you install the software the very first time, it asks you whether you want to install Google Toolbar. If you decline, the rest of the installation is shown, and a registry key is set. If you then uninstall and reinstall, the Google Toolbar question isn't shown; instead, the Chrome question is shown. If you decline that, then the same happens with a different registry key.
  Google doesn't actually document where these keys are set, or how they can be disabled for any software that might possibly have embedded toolbars or browsers. Or, well, if it does document it, I certainly didn't find any documentation about it; the fact that Google tells me "you're running Linux, you don't need Google Toolbar, go fuck yourself" when I go to the toolbar.google.com website isn't very helpful. Anyway, the keys are under HKLM\Software\Google. You can't miss them.
- This being Windows, sometimes the machine needs to be rebooted after an installation. The Windows Installer software, msiexec.exe, signals this fact by exiting with an exit status of 3010. Unfortunately, puppet doesn't understand that, causing it to mark the installation as failed, and to retry it on the next run. That's annoying.
- Sometimes, telling the machine to postpone rebooting causes the next installation to behave slightly differently, which will cause the autoit script to fail. When that happens, I need to kill autoit, exit the installer for the failed application, and allow puppet to finish. After that, and after a reboot of the machine and restart of puppet, everything works well.
- To be able to install software, Windows may sometimes need to kill some applications, or may need to take focus. This may interfere with the lab readings these machines will be used for, so it's not an option. Instead, I've decided never to run the puppet agent as a service, and to always call it manually with the '--onetime' option (see below), so that it quits once the manifest is applied. Unfortunately, that makes it harder to update things; I'll have to walk by each and every one of the machines if I ever need to add something. I did consider adding that to a login script, but then the user logging in to the machine will most likely not have the required privileges to actually do what puppet needs done.
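The manual invocation then boils down to something like this:

puppet agent --onetime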
I'll have to think about this some more, I guess. First, it's clear that while puppet does have some Windows functionality, it's not entirely ready yet. And somehow, using autoit to add to Puppet functionality feels like an ugly hack.
We'll see what the future brings.
Switching to duckduckgo
In the late 90s, Google became popular for one reason: they had a no-nonsense front page that loaded quickly and didn't try to play with your mind. Well, at least that was my motivation for switching. The fact that they were using a revolutionary new search algorithm which changed the way you search the web had nothing to do with it, but was a nice extra.
Over the years, that small hand-written front page has morphed into something else. A behind-the-scenes look at the page shows that it's no longer the hand-written simple form of old, but something horrible that went through a minifier (read: obfuscator). Even so, a quick check against the Internet Archive's Wayback Machine shows that the size of that page has increased twenty-fold, which is a lot. But I could live with that, since at least it looked superficially similar.
Recently, however, they've changed their front page so that search-as-you-type is enabled by default. Switching that off requires you to log in. So, you have a choice between giving up your privacy by logging in before you enter a search term, or having everything you type, including any typos and things you may not have confirmed yet, sent over to a data center god knows where. Additionally, at the first character you type, the front page switches away to the results page, causing me to go "uh?!?" as I try to find where they moved my cursor to. This is annoying.
DuckDuckGo doesn't do these things; and since they also don't do things like combining my typing skills, phone contact list, calendar, and chat history to figure out that I might be interested in a date, I'm a lot more comfortable using them.
So a few days ago, I decided to switch my default search engine in Chromium to DuckDuckGo. It still feels a bit weird to be using a browser written by one search engine to search something on another; but all in all, it's been a positive experience. And the fact that Wikipedia results are shown first, followed by (maybe) one ad, followed by other search results, is refreshing.
We'll see how far this gets us.