I have a number of USB hard disks. Like, I suppose, mostly everyone who reads this blog. Unlike many people who do, however, for whatever reason I decided to create LVM volumes on most of my USB hard disks. The unfortunate result is that these now contain a lot of data with a somewhat less than efficient partitioning system.
I don't really care much, but it's somewhat annoying, not least because disconnecting an LVM device isn't as easy as it used to be; originally you could just run the lvm2 init script with the stop argument, but that isn't the case anymore today. That is, you can still run it, but it won't help you, because all that does, effectively, is exit 0.
So what do you do instead? This:
- First, make sure your devices aren't mounted anymore. Note: do not use lazy umount for a device that you're going to remove from your system! I've seen a few forum posts here and there of people who think it's safe to use umount -l for a device they're about to remove from their system while it's still in use. It's not. It's a good way to cause data loss. Instead, make sure your partitions are really unmounted. Use fuser -m if you need to figure out which process is still using the partition.
- Next, use vgchange -a n. This will cause LVM to deactivate any logical volumes and volume groups that aren't open anymore. Note that this can't work if you haven't done the above. Also note that this doesn't cause the devices to be gone when you do things like vgs or so; they're still there, they're just not in use anymore. Skipping this step isn't recommended, though; it will make LVM unhappy, mostly because some caches are still in use.
- Remove your device from the computer. That is, disconnect the USB cable, or call nbd-client -d, or do whatever you need to make sure the PV isn't connected to your system anymore.
- Finally, run vgchange --refresh. This will cause the system to rescan all partitions, notice that the volume groups which you've just disconnected aren't there anymore, and remove them from configuration.
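For reference, here is the whole procedure as a single shell sketch. The volume group name, mount point, and nbd device below are hypothetical; substitute your own.

```sh
# Hypothetical names: volume group "usbvg", filesystem mounted on /mnt/usb.

# 1. Really unmount the filesystem(s) -- no lazy umount!
umount /mnt/usb
# If umount complains that the device is busy, find out which process
# is still using it:
fuser -m /mnt/usb

# 2. Deactivate the logical volumes and the volume group.
vgchange -a n usbvg

# 3. Disconnect the device: unplug the USB cable, or for a network
#    block device something like:
#    nbd-client -d /dev/nbd0

# 4. Have LVM rescan and drop the now-missing volume group.
vgchange --refresh
```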
Voila, your LVM volume group is no longer available, and you've not suffered data loss. Kewl.
Note: I don't know what the lvm2 init script used to do. I suspect there's another way which doesn't require the --refresh step. I don't think it matters all that much, though. This works, and is safe. That being said, comments are welcome...
printer-driver-postscript-hp Depends: hplip
hplip Depends: policykit-1
policykit-1 Depends: libpam-systemd
libpam-systemd Depends: systemd (= 204-14)
Since the last entry in the above chain is a versioned dependency, you can't use systemd-shim to satisfy it.
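If you want to verify such a chain on your own system, apt-cache can walk it one step at a time; a rough sketch, using the package names from the chain above:

```sh
# Follow the chain one package at a time.
apt-cache depends printer-driver-postscript-hp | grep Depends
apt-cache depends hplip | grep Depends
apt-cache depends policykit-1 | grep Depends
# Show the full (versioned) Depends field of libpam-systemd:
apt-cache show libpam-systemd | grep '^Depends'
```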
I do think we should migrate to systemd. However, it's unfortunate that this change is being rushed like this. I want to migrate my personal laptop to systemd—but not before I have the time to deal with any fallout that might result, and to make sure I can properly migrate my configuration.
Workaround (for now): hold policykit-1 at 0.105-3 rather than have it upgrade to 0.105-6. That version doesn't have a dependency on libpam-systemd.
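For completeness, holding a package can be done in either of two ways; a quick sketch, pick whichever matches your setup:

```sh
# Mark policykit-1 as held so apt won't upgrade it past 0.105-3.
apt-mark hold policykit-1
# or, the older equivalent:
echo "policykit-1 hold" | dpkg --set-selections
```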
Off-hand questions:
- Why does one need to log in to an init system? (yes, yes, it's probably a session PAM module, not an auth or password module. Still)
- What does policykit do that can't be solved with proper use of Unix domain sockets and plain old unix groups?
All this feels like another case of overengineering, like most of the *Kit thingies.
Update: so the systemd package doesn't actually cause systemd to be run; there are other packages that do that, and systemd-shim can be installed. I misread things. Of course, the package name is somewhat confusing... but that's no excuse.
Dear lazyweb,
reprepro is a great tool. I hand it some configuration and a bunch of packages, and it creates the necessary directory structure, moves the packages to the right location, and generates a (signed) Debian package repository. Obviously it would be possible to do all that reprepro does by hand—by calling things like cp and dpkg-scanpackages and gpg and other things myself—but it's easy to forget a step when doing so, and having a tool that just does things for me is wonderful. The fact that it does so only on request (i.e., when I know something has changed, rather than "once every so often") is also quite useful.
At work, I currently need to maintain a bunch of package repositories. The Debian package archives there are maintained with reprepro, but I currently maintain the RPM archives pretty much by hand: create the correct directories, copy the right files to the right places, run createrepo over the correct directories (and in the case of the OpenSUSE repository, also run gpg), and a bunch of other things specific to our local installation. As if to prove my above point, apparently I forgot to do a few things there, meaning some of the RPM repositories didn't actually work correctly, and my testing didn't catch it.
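In case it isn't clear what "by hand" means here, it's roughly something like the following, with made-up paths and without the local specifics:

```sh
# Copy the freshly built packages into the repository tree.
cp *.rpm /srv/repos/rpm/myproduct/rhel6/x86_64/

# (Re)generate the repository metadata.
createrepo /srv/repos/rpm/myproduct/rhel6/x86_64/

# For the OpenSUSE repository, the metadata also needs a detached
# signature so that zypper will trust it.
gpg --detach-sign --armor \
    /srv/repos/rpm/myproduct/opensuse/repodata/repomd.xml
```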
Which makes me wonder how RPM package repositories are usually maintained. When one needs to maintain just a bunch of packages for a number of servers, well, running createrepo manually isn't too much of a problem. When it goes beyond your own systems, however, and you need to support multiple builds for multiple versions of multiple distributions, having to maintain all those repositories by hand is probably not the best idea.
So, dear lazyweb: how do large RPM repositories maintain state of the packages, the distributions they belong to, and similar things?
Please don't say "custom scripts"
A few weeks back, I learned that some government web interfaces require users to download a PDF file, sign it with their eID, and upload the signed PDF document. On Linux, the only way to do this appeared to be to download Adobe Reader for Linux, install the eID middleware, make sure that the former would use the latter, and from there things would just work.
Except for the bit where Adobe Reader didn't exist in a 64-bit version. Since the eID middleware packages were not multiarch-ready, that meant you couldn't use Adobe Reader to create signatures with your eID card on a 64-bit Linux distribution. Which is, pretty much, "just about everything out there".
For at least the Debian packages, that has been fixed now (I still need to handle the RPM side of things, but that's for later). When I wanted to test just now if everything would work right, however...
... I noticed that Adobe no longer provides any downloads of the Linux version of Adobe Reader. They're just gone. There is an ftp.adobe.com containing some old versions, but nothing more recent than a 5.x version.
Well, I suppose that settles that, then.
Regardless, the middleware package has been split up and multiarchified, and is ready for early adopters. If you want to try it out, you should:
- run dpkg --add-architecture i386 if you haven't yet enabled multiarch
- Install the eid-archive package, as usual
- Edit /etc/apt/sources.list.d/eid.list, and enable the continuous repository (that is, remove the # at the beginning of the line)
- run dpkg-reconfigure eid-archive, so that the key for the continuous repository is enabled
- run apt-get update
- run apt-get -t continuous install eid-mw to upgrade your middleware to the version in continuous
- run apt-get -t continuous install libbeidpkcs11-0:i386 to install the 32-bit middleware version
- run your 32-bit application and sign things.
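The same steps as one shell session, for convenience; this is just the list above condensed. Run it as root and adjust as needed.

```sh
dpkg --add-architecture i386      # only if multiarch wasn't enabled yet
# install the eid-archive package as usual (not shown here)
# edit /etc/apt/sources.list.d/eid.list and uncomment the "continuous" line
dpkg-reconfigure eid-archive      # enables the key for the continuous repo
apt-get update
apt-get -t continuous install eid-mw
apt-get -t continuous install libbeidpkcs11-0:i386
```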
You should, however, note that the continuous repository is named so because it contains the results of our continuous integration system; that is, every time a commit is done to the middleware, packages in this repository are updated automatically. This means the software in the continuous repository might break. Or it might eat your firstborn. Or it might cause nasal daemons. As such, FedICT does not support these versions of the middleware. Don't try the above if you're not prepared to deal with that...
Several years ago, I blogged about how to use a Belgian electronic ID card with SSH. I never really used it myself, but was interested in figuring out if it would still work.
The good news is that since then, you don't need to recompile OpenSSH anymore to get PKCS#11 support; this is now compiled in by default.
The slightly bad news is that there will be some more typing involved. Rather than entering ssh-add -D 0 (to access the PKCS#11 certificate in slot 0), you should now enter something along the lines of ssh-add -s /usr/lib/libbeidpkcs11.so.0. This will ask for your passphrase, but it isn't necessary to enter the correct PIN code at this point in time. The first time you try to log on, you'll get a standard beid dialog box where you should enter your PIN code; this will then work. The next time, you'll be logged on and you can access servers without having to enter a PIN code.
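In concrete terms, a session then looks something like this; the library path is the one from the Debian packages and may differ on your system, and the server name is of course just an example.

```sh
# Load the eID PKCS#11 module into the running ssh-agent.
ssh-add -s /usr/lib/libbeidpkcs11.so.0

# Log in as usual; the first connection pops up the beid PIN dialog,
# subsequent ones reuse the already-unlocked card.
ssh user@example.com
```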
The worse news is that there seems to be a bug in ssh-agent, making it impossible to unload a PKCS#11 library. Doing ssh-add -D will remove your keys from the agent; the next time you try to add them again, however, ssh-agent will simply report SSH_AGENT_FAILURE. I suspect the dlopen()ed modules aren't being unloaded when the keys are removed.
Unfortunately, the same (or at least, a similar) bug appears to occur when one removes the card from the cardreader.
As such, I don't currently recommend trying to use this.
Update: fixed the command-line options in the ssh-add invocation above.