... is the new hype these days. Everyone seems to want to be part of it; even Microsoft wants to allow Docker to run on its platform. How they visualise that is slightly beyond me, seeing as Docker is mostly a case of "run a bunch of LXC instances", which by definition can't happen on Windows. Presumably they'll just run a lot more VMs, then, which is a possible workaround. Or maybe Docker for Windows will be the same in concept, but not in implementation. I guess time will tell.
As I understand the premise, the idea of Docker is that getting software to run on "all" distributions is a Hard Problem[TM], so with Docker you simply declare that this particular piece of software is meant to run on top of this and this and that environment, and Docker then compartmentalises everything for you. It should make things easier to maintain, and that's a good thing.
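To make that a bit more concrete: such a definition is, in practice, a Dockerfile of a handful of lines. The one below is only an illustrative sketch; the base image is real, but the package and application names are made-up placeholders:

FROM debian:wheezy
# declare the exact environment the application expects
# (libfoo1, libbar2 and myapp are placeholders, not real packages)
RUN apt-get update && apt-get install -y libfoo1 libbar2
COPY myapp /usr/local/bin/myapp
CMD ["/usr/local/bin/myapp"]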
I'm not a fan. If the problem that Docker tries to fix is "making software run on all platforms is hard", then Docker's "solution" is "I give up, it's not possible". That's sad. Sure, having a platform which manages your virtualisation for you, without having to manually create virtual machines (or having to write software to do so), is great. And sure, compartmentalising software so that every application runs in its own space can help with security and manageability, and brings a whole bunch of other advantages.
But having an environment which says "if you want to run this application, I'll set up a chroot with distribution X for you; if you want to run this other application, I'll set up a chroot with distribution Y for you; and if you want to run yet another application, I'll set up a chroot with distribution Z for you" will, in the end, get you into a situation where, if there's another bug in libc6 or libssl, you now have a nightmare trying to track down all the different versions in all the Docker instances to make sure they're all fixed. And while it may work perfectly well on the open Internet, if you're on a corporate network with a paranoid firewall and proxy, downloading packages from public mirrors is harder than just creating a local mirror instead. Which you now have to do not only for your local distribution of choice, but also for the distributions of choice of all the developers of the software you're trying to use. Which may result in more work than just trying to massage the software in question to actually bloody well work, dammit.
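To make that concrete: after the next libc6 or libssl advisory, the clean-up starts with an audit along the lines of the loop below. This is only a sketch, and it assumes every container is Debian-based and has dpkg available, which is exactly what you can't count on once every image picks its own distribution; RPM-based images would need their own variant.

# list the libc6/libssl versions present in each running container
for id in $(docker ps -q); do
  echo "== container $id"
  docker exec "$id" dpkg-query -W -f '${Package} ${Version}\n' libc6 libssl1.0.0 2>/dev/null \
    || echo "   (no dpkg here; check with rpm or whatever this image uses)"
done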
I'm sure Docker has a solution for some or all of the problems it introduces, and I'm not saying it doesn't work in practice. I'm sure it does fix some part of the "making software run on all platforms is hard" problem, and so I might even end up using it at some point. But from an aesthetic point of view, I don't think Docker is a good system.
I'm not very fond of giving up.
Just standardize on one distribution, adding:
FROM centos:6.6
...(or the like) to the top of the Dockerfile for each image you want to build. The Docker daemon (via aufs) will share the layers of a common base image between containers.
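For example (with made-up service names), two images built on the same base only store the centos:6.6 layers once, and every container started from either image reuses them:

# Dockerfile for service A
FROM centos:6.6
RUN yum install -y httpd

# Dockerfile for service B; the centos:6.6 layers are shared with service A
FROM centos:6.6
RUN yum install -y postgresql-server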
Sorry, my previous comment made sense to me because I had the context in my head, but I now realize it was too vague.
My idea wasn't that the specific distribution NixOS might solve the problem, but rather the way it does things. The Nix package manager, which can also run on other distributions, installs packages into folders with unique prefixes (quoting the manual, "a cryptographic hash of the package’s build dependency graph"). For instance, Firefox could be in /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1/. This way software and its dependencies can be isolated without a virtual machine or container. Domen Kozar wrote a good article about the ideas behind Nix:
https://www.domenkozar.com/2014/03/11/why-puppet-chef-ansible-arent-good-enough-and-we-can-do-better/
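(As a rough sketch of what that looks like in practice, using the store path from the example above; the hash would differ on another machine:)

$ nix-env -i firefox
$ ls -d /nix/store/*-firefox-*
/nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-33.1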
Anyway, I haven't tried it myself (yet), but it's been in the back of my mind for a while, and your blog post made it resurface.