Total Cost of Ownership |
- Red Hat Enterprise Linux costs $2499 per server per year
- Ongoing management and maintenance of systems accounts
for 60% of TCO
- Downtime accounts for 15% of TCO
|
- Yes, Red Hat Enterprise is a pretty darn expensive
distribution. But it's important to realize that Red
Hat isn't the only distribution out there;
there are many more, and many of them come with
enterprise-class support from the company that
manages the distribution, too. Examples include Debian, SUSE, Ubuntu, CentOS, Mandriva, and
many others; a pretty comprehensive list and
comparison can be found on Distrowatch,
but note that no list can ever be complete.
In addition, there are many third-party support
companies who can give you the same type of
enterprise-class support on many GNU/Linux
distributions. In contrast to the Windows situation,
it's important to remember here that every
third-party contractor has access to the same source
code as the company or organization that created the
distribution in the first place; for that reason, the
original company has no significant advantage over
third-party support companies. Compare this to
Windows, where only Microsoft can really help with
customizing, tuning, and extending the system when
that is necessary.
- Microsoft does not specify why Windows would have
the advantage when it comes to ongoing staffing
costs, but one could guess that they think it is
easier to maintain Windows Server than it is to
maintain GNU/Linux servers (they repeat that
argument later on).
"Easy", however, is a subjective argument. What's
easy for you isn't necessarily easy for me. With
GNU/Linux, you often have the choice between many
configuration systems; there's YaST on SUSE, Debconf
on Debian, or linuxconf on some Red Hat
derivatives. There's webmin, Plone, and other
web-based things that aren't
distribution-specific. And if you prefer to
micromanage, you can manually edit configuration
files, or just roll your own configuration system
with some scripting language. With so much choice to
use what's best for yourself or for your
organization, you're pretty sure to find something
that's at least as easy to use as Windows Server, if
not easier.
Easy is also a question of experience. As an
example, the last version of Windows which I
personally used regularly was Windows 98; since
graduating from college in 2001, I haven't used
anything but Debian GNU/Linux. As a result, by now,
using GNU/Linux is far easier for me than Windows
Server is, even when we're talking about
distributions other than the one I usually use.
- Stating that downtime accounts for 15% of TCO can
only mean they intend to tell you that Windows is
more stable than GNU/Linux.
In that case, though, one could wonder why Windows
requires you to reboot to install a new driver, or
even to install some applications. This is not the
case on most other operating systems, such as MacOS,
FreeBSD, or GNU/Linux.
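To illustrate the roll-your-own end of that
configuration spectrum mentioned above: a minimal
sketch of a home-grown configuration helper. It is not
a real tool; the file path and option names are made
up for illustration, and it only assumes Python's
standard configparser module.

    #!/usr/bin/env python3
    # Tiny do-it-yourself configuration helper: read an
    # ini-style file, change one setting, write it back.
    # /etc/myapp.conf and the option names are made-up
    # examples, not a real application.
    import configparser

    CONF = "/etc/myapp.conf"

    def set_option(section, option, value):
        config = configparser.ConfigParser()
        config.read(CONF)  # a missing file yields an empty config
        if not config.has_section(section):
            config.add_section(section)
        config.set(section, option, value)
        with open(CONF, "w") as f:
            config.write(f)

    if __name__ == "__main__":
        # e.g. switch off anonymous access for our made-up service
        set_option("security", "allow_anonymous", "no")

Put a helper like this under version control, or push
it to your machines over ssh, and you have the
beginnings of a configuration system that fits your
organization exactly.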
|
Reliability |
- A reliable system isn't just available; it's also
easy to configure and manage for administrators as
requirements change.
- Windows Server comes with a set of utilities that
standardize common administration tasks to make them
easy to do; >Windows Server also comes with robust
tools that easily allow for more customized
administration
- Windows Server is the most broadly tested and
certified platform for applications and hardware
|
- This is true: if you modify a system incorrectly,
it will most likely not do what you want it to do.
However, their statement that many changes on a
GNU/Linux system will invalidate support contracts
means only one thing: that those support contracts
were wrong.
- Providing two separate sets of administration tools
to manage a system (one for scripting, and one for
daily use) is a pretty bad idea. If the standard
interface is scriptable, then a system administrator
will learn the scriptable interface while doing
common daily tasks. As a result, the barrier to
writing a script that automates repetitive, common,
daily tasks will be much, much lower; and it is in
the common and repetitive tasks that errors have the
most profound impact. If, however, the interface for
daily tasks is significantly different from the
interface used to write scripts (as is the case on
Windows Server), then anyone who wants to automate a
common and repetitive task will first have to learn
the scripting interface for the subsystem involved,
which is a time-consuming job; many people will
therefore neglect to do this, introducing errors and
reducing reliability along the way. (A sketch of
this kind of automation follows after this list.)
- Windows Server may be more broadly tested than any
other operating system out there, but that is only
useful information if the choice of certified
hardware for the alternatives were significantly
smaller. As it is, that is not the case: unless you
want to run a server on desktop-class hardware (a
choice that I would not recommend, neither for
Windows nor for GNU/Linux-based servers), Linux
certification for server hardware is available from
all the major vendors, including Dell, HP, IBM,
Fujitsu-Siemens, Sun, and many, many others. It is
therefore very easy to buy hardware, with the
necessary support contracts, for running GNU/Linux,
at competitive prices.
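As promised above, a minimal sketch of automating a
repetitive administration task (bulk user creation)
through the same command-line interface an
administrator already uses by hand. It assumes a
Debian-style system with the adduser command; the
input file name is a made-up example.

    #!/usr/bin/env python3
    # Create one account per line of a text file, using the
    # same 'adduser' command an administrator would type by
    # hand on a Debian-style system. 'newusers.txt' is a
    # hypothetical input file with one username per line.
    import subprocess
    import sys

    def create_user(name):
        # --disabled-password: no password login until one is set
        # --gecos "": skip the interactive full-name prompts
        subprocess.run(
            ["adduser", "--disabled-password", "--gecos", "", name],
            check=True,
        )

    if __name__ == "__main__":
        userlist = sys.argv[1] if len(sys.argv) > 1 else "newusers.txt"
        with open(userlist) as f:
            for line in f:
                name = line.strip()
                if name:
                    create_user(name)

Because the script drives the exact interface the
administrator already knows, writing it takes minutes,
rather than requiring a separate scripting API to be
learned first.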
|
Security |
- Empirical evidence that the "everyone can see the
code" approach to software security doesn't work for
Red Hat: the number of published vulnerabilities in
Red Hat Enterprise Linux and in Windows differed in
favour of Windows
- At Microsoft, they've developed a structured
approach to developing secure software
|
- This isn't a very strong argument against the
"everyone can see the code" approach. Microsoft
themselves admit that it is empirical evidence; as
with any empirical evidence that's based on
observations over some given time period, it might
have been the case at one point, but it's probably
just as easy to find empirical evidence that points
out the opposite.
Their specific empirical evidence involves looking
at the past, counting the number of published
vulnerabilities, and comparing the totals. This is
wrong, in many ways:
- You shouldn't be
interested in the past; rather, you should be
interested in how many vulnerabilities exist
today, and how many more will be
discovered in the future.
- Counting past vulnerabilities does not tell you
what you want to know, which is how many
vulnerabilities are still left. For example, if
product A currently has X security issues, and
product B has X+N security issues, and then
product A fixes M of them, while product B fixes
M-N of them, then suddenly product A is in a
much better position, security-wise, than
product B; it would have 2N fewer security
vulnerabilities, even though it had to fix N
extra security vulnerabilities to get there (a
small worked example follows at the end of this
section).
- The argument revolves around the idea that
Windows and GNU/Linux, by default, install the
same set of software. This is not true. For
instance, almost all GNU/Linux distributions
will include a complete development environment,
office suites, database servers, and much more;
in contrast, though it is much better than it
used to be, Windows Server still comes with much
less than the average Linux distribution. If Red
Hat sends out a patch for a security
vulnerability in one of their products, it may
be a vulnerability in one of the subsystems that
would be part of Windows Server had it been
Windows, or it might be a vulnerability in one
of the other things that are add-on capabilities
in the Windows world (all of which have their
own vulnerability statistics that are counted
separately). That's the same as saying "this car
is safer than that one, provided we disregard
tire safety on this car but not on that other
one".
Ben Laurie's statement would be quite convincing, if
we ignored the fact that a simple Google search only
turns up results at microsoft.com...
- A structured and proactive approach to security
certainly is a good idea; and given the
decentralized organization of the Open Source
world, expecting the same methodology to be used
throughout all the code in a GNU/Linux system is
unrealistic.
In recent years, however, many people have begun to
actively perform security audits on Open Source
software; it is not unreasonable to assume that the
GNU/Linux security situation has much improved
since then. Also, the "many eyes" approach does
work, as projects such as the Linux kernel actively
show each day.
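The worked example promised above, as a short Python
snippet with made-up numbers; it only restates the
arithmetic of the counting argument:

    # Vulnerability counting with made-up numbers:
    # product A starts with X open issues, product B with X+N.
    X, N, M = 100, 10, 50

    a_remaining = X - M              # A fixes M:   100 - 50 = 50
    b_remaining = (X + N) - (M - N)  # B fixes M-N: 110 - 40 = 70

    # A fixed N more issues than B, yet ends up with 2N fewer
    # still open; counting published (i.e. fixed and announced)
    # vulnerabilities would have made A look worse, not better.
    assert b_remaining - a_remaining == 2 * N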
|
Choice |
Windows offers you less choice at the operating system
level; therefore, you have more choice at the
application level. |
This is total nonsense. First, the argument readily
admits that there is less choice in the Windows world:
there are many GNU/Linux distributions out there, but
just one Windows. Second, Open Source applications on
one distribution, even when patched, will not function
significantly differently from the same applications
on another distribution, and an experienced GNU/Linux
administrator will easily switch from one distribution
to another should that ever be necessary; so
administrators can easily cope with having one server
run Red Hat, and another server run SUSE, if this is
ever required. Finally, projects such as the LSB exist
to provide a common interface shared by all
distributions, which allows third-party applications
that are not open source to run on a variety of
distributions. |
Manageability |
Managing servers involves far more than just an update
tool; Red Hat comes with yum, which is very good at
updating, but there's so much more needed |
Manageability is one of the main areas in which
distributions differentiate. If Red Hat's manageability
options aren't sufficient for your needs, then you
should look at other distributions. SUSE, for example,
comes with YaST by default, which centralizes
configuration, installation, and management in one
tool. Debian's "debconf" allows to store configuration
in a central database, to autoconfigure packages at
install time, and Debian comes with a number of tools to
help you manage hundreds or thousands of
similarly-configured computer systems (such as desktops,
or cluster nodes). If one distribution's management
tools do not do what you need them to, then you should
shop elsewhere. |
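A minimal preseeding sketch, assuming a Debian system
with the debconf-set-selections tool installed; the
package name, questions, and values below are
illustrative placeholders, not taken from a real
package:

    #!/usr/bin/env python3
    # Feed pre-answered debconf questions into the central
    # database, so packages configure themselves without
    # prompting at install time. The "myapp" package and
    # its questions are made-up placeholders.
    import subprocess

    # One line per question: "owner question type value".
    preseed = "\n".join([
        "myapp myapp/admin_email string root@example.com",
        "myapp myapp/enable_tls boolean true",
    ]) + "\n"

    subprocess.run(["debconf-set-selections"],
                   input=preseed, text=True, check=True)

Run the same script on each of those hundreds of
machines (or ship the preseed data with your
installation images), and every package installation
picks up the same answers automatically. |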
Interoperability |
Open Standards are not the same as Open Source, and
Microsoft is interoperable by design. |
Funny they should say that. Microsoft's tools are
indeed interoperable by design, but only with other
Microsoft tools. As long as you're interested in
running only Microsoft tools and operating systems
(not just now, but for all eternity), then Microsoft's
tools are indeed a very good choice,
interoperability-wise. The moment you want Microsoft
tools to talk to software from other vendors, however,
chances are pretty high that you're in trouble; and
even where Microsoft tools appear to be interoperable,
hidden issues will most likely still surface. Examples
include the minor incompatibility they introduced in
their Kerberos implementation, with the result that
connecting Windows servers to a non-Windows Kerberos
implementation is problematic at best; or the fact
that they only rarely publish specifications of the
network protocols they introduce in new products
(other than through APIs which they only develop for
their own environments), making it harder than
necessary for third parties to implement services that
can properly communicate with their systems.
That's not to say that they made it impossible to have
GNU/Linux systems interoperate properly with Windows
systems; an experienced GNU/Linux administrator can get
this done in most cases. However, they do appear to do
their best to create the sort of vendor lock-in that
only gets more expensive to escape as time passes. |