Virtualizing Even a Single Server

In conversations about virtualization I very commonly find the concept of consolidation, which in the context of server virtualization refers to putting multiple formerly physical workloads onto a single physical box with separation handled by the virtual machine barriers, treated as the core tenet and fundamental feature of virtualization.  Without a doubt, workload consolidation represents an amazing opportunity with virtualization, but it is extremely important that the value of virtualization and the value of consolidation not be confused.  Too often I have found that consolidation is viewed as the key value of virtualization and the primary justification for it, but this is not the case.  Consolidation is a bonus feature and should never be needed to justify virtualization.  Virtualization should be a nearly foregone conclusion, while consolidation must be evaluated and many times would not be used.  That workloads should not be consolidated should never lead to the belief that those workloads should not be virtual.  I would like to explore the virtualization decision space to see how we should be looking at this.

Virtualization should be thought of as hardware abstraction, because that is, in a practical sense, truly what it is.  Virtualization encapsulates the hardware and presents a predictable, pristine hardware set to guest operating systems.  This may sound like it adds complication but, in reality, it simplifies a lot of things, both for the makers of operating systems and drivers and for IT practitioners designing systems.  It is precisely because computers, computer peripherals and operating systems are such complex beasts that this additional layer ends up removing complexity from the system by creating standard interfaces.  From standardization comes simplicity.

This exact same concept of presenting a standard, virtual machine to a software layer exists in other areas of computing as well, such as in how many programming languages are implemented; the Java Virtual Machine, for example, presents the same consistent machine to bytecode no matter what hardware sits beneath it.  This is a very mature and reliable computing model.

Hardware abstraction, and the stability that it brings, would alone be reason enough to standardize on virtualization across the board, but the practical nature of hardware abstraction as implemented by all of the enterprise virtualization products available to us today brings even more important features.  To be sure, most benefits of virtualization can be found in some other way, but rarely as completely, reliably, simply or freely as from virtualization.

The biggest set of additional features typically comes from the abstraction of storage and memory, which allows for snapshotting storage or even the entire running state of a virtual machine, that is, taking an image of the running system and storing it in a file.  This leads to many very important capabilities, such as taking a system snapshot before installing new software, changing configurations or patching, allowing for extremely rapid rollbacks should anything go wrong.  This seemingly minor feature can bring great peace of mind and overall system reliability.  It also makes testing features, rolling back and repeating tests very easy in non-production environments.
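
To make this concrete, here is a minimal sketch of the snapshot-before-patching pattern using the libvirt Python bindings against a KVM or Xen host.  The connection URI, the domain name and the caller-supplied patching step are assumptions for illustration, and Hyper-V and vSphere offer the same capability through their own checkpoint and snapshot tooling rather than this API.

```python
# Minimal sketch, not a production tool: take a full-state snapshot of a
# running guest before patching and revert if the patching step fails.
# Assumes the libvirt Python bindings against a KVM or Xen host; the
# connection URI and the domain name are placeholders.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-patch</name>
  <description>Taken automatically before patching</description>
</domainsnapshot>
"""

def patch_with_rollback(domain_name, apply_patches):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)

        # Capture disk and, for a running guest, memory state in one snapshot.
        snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)

        try:
            apply_patches(dom)  # caller-supplied patching step
        except Exception:
            # Anything went wrong: return the guest to its pre-patch state.
            dom.revertToSnapshot(snap, 0)
            raise
    finally:
        conn.close()
```

The point is not the specific API but that the rollback path is a single call rather than a rebuild.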

The ability to snapshot from the abstraction layer also leads to the ability to take “image-based backups”, that is, backups taken via the snapshot mechanism at the block device layer rather than from within the operating system’s file system layer.  This allows for operating system agnostic backup mechanisms and backups that include the entire system storage pool all at once.  Image backups allow for what were traditionally known as “bare metal restores” – the entire system can be restored to a fully running state without additional interaction – easily and very quickly.  Not all hypervisor makers include this capability, or include it to equal levels, so while it is conceptually a major feature, the extent to which it exists or is licensed must be considered on a case by case basis (notably HyperV includes this fully, XenServer includes it partially and VMware vSphere only includes it with non-free license levels.)  When available, image-based backups allow for extremely rapid recovery at speeds unthinkable with other backup methodologies.  Restoring systems in minutes, from disaster to recovery, is possible!
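
As a rough illustration of the mechanism, the sketch below performs an image-level backup of a libvirt/KVM guest with file-backed disks by reading the disk paths from the guest’s XML definition and copying them while the guest is briefly suspended.  The domain name and backup directory are placeholders, and real image-backup products use external snapshots or changed-block tracking rather than a blunt suspend-and-copy, but the principle of working below the guest’s file system is the same.

```python
# Minimal sketch of an image-level backup for a libvirt/KVM guest with
# file-backed disks: read the disk paths from the guest's XML definition
# and copy them while the guest is briefly suspended for consistency.
# The domain name and backup directory are placeholders.
import shutil
import xml.etree.ElementTree as ET

import libvirt

def image_backup(domain_name, backup_dir):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)

        dom.suspend()  # pause the guest so the disk images are consistent
        try:
            root = ET.fromstring(dom.XMLDesc(0))
            for source in root.findall("./devices/disk/source"):
                path = source.get("file")
                if path:
                    shutil.copy2(path, backup_dir)
        finally:
            dom.resume()
    finally:
        conn.close()
```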

The ability to treat virtual machines as files (at least when not actively running) provides additional benefits related to the backup benefits listed above, namely the ability to rapidly and easily migrate between physical hosts and even to move between disparate hardware.  Traditionally, hardware upgrades or replacements meant a complicated migration process fraught with peril.  With modern virtualization, moving from existing hardware to new hardware can be a reliable, non-destructive process with safe fallback options and little, or possibly even zero, downtime!  Tasks that are uncommon but were very risky previously can often become trivial today.
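
As a sketch of what “virtual machines as files” means in practice, the snippet below moves a powered-off libvirt/KVM guest to a new host by copying its disk image and registering the same XML definition on the destination.  The host names, domain name, disk path and the plain scp call are illustrative assumptions; the point is that the guest’s definition and storage are ordinary files that can simply be handed to new hardware.

```python
# Rough sketch of treating a powered-off guest as files: copy its disk image
# to a new host and register the same XML definition there. Host names, the
# domain name and the disk path are placeholders, and scp stands in for
# whatever transfer mechanism is actually appropriate.
import subprocess

import libvirt

def move_guest_definition(domain_name, old_host, new_host, disk_path):
    src = libvirt.open("qemu+ssh://%s/system" % old_host)
    dst = libvirt.open("qemu+ssh://%s/system" % new_host)
    try:
        dom = src.lookupByName(domain_name)
        xml_desc = dom.XMLDesc(0)  # the guest definition, as plain XML

        # Move the disk image between hosts (blunt, placeholder approach).
        subprocess.run(
            ["scp", "%s:%s" % (old_host, disk_path),
             "%s:%s" % (new_host, disk_path)],
            check=True,
        )

        # The new host presents the same virtual hardware, so the unmodified
        # definition can simply be registered and booted there.
        dst.defineXML(xml_desc)
    finally:
        dst.close()
        src.close()
```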

Often this is the true benefit of virtualization and abstraction mechanisms: not, necessarily, to improve the day to day operations of a system, but to reduce risk and provide flexibility and options in the future, preparing for unknowns that are either unpredictable or simply ignored in most common situations.  Rarely is such planning done at all, much to the chagrin of IT departments left with difficult and dangerous upgrades that could have been easily mitigated.

There are many features of virtualization that are applicable only to special scenarios.  Many virtualization products include live migration tools for moving running workloads between hosts, or possibly even between storage devices, without downtime.  High availability and fault tolerance options are often available, allowing some workloads to rapidly or even transparently recover from system hardware failure, moving from failed hardware to redundant hardware without user intervention.  While more of a niche benefit, and certainly not to be included in a list of why “nearly all workloads” should be virtual, it is worth noting as a primary example of features that are often available and can be added later if a need for them arises, as long as virtualization is used from the beginning.  Otherwise a migration to virtualization would be needed before such features could be leveraged.
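
For a sense of how simple the mechanics can be once a workload is already virtual, here is a minimal live migration sketch using the libvirt Python bindings.  The destination host, domain name and flag choices are assumptions for illustration, and Hyper-V, vSphere and XenServer expose equivalent moves through their own tooling.

```python
# Minimal live migration sketch with the libvirt Python bindings: move a
# running guest to another host without shutting it down. The destination
# host, domain name and flag choices are placeholders.
import libvirt

def live_migrate(domain_name, dest_host):
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://%s/system" % dest_host)
    try:
        dom = src.lookupByName(domain_name)

        flags = (libvirt.VIR_MIGRATE_LIVE            # keep the guest running
                 | libvirt.VIR_MIGRATE_PERSIST_DEST)  # define it on the new host

        # migrate() blocks until the guest is running on the destination.
        dom.migrate(dst, flags, None, None, 0)
    finally:
        dst.close()
        src.close()
```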

Virtualization products typically come with extensive additional features that only matter in certain cases; a great many of them fall into a large pool of “in case of future need.”  Possibly the biggest of all of these is the concept of consolidation, as I mentioned at the beginning of this article.  Like other advanced features such as high availability, consolidation is not a core value of virtualization but is often confused for it.  Workloads not intending to leverage high availability or consolidation should still be virtualized – without a doubt.  But these features are so potentially valuable as future options, even in scenarios where they will not be used today, that they are worth mentioning regardless.

Consolidation can be extremely valuable, and it is easy to understand why so many people simply assume it will be used, since it so often is.  Having it available once an infrastructure is in place is a key point of flexibility for handling the unknowns of future workloads.  Even when consolidation is completely unneeded today, there is a very good chance, even in the smallest of companies, that it will be useful at some unknown time in the future.  Virtualization provides us with a hedge against the unknown by preparing our systems for maximum flexibility.  One of the most important aspects of any IT decision is managing and reducing risk.  Virtualization does this.

Virtualization is about stability, flexibility, standardization, manageability and following best practices.  Every major enterprise virtualization product is available, at least in some form, for free today.  Any purchase would, of course, require a careful analysis of value versus expenditure.  However, with excellent enterprise options currently available for free from all four key product lines in this space (Xen, KVM, HyperV and VMware vSphere), we need make no such analysis.  We need only show that the implementation is a non-negative.

What makes the decision making easy is that when we consider the nominal case – the bare minimum that all enterprise virtualization provides, which is the zero cost, abstraction, encapsulation and storage based benefits – we find that we have a small benefit in effectively all cases, no measurable downsides and a very large potential benefit from the areas of flexibility and hedging against future needs.  This leaves us with a clear win and a simple decision: virtualization, being free and with essentially no downsides on its own, should be used in any case where it can be (which, at this point, is essentially all workloads.)  Additional, non-core features like consolidation and high availability should be evaluated separately and only after the decision to virtualize has already been solidified.  No lack of need for those extended features in any way suggests that virtualization should not be chosen on its own merits.

This is simply an explanation of existing industry best practice, which has been to virtualize all potential workloads for many years.  This is neither new nor a change of direction.  The very fact that across-the-board virtualization has been an industry best practice for nearly a decade shows what a proven and accepted methodology it is.  There will always be workloads that, for one reason or another, simply cannot be virtualized, but these should be very few and far between and should prompt a deep review to find out why this is the case.

When deciding whether or not to virtualize, the approach should always be to assume that virtualization is a foregone conclusion and only vary from this if a solid, defensible technical reason makes it impossible.  Nearly all arguments against virtualization come from a position of misunderstanding, with a belief that consolidation, high availability, external storage, licensing cost and other loosely related or unrelated concepts are somehow intrinsic to virtualization.  They are not and should not be included in a virtualization versus physical deployment decision.  They are separate and should be evaluated as separate options.

It is worth noting that because consolidation is not part of our decision matrix in establishing the base value of virtualization, all of the reasons given here apply equally to one to one deployments (a single virtual machine on a single physical device) and to consolidated workloads (multiple virtual machines on a single physical device.)  There is no situation in which a workload is “too small” to be virtualized.  If anything, it is the opposite: only for the largest workloads, typically those with extreme latency sensitivity, does a niche non-virtualization scenario still exist as an edge case, and even these cases are rapidly disappearing as virtualization latency and total workload capacities improve.  These cases are so rare, and vanishing so quickly, that even taking the time to mention them is probably unwise, as it suggests that exceptions based on capacity needs are common enough to evaluate for, which they are not, especially not in the SMB market.  The smaller the workload, the more ideal it is for virtualization; but this is only to reinforce that small businesses, with singular workloads, are the most ideal case for virtualization across the board rather than an exception to best practices, not to suggest that larger businesses should be looking for exceptions themselves.

6 thoughts on “Virtualizing Even a Single Server”

  1. This is a very well timed article. We are currently in the process of moving P2V and one of the questions that was brought up to me was along the lines of “what if we decide to move to virtualization a little more gently by virtualizing to the servers that we are using now?”

    I’ll be passing this one along to my boss and the CIO.

    -jjoyner1985

  2. Scott, great post as usual. I almost feel like this came after a thread on Spiceworks where you advised me on my co’s P2V scenario, coincidence?
    Anyhow, great points you highlight here. These will go great with any discussion an admin has with his boss about why to go virtual.

  3. Spot on, Scott. I am in the process of setting up a new server for a small business (35 users) and am using 2 VMs for them on their host server. Backups, imaging, snapshots, etc. – make it a no-brainer.

  4. Thanks for sharing. I like and agree with the idea of abstraction. But in a distributed environment, like placing DCs worldwide and running the VMs on top of the Windows Hyper-V role, I only see the additional effort every patch day, for example.
    Maybe it makes more sense when the VM host is running something other than Hyper-V with a lower patch frequency. But then again I have the additional effort of another management environment for the VM host instead of using the Windows tooling (SCCM) which is already in place.
    Am I getting something wrong when I say I like virtualization only in test environments and in setups where I can use live migration (Hyper-V cluster setups, for example) to avoid VM outages caused by managing the host?!

  5. @BS

    Patching of the host is certainly a concern and worth considering, especially when choosing between host platforms. By and large, hypervisors need very infrequent patches; they are very small, very light and do not have extensive “extras,” so the amount to patch is low. (The majority of HyperV’s patches come from its use cases, not from HyperV itself; you can make HyperV patching pretty rare if you want to stick with that.)

    But there are many factors to consider:

    * Patching of VM hosts is rare; is this really a concern at all? Most people patch quarterly, I believe. That’s generally not a major issue.
    * Virtualization doesn’t imply consolidation. I’ve never heard of a patching concern that wasn’t someone confusing the two. Consolidation is a separate issue entirely and creates major patching concerns of its own. Never use the caveats of consolidation to rule out virtualization; they are two distinct decisions. You should always virtualize, but only consolidate most of the time.
    * Because consolidation is kept limited, you can, if need be, align patching between the host and the guest. If you are all Windows on HyperV, you can keep patching “in sync” and only need to patch HyperV when you also need to patch Windows, doing them all together in a single effort and a single support window, effectively eliminating the concern.
    * In most environments of any size, an artifact of virtualization is that you get transparent guest migration, meaning that you can patch the firmware of the host server AND the hypervisor while workloads are shunted off to another box, allowing virtualization to actually reduce patching issues rather than increase them in those instances.
    * Virtualization makes guest patching far easier thanks to common, easy access to full system state rollbacks from snapshots taken before the patching process. This is just an awesome bonus for patching processes.
    * Virtualization dramatically reduces the power cycle time of machines that need to reboot after patching, also easing the pain of patching guests.

    Patching is well worth considering. But with proper planning around workloads, I have yet to see an edge case where virtualization introduced a patching concern. Poor planning will do this once in a while (running Windows on KVM so that you have mismatched patching is one example). But effectively all of the concerns I have seen are consolidation patching concerns, which are not related to virtualization patching concerns and are a completely different matter that should have no bearing on the decision to virtualize.
