Virtualizing Even a Single Server

I find it very common, in conversations involving virtualization, for consolidation – which in the context of server virtualization refers to putting multiple formerly physical workloads onto a single physical box with separation handled by the virtual machine barriers – to be treated as the core tenet and fundamental feature of virtualization.  Without a doubt, workload consolidation represents an amazing opportunity with virtualization, but it is extremely important that the value of virtualization and the value of consolidation not be confused.  Too often I have found that consolidation is viewed as the key value in virtualization and the primary justification for it, but this is not the case.  Consolidation is a bonus feature and should never be needed when justifying virtualization.  Virtualization should be a nearly foregone conclusion, while consolidation must be evaluated and many times will not be used.  That workloads should not be consolidated should never lead to the belief that those workloads should not be virtual.  I would like to explore the virtualization decision space to see how we should be looking at this decision.

Virtualization should be thought of as hardware abstraction as that is truly what it is, in a practical sense.  Virtualization encapsulates the hardware and presents a predictable, pristine hardware set to guest operating systems.  This may sound like it adds complication but, in reality, it actually simplifies a lot of things both for the makers of operating systems and drivers as well as for IT practitioners designing systems.  It is because computers, computer peripherals and operating systems are such complex beasts that this additional layer actually ends up removing complexity from the system by creating standard interfaces.  From standardization comes simplicity.

This exact same concept of presenting a standard, virtual machine to a software layer exists in other areas of computing as well, such as in how many programming languages are implemented – the Java Virtual Machine being the best known example.  This is a very mature and reliable computing model.

Hardware abstraction, and the stability that it brings, is alone reason enough to standardize on virtualization across the board, but the practical nature of hardware abstraction as implemented by all enterprise virtualization products available to us today brings us even more important features.  To be sure, most benefits of virtualization can be found in some other way, but rarely as completely, reliably, simply or freely as from virtualization.

The biggest set of additional features typically comes from the abstraction of storage and memory, allowing for the ability to snapshot storage or even the entire running state of a virtual machine – that is, to take an image of the running system and store it in a file.  This leads to many very important capabilities, such as the ability to take a system snapshot before installing new software, changing configurations or patching, allowing for extremely rapid rollbacks should anything go wrong.  This seemingly minor feature can bring great peace of mind and overall system reliability.  It also makes feature testing, rollback and repeated testing very easy in non-production environments.
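To make this concrete, here is a minimal sketch of the snapshot-and-rollback workflow using the libvirt Python bindings, a common management API for KVM and Xen hosts.  The connection URI, domain name and snapshot name are hypothetical placeholders, and the exact mechanism varies by hypervisor.

    # Sketch: take a snapshot before patching, revert if anything goes wrong.
    # Assumes a libvirt-managed KVM host; 'app-server' is a placeholder name.
    import libvirt

    conn = libvirt.open('qemu:///system')      # connect to the local hypervisor
    dom = conn.lookupByName('app-server')

    # Capture the current state before making any changes.
    snap_xml = "<domainsnapshot><name>pre-patch</name></domainsnapshot>"
    snap = dom.snapshotCreateXML(snap_xml, 0)

    # ... install software, patch, reconfigure, test ...

    # If anything breaks, roll the entire machine back in one step.
    dom.revertToSnapshot(snap, 0)

    conn.close()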

The ability to snapshot from the abstraction layer also leads to the ability to take “image-based backups” – that is, backups taken via the snapshot mechanism at the block device layer rather than from within the operating system’s file system layer.  This allows for operating system agnostic backup mechanisms and backups that include the entire system storage pool all at once.  Image backups allow for what were traditionally known as “bare metal restores” – the entire system can be restored to a fully running state without additional interaction – easily and very quickly.  Not all hypervisor makers include this capability, or include it to equal levels, so while it is conceptually a major feature, the extent to which it exists or is licensed must be considered on a case by case basis (notably Hyper-V includes this fully, XenServer includes it partially and VMware vSphere only includes it with non-free license levels.)  When available, image-based backups allow for extremely rapid recovery at speeds unthinkable with other backup methodologies.  Restoring systems in minutes, from disaster to recovery, is possible!
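In its simplest possible form, an image-based backup is nothing more than copying the virtual machine’s definition and disk image as whole files.  The sketch below, again using the libvirt Python bindings, shows that simplest cold form with the guest shut down; real backup tools use the snapshot mechanism to do the same thing live.  All paths and the domain name are hypothetical placeholders.

    # Sketch: the simplest image-based backup - copy the VM's configuration
    # and its disk image as whole files while the guest is powered off.
    # Production tools do this live via snapshots; paths are placeholders.
    import shutil
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('app-server')

    if not dom.isActive():                     # cold backup: guest must be off
        # Save the hardware definition ("the machine") ...
        with open('/backups/app-server.xml', 'w') as f:
            f.write(dom.XMLDesc(0))
        # ... and the block device holding the entire system storage pool.
        shutil.copy2('/var/lib/libvirt/images/app-server.qcow2',
                     '/backups/app-server.qcow2')

    conn.close()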

The ability to treat virtual machines as files (at least when not actively running) provides additional benefits related to the backup benefits listed above – namely, the ability to rapidly and easily migrate between physical hosts and even to move between disparate hardware.  Traditionally, hardware upgrades or replacements meant a complicated migration process fraught with peril.  With modern virtualization, moving from existing hardware to new hardware can be a reliable, non-destructive process with safe fallback options and little or possibly even zero downtime!  Tasks that are uncommon but were very risky previously can often become trivial today.
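Because the machine is just files, a hardware migration can be sketched in a few lines.  Under the same libvirt assumptions as above (hostnames and paths are, again, hypothetical placeholders), moving an offline virtual machine amounts to copying the disk image and registering the same definition on the new host:

    # Sketch: offline migration of a VM is just moving files - copy the disk
    # image to the new host, then register the same definition there.
    import subprocess
    import libvirt

    old_host = libvirt.open('qemu:///system')
    new_host = libvirt.open('qemu+ssh://root@new-host/system')

    dom = old_host.lookupByName('app-server')
    xml = dom.XMLDesc(0)                       # the complete machine definition

    # Copy the disk image (scp for brevity; any reliable file copy works).
    subprocess.run(['scp', '/var/lib/libvirt/images/app-server.qcow2',
                    'root@new-host:/var/lib/libvirt/images/'], check=True)

    new_host.defineXML(xml)                    # same "hardware" on the new box
    old_host.close()
    new_host.close()

The old copy remains untouched until it is deliberately removed, which is exactly the safe fallback described above.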

Often this is the true benefit of virtualization and abstraction mechanisms: not, necessarily, to improve the day to day operations of a system, but to reduce risk and provide flexibility and options in the future, preparing for unknowns that are either unpredictable or simply ignored in most common situations.  Rarely is such planning done at all, much to the chagrin of IT departments left with difficult and dangerous upgrades that could have been easily mitigated.

There are many features of virtualization that are applicable only to special scenarios.  Many virtualization products include live migration tools for moving running workloads between hosts, or possibly even between storage devices, without downtime.  High availability and fault tolerance options are often available, allowing some workloads to rapidly or even transparently recover from system hardware failure, moving from failed hardware to redundant hardware without user intervention.  While more of a niche benefit, and certainly not to be included in a list of why “nearly all workloads” should be virtual, this is worth noting as a primary example of features that are often available and could be added later if a need for them arises – as long as virtualization is used from the beginning.  Otherwise a migration to virtualization would be needed before such features could be leveraged.
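As one illustration, live migration in the same libvirt sketch style used above can be a single call, assuming shared storage between the two hosts (the destination URI and domain name remain hypothetical placeholders):

    # Sketch: live migration of a running guest between two hosts that share
    # storage; the guest keeps running while its memory is copied across.
    import libvirt

    src = libvirt.open('qemu:///system')
    dst = libvirt.open('qemu+ssh://root@new-host/system')

    dom = src.lookupByName('app-server')
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

    src.close()
    dst.close()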

Virtualization products typically come with extensive additional features that only matter in certain cases.  A great many of them fall into a large pool of “in case of future need.”  Possibly the biggest of all of these is the concept of consolidation, as I mentioned at the beginning of this article.  Like other advanced features, such as high availability, consolidation is not a core value of virtualization but is often confused for it.  Workloads not intending to leverage high availability or consolidation should still be virtualized – without a doubt.  But these features are so potentially valuable as future options, even in scenarios where they will not be used today, that they are worth mentioning regardless.

Consolidation can be extremely valuable, and it is easy to understand why so many people simply assume that it will be used, as it is so often so valuable.  The availability of this option once an infrastructure is in place is a key point of flexibility for handling the unknowns of future workloads.  Even when consolidation is completely unneeded today, there is a very good chance, even in the smallest of companies, that it will be useful at some unknown time in the future.  Virtualization provides us with a hedge against the unknown by preparing our systems for maximum flexibility.  One of the most important aspects of any IT decision is managing and reducing risk.  Virtualization does this.

Virtualization is about stability, flexibility, standardization, manageability and following best practices.  Every major enterprise virtualization product is available, at least in some form, for free today.  Any purchase would, of course, require a careful analysis of value versus expenditure.  However, with excellent enterprise options available for free from all four key product lines in this space currently (Xen, KVM, Hyper-V and VMware vSphere) we need make no such analysis.  We need only show that the implementation is a non-negative.

What makes the decision making easy is that when we consider the nominal case – the bare minimum that all enterprise virtualization provides, which is the zero cost, abstraction, encapsulation and storage-based benefits – we find that we have a small benefit in effectively all cases, no measurable downsides and a very large potential benefit in the areas of flexibility and hedging against future needs.  This leaves us with a clear win and a simple decision: virtualization, being free and with essentially no downsides on its own, should be used in any case where it can be (which, at this point, is essentially all workloads.)  Additional, non-core features like consolidation and high availability should be evaluated separately, and only after the decision to virtualize has already been solidified.  A lack of need for those extended features in no way suggests that virtualization should not be chosen on its own merits.

This is simply an explanation of existing industry best practices, which have been to virtualize all potential workloads for many years.  This is neither new nor a change of direction.  The very fact that across-the-board virtualization has been an industry best practice for nearly a decade shows what a proven and accepted methodology this is.  There will always be workloads that, for one reason or another, simply cannot be virtualized, but these should be very few and far between and should prompt a deep review to find out why this is the case.

When deciding whether or not to virtualize, the approach should always be to assume that virtualization is a foregone conclusion and only vary from this if a solid, defensible technical reason makes it impossible.  Nearly all arguments against virtualization come from a position of misunderstanding, with a belief that consolidation, high availability, external storage, licensing cost and other loosely related or unrelated concepts are somehow intrinsic to virtualization.  They are not and should not be included in a virtualization versus physical deployment decision.  They are separate and should be evaluated as separate options.

It is worth noting that because consolidation is not part of our decision matrix in establishing the base value of virtualization, all of the reasons that we are using apply equally to one-to-one deployments (that is, a single virtual machine on a single physical device) and to consolidated workloads (that is, multiple virtual machines on a single physical device.)  There is no situation in which a workload is “too small” to be virtualized.  If anything, it is the opposite: only for the largest workloads, typically those with extreme latency sensitivity, does a niche scenario of non-virtualization still exist as an edge case, and even these cases are rapidly disappearing as virtualization latency and total workload capacities improve.  These cases are so rare, and vanishing so quickly, that even taking the time to mention them is probably unwise, as it suggests that exceptions based on capacity needs are common enough to evaluate for, which they are not, especially in the SMB market.  The smaller the workload, the more ideal it is for virtualization – but this is only to reinforce that small businesses, with singular workloads, are the most ideal case for virtualization across the board rather than an exception to best practices, not to suggest that larger businesses should be looking for exceptions themselves.

You Are Not Special

It is not my intention for this to sound harsh, but I think that it has to be said: “You are not special.”  And by “you” here, of course, I mean your business – the organization that you, as an IT practitioner, support.  For decades we have heard complaints about how modern education systems attempt to make every student feel unique and special; when awards are given out, schools attempt to find a way, especially with elementary students, to make sure that every student gets an award of some sort.  Awards for best attendance, posture, being quiet in class or whatever are created to reward completely irrelevant things in order to make every student not only feel like part of the group, but like a special, unique individual who has accomplished something better than anyone else.

This attitude, this belief that everyone is special and that all of those statistics, general rules and best practices apply to “someone else,” has become pervasive in IT as well, manifesting itself in the belief that each business, each company, is so special and unique that IT industry knowledge does not apply to its situation.  IT practitioners with whom I have spoken almost always agree that best practices and accumulated industry knowledge are good and apply in nearly every case – except for their own.  All of those rules of thumb, all of those guidelines, are great for someone else, but not for them.  The problem is that nearly everyone feels this way, but this cannot be the case.

I have found this problem to be most pronounced and, in fact, almost exclusive to the small business market where, in theory, the likelihood of a company being highly unique is actually much lower than in the large enterprise space of the Fortune 100, where uniqueness is somewhat expected.  But instead of small businesses assuming uniformity and enormous businesses expecting uniqueness, the opposite appears to happen.  Large businesses understand that even at massive scale IT problems are mostly standard patterns and by and large should be solved using tried and true, normal approaches.  Small businesses, meanwhile, seemingly driven by an emotional need to be “special,” claim a need to avoid industry patterns, often eschewing valuable knowledge to a ludicrous degree – and often while conforming to the most textbook example of the use case for the pattern.  It almost seems, from my experience, that the more “textbook” a small business is, the more likely its IT department is to avoid solutions designed exactly for it and attempt to reinvent the wheel at any cost.

Common solutions and practices apply to the majority of businesses and workloads, easily in excess of 99.9% of them.  Even in larger companies where there is opportunity for uniqueness, we expect to see only rare workloads that fall into a unique category.  Even in the world’s largest businesses the average workload is, well, average.  Large enterprises with tens of thousands of servers and workloads often find themselves with a handful of very unique situations for which there is no industry standard to rely on.  But even so, they have many thousands of very standard workloads that are not special in any way.  The smaller the business, the less opportunity there is for a unique workload – and the less chance of one occurring at all, simply because there are so many fewer workloads.

One of the reasons that small businesses, even ones very unique as small businesses go, are rarely actually unique is that when a small business has an extreme need for, say, performance, capacity, scale or security, it almost never means that it needs that thing in excess of existing standards for larger businesses.  The standards of how to deal with large data sets or extreme security, for example, are already well established in the industry at large, and small businesses need only leverage the knowledge and practices developed for larger players.

What is surprising is when a small business with relatively trivial revenue believes that its data requires a level of secrecy and security in excess of the security standards of the world’s top financial institutions, military organizations, governments, hospitals or nuclear power facilities.  What makes the situation more absurd is that in pursuing these extremes of security, small businesses almost always end up with very low security standards.  They often cite needs for “extreme security” to justify insecure or, as we often say, “tin foil hat” procedures.

Security is one area where this behavior is very pronounced.  Often it is small business owners or small business IT “managers” who create this distrust of industry standards, not IT practitioners themselves, although the feeling that a business is unique often trickles down and is seen there as well.

Similar to security, unlimited uptime and highly available systems, rarely needed even for high end enterprise workloads, seem an almost ubiquitous goal in small businesses.  Small businesses often spend orders of magnitude more money, relative to revenue, on procuring high availability systems than their larger business counterparts.  Often this is done with the mistaken belief that large businesses always use high availability and that small businesses must do so to compete – that if they do not, they are not a viable business, or that any downtime equates to business collapse.  None of these are true.  Enterprises have a far lower cost of reliability compared to revenue and still do considerable cost analysis to see what reliability expenditures are justified by risk.  Small businesses rarely do that best practice analysis and jump, almost universally, to the very unlikely belief that their workloads are dramatically more valuable than even the largest enterprises’ and that they have no means of mitigating downtime.  Eschewing business best practices (doing careful cost and risk analysis before investing in risk mitigation), financial best practices (erring on the side of up front cost savings) and technology best practices (high availability only when needed and justified) leaves many businesses operating from the belief that they are “special” and that none of the normal rules apply to them.

By approaching all technology needs from the assumption of being special, businesses that do this are unable to leverage the vast, accumulated knowledge of the industry.  This means that they are continuously reinventing the wheel and attempting to forge new paths where well trodden, safe paths already exist.  Not only can this result in an extreme degree of overspending in some cases and in dangerous risk in others, but it effectively guarantees that the cost of any project is unnecessarily high.  Small businesses, especially, have the extreme advantage of being able to leverage the research and experience of larger businesses, allowing them to be more agile and lean.  This is a key component of making small businesses competitive against the advantages of scale inherent to large businesses.  When small businesses ignore this advantage they are left with neither the scale of big business nor the advantages of being small.

There is no simple solution here – small business IT practitioners and small business managers need to step down from their pedestals and take a long, hard look at their companies and ask if they really are unique and special or if they are normal businesses with normal needs.  I guarantee you are not the first to face the problems that you have.  If there isn’t a standard solution approach available already, then perhaps the approach to the problem is itself wrong.  Take a step back and evaluate with an eye to understanding that many businesses share common problems and can tackle them effectively using standard patterns, approaches and, often, best practices.  If your immediate reaction to best practices, patterns and industry knowledge is “yes, but that doesn’t apply here,” you need to stop and reevaluate – because yes, it certainly does apply to you.  It is almost certainly true that you have misunderstood the uniqueness of your business or misunderstood how the guidance applies, resulting in the feeling that those guidelines are not applicable.  Even those rare businesses with very unique workloads have them for only a small number of their workloads, not the majority; the most extremely unique businesses and organizations still have many common workloads.

Patterns and best practices are our friends and allies, our trusted partners in IT.  IT, and business in general, is challenging and complex.  To excel as IT practitioners we can seek to stand on the shoulders of giants, walk the paths that have been mapped and trodden for us and leverage the work of others to make our solutions as stable, predictable and supportable as possible.  This allows us to provide maximum value to the businesses that we support.