Virtualization as an enterprise concept is almost as old as business computing itself. The value of abstracting computing from the bare hardware was recognized very early on, and almost as soon as computers had the power to manage the abstraction process, work began on implementing virtualization much as we know it today.
The earliest commonly accepted work on virtualization began in 1964 with the IBM CP-40 operating system, developed for the IBM System/360 mainframe. This was the first real foray into commercial virtualization, and the code and design from that early platform descend directly into the IBM VM platform, used continuously since 1972 as a virtualization layer for IBM's mainframe families. Since IBM first introduced virtualization, enterprise systems have adopted this pattern of hardware abstraction almost universally. Many large-scale computing systems, minicomputers and mainframes alike, moved to virtualization during the 1970s, with the bulk of the remaining enterprise systems doing so, as the power and technology became available to them, during the 1980s and 1990s.
The one notable holdout against virtualization for enterprise computing was the Intel IA32 (aka x86) platform, which lacked the hardware resources necessary to implement effective virtualization until the advent of the AMD64 64-bit extensions, and even then only with the addition of dedicated hardware virtualization support (Intel's VT-x and AMD's AMD-V). Once these were introduced, the same high performance, highly secure virtualization was available across the board on all major platforms for business computing.
Because low cost x86 platforms lacked meaningful virtualization (outside of generally low performance software virtualization and niche high performance paravirtualization platforms) until the mid-2000s, virtualization was almost completely off the table for the vast majority of small and medium businesses. This has led many dedicated to the SMB space to be unaware that virtualization is a well established, mature technology set that long ago became the de facto pattern for business server computing. The use of hardware abstraction is nearly ubiquitous in enterprise computing, with many of the largest, most stable platforms having no option, at least no officially supported option, for running systems “bare metal.”
There are specific niches where hardware abstraction through virtualization is not advised, but these are extremely rare, especially in the SMB market. Typical systems that should not be virtualized include latency sensitive systems (such as low latency trading platforms) and multi-server combined workloads such as HPC compute clusters where the primary goal is performance above stability and utility. Neither of these is common in the SMB.
Virtualization offers many advantages. In the SMB, where virtualization is less expected, it is often assumed that virtualization’s goal is consolidation, where massive cost savings can occur, or providing new ways to achieve high availability. Both are great options that can help specific organizations and situations, but neither is the underlying justification for virtualization. We can consolidate and achieve HA through other means, if necessary. Virtualization simply provides us with a great array of options in those specific areas.
Many of the uses of virtualization are artifacts of the ecosystem, such as a potential reduction in licensing costs. These types of advantages are not intrinsic to virtualization, but they do exist and cannot be overlooked in a real world evaluation. Not all benefits apply to all hypervisors or virtualization platforms, but nearly all apply across the board. Hardware abstraction is a concept, not an implementation, so how it is leveraged will vary. Conceptually, abstracting away the hardware, whether at the storage layer, the computing layer, or elsewhere, is very important as it eases management, improves reliability and speeds development.
Here are some of the benefits of virtualization. It is important to note that, outside of specific items such as consolidation and high availability, nearly all of these benefits apply even when virtualizing a single workload on a single hardware node.
- Reduced human effort and impact associated with hardware changes, breaks, modifications, expansion, etc.
- Storage encapsulation for simplified backup / restore process, even with disparate hardware targets
- Snapshotting of the entire system for change management protection (see the sketch following this list)
- Ease of archiving upon retirement or decommission
- Better monitoring capabilities, adding out-of-band management even on hardware platforms that don’t offer this natively
- Hardware agnosticism eliminates vendor lock-in, as the guest operating system sees the hypervisor, rather than the physical hardware, as its hardware
- Easy workload segmentation
- Easy consolidation while maintaining workload segmentation
- Greatly improved resource utilization
- Hardware abstraction creates a real opportunity for improved system performance and stability while lowering the demands on operating system and driver developers for guest operating systems
- Simplified deployment of new and varied workloads
- Simple transition from single platform to multi-platform hosting environments which then allow for the addition of options such as cloud deployments or high availability platform systems
- Redeployment of workloads to allow for easy physical scaling
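As a concrete illustration of the snapshot benefit above, here is a minimal sketch of a change-management workflow using the libvirt Python bindings on a KVM host. The connection URI, the guest name `app01`, the snapshot name, and the `apply_and_verify_change` helper are illustrative assumptions, not details drawn from any particular platform.

```python
# Minimal sketch: snapshot a VM before a risky change, then roll back or
# commit. Assumes a KVM host with libvirt and the libvirt-python bindings
# installed; the guest name "app01" is hypothetical.
import libvirt

def apply_and_verify_change() -> bool:
    """Hypothetical placeholder: apply the change and test the result."""
    return True

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.lookupByName("app01")        # locate the guest by name

# Snapshots are described to libvirt as small XML documents.
snapshot_xml = """
<domainsnapshot>
  <name>pre-change</name>
  <description>Taken before applying patches</description>
</domainsnapshot>
"""
snap = dom.snapshotCreateXML(snapshot_xml, 0)  # capture the system state

if apply_and_verify_change():
    snap.delete(0)                  # change verified: discard the snapshot
else:
    dom.revertToSnapshot(snap, 0)   # change failed: roll the guest back

conn.close()
```

Because the guest's disks and state are encapsulated by the hypervisor, a rollback like this behaves identically regardless of the underlying server hardware, which is the same property that simplifies backup and restore across disparate hardware targets.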
In today’s computing environments, server-side workloads should be universally virtualized for these reasons. The benefits of virtualization are substantial while the downsides are few and trivial. The two common scenarios where virtualization still needs to be avoided are situations where specialty hardware must be accessed directly on the server (very rare today, but it does still exist from time to time) and extremely low latency systems where sub-millisecond latencies are critical. The second of these is common only in extremely niche business situations such as low latency investment trading systems. Systems with these requirements will also have extraordinary networking and geolocation requirements, such as low-latency InfiniBand and a fiber run to the trading floor of less than five miles.
Some people will point out that high performance computing clusters do not use virtualization, but this is a grey area, as any form of clustering is, in fact, a form of virtualization. It is simply virtualization at the “super-system” level rather than strictly at the system level.
It is safe to assume that in any scenario in which you should not use virtualization, you will know it beyond a shadow of a doubt and will be able to empirically demonstrate why virtualization is either physically or practically impossible. For all other cases, virtualize. Virtualize if you have only one physical server, one physical workload and just one user. Virtualize if you are a Fortune 100 with the most demanding workloads. And virtualize if you are anyone in between. Size is not a factor in virtualization; we virtualize out of a desire to have a more effective and stable computing environment both today and into the future.