In the last year or two we have seen virtualization go from a poorly understood concept to a much-hyped industry buzzword bandied about constantly in every conversation involving technology. There is no doubt that virtualization is playing an important role in today’s IT landscape, but the question we are asking is whether virtualization applies to the small and medium business markets at this time.
The quick answer to this question is: absolutely. Unlike many technologies that offer questionable value or that bring a degree of complication, risk and expense inappropriate for a small business, virtualization is a mature technology (IBM’s CP/CMS, circa 1968) that is well understood. It provides a layer of hardware abstraction that can benefit an IT organization of any size, and it may apply even more to the small business IT department than it does in the enterprise space.
Before looking at how virtualization can benefit the SMB market I would like to provide some definitions to be sure that we are discussing the same set of technologies. In today’s IT landscape it has become popular to relabel common technologies as “virtualization” for marketing reasons, and this has unnecessarily complicated the issue.
True virtualization refers to the virtualization of entire operating systems. Wikipedia uses the term platform virtualization and I will as well. Technically we could refer to this as “system virtualization” or “operating system virtualization” to distinguish it from loosely related technologies that arguably have some claim to the same general term.
The basic concept of platform virtualization involves running an abstraction layer on a computer that emulates the hardware itself. Through the combination of abstraction and emulation we get what is known as a virtual machine. This virtual machine is a complete, working “computer” onto which we can install an operating system just as if we were installing onto the bare metal of a dedicated machine. Instead of being limited to installing one operating system image per computer, with platform virtualization we can install many copies of the same or disparate operating systems onto the same piece of hardware. A powerful concept indeed.
The utility of this technology raises an obvious question: “If platform virtualization has been available since 1968, why is it only becoming popular and important now?” This is an excellent question, and the answer is actually quite simple.
Traditional platform virtualization technologies require a great deal of support within the computer hardware itself. IBM has been building this type of support into its mainframe systems for decades, and large UNIX vendors like Sun have been providing it in their high-end UNIX servers for years as well. These systems are highly specialized and typically run their own custom operating systems. Generally only large IT shops could afford servers of this magnitude, so small shops did not have ready access to these technologies. For IT professionals who have worked with this type of equipment, virtualization was often so ingrained into the platform that it was discussed very little; it was seen as simply an aspect of these high-end server systems and not as a concept in its own right.
What has changed recently is the move to bring platform virtualization to the commodity hardware space occupied by the AMD and Intel (x86 and x86_64) processors used by the majority of small and medium businesses as well as larger enterprises. The first step was to use software alone to make this possible on the x86 processor family. The early players in this space were VMware and Microsoft with products like VMware Workstation, Virtual PC, VMware GSX and Microsoft Virtual Server. These products showed that no special hardware was needed to effectively virtualize whole operating systems, and they allowed companies of all sizes to begin experimenting with virtualizing their existing commodity platforms. This form of virtualization is known as “host-based virtualization” because it requires a host operating system on which the virtualization environment runs.
Following on the heels of these software-only solutions, the big processor vendors in the commodity space, AMD and Intel, began building virtualization capabilities into their processors, allowing for more flexibility, security and performance and bringing the commodity x64 hardware market much more in line with the traditional offerings from the processor families common in big iron servers. Since then the virtualization market has truly exploded, both on the vendor side, as more and more vendors offer virtualization-related products, and on the customer side, as virtualization becomes better understood and its use becomes more commonplace. With their latest rounds of purchasing, most small IT shops have acquired servers, and often desktops, that support hardware-level virtualization without ever intending to prepare for a move to virtualization, which often tips the equation in that direction naturally. This hardware-supported model is called “hypervisor-based virtualization” because all operating systems run on top of a tiny kernel called the hypervisor and no traditional operating system runs directly on the hardware.
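For shops curious whether the machines they already own include this hardware support, a quick check is possible. The following is a minimal sketch, assuming a Linux machine, where the kernel reports CPU feature flags through /proc/cpuinfo; Intel advertises its VT-x extension as the “vmx” flag and AMD advertises AMD-V as “svm”.

```python
# Check whether this Linux machine's CPU advertises hardware
# virtualization support (Intel VT-x -> "vmx", AMD-V -> "svm").

def cpu_flags():
    """Collect the CPU feature flags reported by the Linux kernel."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

flags = cpu_flags()
if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization extensions detected")
```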
Now that we have a good idea of what platform virtualization is and why it is now available to us as an option, we can look at why it may be beneficial in the small and medium business space.
There are two things that we can readily virtualize (without getting esoteric or starting to virtualize our routing and switching infrastructure) – servers and desktops. By far the easier and more obvious choice is the virtualization of servers.
Virtualizing the server infrastructure, or part of it, is the first place that most IT shops look today when considering virtualization. Most companies find that the majority of their servers are extremely underutilized, with excess CPU, memory and drive capacity sitting idle while additional workloads fail to find a home due to budget constraints, space or implementation time. Virtualization to the rescue.
Through virtualization we have the opportunity to run several virtual servers on a single piece of server hardware. We could virtualize just a single server system, though this would gain us no utilization advantage, or we could, in theory, virtualize hundreds of servers if our hardware could handle it. In practice, a small business can consolidate several common server roles onto a single physical server. Virtual machine density is, of course, determined by load characteristics as well as by the available hardware. Virtualization consumes a lot of memory and storage, obviously, so careful planning is required. Memory and storage are relatively inexpensive today and are certainly vastly less expensive than purchasing additional server hardware and paying to support it. It is not uncommon for a small business to easily virtualize half a dozen servers on a single piece of hardware at a minimum, and a score or more is not an unreasonable number to hope to achieve.
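As a rough illustration of that planning, consider memory, typically the first constraint a small shop hits. The figures in the sketch below are purely illustrative assumptions, not sizing recommendations, and real density must also account for CPU, disk and peak loads.

```python
# Back-of-the-envelope virtual machine density estimate based on memory.
# All figures are illustrative assumptions, not recommendations.

HOST_RAM_GB = 32           # total RAM in the physical virtualization host
HYPERVISOR_RESERVE_GB = 2  # memory set aside for the hypervisor itself
VM_RAM_GB = 4              # memory allotted to a typical server role

usable_gb = HOST_RAM_GB - HYPERVISOR_RESERVE_GB
density = usable_gb // VM_RAM_GB

print(f"{density} virtual machines of {VM_RAM_GB} GB each fit in a {HOST_RAM_GB} GB host")
# Prints: 7 virtual machines of 4 GB each fit in a 32 GB host
```

Seven four-gigabyte guests on a single modest host lines up with the half dozen servers mentioned above, with plenty of room to grow on a larger machine.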
Many small shops instantly jump to the conclusion that virtualization requires expensive SAN storage. This is not the case at all. Virtualization provides a range of benefits that shops can take advantage of immediately, even without a SAN storage infrastructure. There are, of course, some significant advantages to be had by using a SAN in conjunction with virtualization and high availability or load balancing technologies. Often, though, these high availability and load balancing capabilities are additional features that did not exist prior to virtualization; they are not necessary for a shop to see significant benefits from virtualization, but they present an opportunity for future improvement when and if budgets allow.
Small businesses will see many advantages from virtualization immediately even doing so on a small scale. Some of these benefits are obvious and some are less so.
Our first advantage is that of hardware cost, as I mentioned above. By eliminating the need to purchase and support expensive server hardware on a per operating system basis we can now deploy more systems at a lower cost per system. In many cases this not only saves money but also frees the funds necessary to move from spartan servers to fewer, more enterprise-class offerings with important performance, stability and support features such as integrated power management and KVM over IP from an out-of-band management console.
Our second advantage is the cost savings from reduced power consumption. It is very trendy today, and for good reason, for companies to be concerned with how “green” they are, and virtualization plays a key role in the greenification of the IT department. Adding virtual machines to a single physical server typically represents a trivial, if even measurable, increase in power draw. Adding physical servers, of course, adds a significant amount of power consumption even for systems that are lightly used or used only occasionally.
Our third advantage is in reducing backup complexity. Virtualized servers can be backed up using completely traditional methods, such as file system level backups from the operating system itself, as made popular by traditional backup systems like NetBackup, BackupExec, Amanda, Bacula and others. So if we want to stick with our current backup strategy we can, without any additional complexity, but if we want to move to image-based backups we can do so quite easily. Using system images as backups is not new or unique to virtualization, but virtualization makes the approach far more obvious and accessible for many users. In fact, with virtualization a system image (a copy of the entire system, not just of its individual files) can be taken using nothing but the regular filesystem – no special software needed. A complete system backup can be taken by simply shutting down the virtual server, making a copy of its virtual disk – often a single, large file – and starting the system up again. Restoring a system can be as simple as copying an image file from a backup storage device to the virtualization server and starting it back up. Restore done. System back online. This is as simple as it gets.
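To make this concrete, here is a minimal sketch of such a whole-image backup, assuming a Linux host running a libvirt-managed hypervisor such as KVM with the libvirt Python bindings installed; the guest name “fileserver” and the file paths are hypothetical placeholders.

```python
import shutil
import time

import libvirt  # libvirt Python bindings; assumes a libvirt-managed host (e.g. KVM)

GUEST = "fileserver"                             # hypothetical guest name
DISK = "/var/lib/libvirt/images/fileserver.img"  # hypothetical disk image path
BACKUP = "/mnt/backup/fileserver.img"            # hypothetical backup target

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(GUEST)

dom.shutdown()            # ask the guest to shut down cleanly (asynchronous)
while dom.isActive():     # wait until the guest has actually stopped
    time.sleep(5)

shutil.copy2(DISK, BACKUP)  # the "backup" is just a file copy of the disk image

dom.create()              # start the guest back up
conn.close()
```

Restoring is the same copy in reverse: place the image file back where the guest expects it and start the guest again.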
Our fourth advantage is in the ease of provisioning. Building a new server operating system directly on hardware is a time-consuming venture for most shops. This is especially true if there are any surprises with a new hardware type that has not been used previously: there may be missing drivers or special operating system settings and parameters needed to support the hardware. With virtualization the target platform is always identical, removing many surprises from the process and making it both faster and more reliable. In many cases deployment is also faster simply because preparing the base machine is so much faster. To kick off a manual install of Linux on a traditional physical server I must purchase said server, install it into a rack, connect power and networking, provision networking, turn on the server, update firmware, configure the out-of-band management system, burn in the hardware, load the installation media and begin installing. From some virtualization environments I can simply kick off the entire process with a single command at the command line. Deploying a new server can go from hours or days to minutes. This does not even begin to address the simplicity of cloning existing systems within a virtual environment.
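As an illustration of that single command, one common approach on libvirt-based Linux hosts is the virt-install tool. The sketch below, with a hypothetical guest name, illustrative sizes and an example install-tree URL, launches the entire installation in one call.

```python
import subprocess

# Launch a new Linux guest in one command on a libvirt-based host with
# virt-install available. Name, sizes and install-tree URL are placeholders.
subprocess.run([
    "virt-install",
    "--name", "web01",               # hypothetical guest name
    "--ram", "2048",                 # MiB of RAM for the guest
    "--vcpus", "2",
    "--disk", "size=20",             # 20 GB disk, created automatically
    "--location", "http://mirror.example.com/centos/",  # example install tree
    "--graphics", "none",            # text-mode install over the console
    "--extra-args", "console=ttyS0", # route the installer to the serial console
], check=True)
```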
A fifth “soft” advantage of virtualization is that there are often significant software cost savings when virtualizing. Some vendors, like Novell with SUSE Linux, allow you to virtualize as many servers as you want on a single physical machine while paying for only a single machine license. Red Hat gives you multiple installs per machine, though not unlimited like Novell. Microsoft has a range of virtualization pricing options depending on your needs, including an unlimited per-processor deployment license. In the worst case you will need to pay for additional operating system and other software licenses exactly as if you were running the same machines physically, but in almost all cases there is more pricing flexibility and often a dramatic cost reduction for multiple virtualized hosts.
A sixth benefit is the ability to “roll back” an entire operating system. Most virtualization platforms support taking a system snapshot, making changes to the active system and then restoring the system to its original state when done. This is great for software testing and especially for testing operating system patches or any critical update process where something going wrong could leave your system unresponsive and potentially unrepairable. The ability to go “back in time” to the latest snapshot, taken seconds before the patch application or risky configuration change, can be a lifesaver. Of course an image backup could be used in the same way, but snapshots allow for even more rapid recovery due to their “proximity” to the original filesystem.
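How snapshots are taken varies by platform; as one concrete illustration, here is a minimal sketch using the libvirt Python bindings on a KVM-style host, with the guest name purely hypothetical.

```python
import libvirt  # assumes a libvirt-managed hypervisor such as KVM

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("erp-server")  # hypothetical guest name

# Take a snapshot seconds before the risky change (internal snapshots
# like this one assume a qcow2-backed disk image).
snap_xml = "<domainsnapshot><name>pre-patch</name></domainsnapshot>"
dom.snapshotCreateXML(snap_xml, 0)

# ... apply patches or risky configuration changes inside the guest ...

# If something goes wrong, roll the whole guest back in one call.
snapshot = dom.snapshotLookupByName("pre-patch", 0)
dom.revertToSnapshot(snapshot, 0)
conn.close()
```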
All of these aforementioned benefits come with a move to virtualization and require no additional cost for software or hardware. If our budget allows and the need exists, there is also the option of adding one or more virtualization servers and having these servers share a SAN for storage of the virtual machine images. At a minimum this will roughly triple the hardware cost while providing only double the processing power, but it enables some really amazing features. The main feature that makes this solution impressive is the concept of live migration. Live migration is when a virtual operating system is moved, while running, from one physical virtualization server to another. This can be done for purposes of load balancing, disaster testing or surviving a disaster itself. With some live migration solutions, generally sold as high availability, this migration can happen so quickly that it provides effectively “zero downtime”, and even heavily used web servers can survive the loss of a physical server without customers ever knowing that a physical server had gone down. The transition between virtual machine host nodes is completely transparent to the end users.
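To show how simple live migration can appear from the administrator’s side, here is a minimal sketch using the libvirt Python bindings, assuming two KVM-style hosts that share SAN storage for the virtual machine images; the host and guest names are hypothetical.

```python
import libvirt  # assumes two libvirt hosts sharing SAN storage for VM images

src = libvirt.open("qemu:///system")                        # this physical host
dest = libvirt.open("qemu+ssh://host2.example.com/system")  # hypothetical second host

dom = src.lookupByName("web01")  # hypothetical guest name

# Move the running guest to the other host without shutting it down.
# VIR_MIGRATE_LIVE keeps the guest executing while its memory is copied over.
dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

dest.close()
src.close()
```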
There is one major caveat. Relying upon a SAN in a disaster recovery scenario, of course, creates another point of failure – the SAN itself. So when planning to use a SAN to increase the reliability of your virtual machines, be sure the SAN is at least as redundant as the servers themselves, or you may increase cost while accidentally lowering reliability and performance.
For the average small business it is quite likely to make sense not only to virtualize some of the server infrastructure but to virtualize all or nearly all of it. Virtualization’s advantages are so many, and its downsides so few and minor, that it is a rare workload in the small business space that would justify a dedicated hardware server.
Now that we have examined why server virtualization makes sense we can begin looking at desktop virtualization. Unlike virtualized servers, virtualized desktops add a degree of complexity due to licensing requirements, especially with Microsoft Windows desktops.
Virtualizing desktops is also somewhat complicated because there are many modes of physically providing desktops. Once we begin talking about virtualizing the desktop infrastructure we are actually talking about a range of solutions, because some device must always exist “on the desktop” providing a keyboard, mouse and monitor, which cannot be virtualized, while the desktop operating system itself runs elsewhere. Even without virtualization this is done very commonly through desktop blades, rackmount desktops or terminal servers (and is sometimes marketed as virtualization when, in fact, it is simply remote access). All of these solutions move the desktop into the datacenter and provide access to it either from thin client front ends or simply via software on remote users’ existing machines, such as users at home logging in to the office.
We will start with the concept of the terminal server, as this is the most easily virtualized and the most straightforward. Whether we are talking about virtualizing the server on which we run Microsoft Terminal Server (now known as Remote Desktop Services), Citrix XenApp or simply a standard Linux remote desktop terminal server, we need do nothing more than install that server into a virtual environment rather than a physical one. It is really a question of server virtualization, not of desktop virtualization – it is only perceived by the end user as being related to their desktop.
The other method of desktop virtualization, “true desktop virtualization” as I will refer to it, is to run desktop operating system images on a virtual server just as if they were normal desktops dedicated to a user. This means virtualizing operating systems like Windows XP, Windows Vista or Windows 7, with each image dedicated to a single user just as if it were a physical desktop. We could, theoretically, do the same thing with Linux or some other flavor of UNIX, but since those systems have no per-user licensing or desktop-specific versions, and since they always run their desktops in a server mode, we could only differentiate between a true virtualized desktop and a UNIX-based terminal server by its usage, not by any strict technological means; they are one and the same. Only Windows truly offers a dedicated desktop model of this kind, without the concept of shared, simultaneous access to a single image.
Due to licensing restrictions from Microsoft, Windows desktops must be installed one image per user even though technologies exist to make this technically unnecessary, but there are still benefits to this model. The big benefits of virtualized desktops definitely go to companies whose employees roam, either internally or externally.
Using virtualized desktops provides far more control to the company than providing laptops does. Laptops can be stolen, lost or damaged; they wear out and need to be replaced regularly. A virtual desktop that is made accessible from outside the company can be secured and protected in ways that a laptop cannot be. Upgrades are much simpler, and there is no concern of the virtual desktop becoming cut off from the corporate network and unsupportable by the IT staff.
Almost any worker who uses a computer in the office already has one at home for personal use, often a laptop as well, along with high speed Internet access. Providing remote access to a virtual desktop at the office therefore potentially incurs no additional hardware expense for the company or staff while easing administrative burdens, lowering power consumption and increasing security. Some workers will always require laptops, but many will not.
For workers still sitting at a traditional desk inside the company’s offices there remains a need for something physically on the desk to connect the keyboard, mouse and monitor to the newly virtualized desktop. This could be an old PC that was slated for retirement, a dedicated hardware thin client or even a laptop. Internal staff can then move around the office or between offices, sit at any available desk with a thin client, log in to their own dedicated virtual desktop and work exactly as if they were at their own desk. They can then go home and work from there as well, if this is allowed.
Like virtualized servers, virtualized desktops can easily be backed up using either traditional means or complete system images, if the need warrants. The flexibility is there to do whatever makes the most sense in your environment.
With the complexity and surprise cost of licensing, as well as the inability to completely do away with hardware on the desktop except for solely remote users, desktop virtualization is hardly the no-brainer that server virtualization is. It requires careful analysis on a case by case basis to determine whether it will meet the cost and usability needs of the individual organization. Most organizations that choose to go this route will likely opt to virtualize only partially – using it where it makes the most sense, such as for roaming users and remote workers, while keeping traditional desktops for those users who would seldom be in a position to take advantage of this technology. Terminal server options will often be far more common than “true desktop virtualization”, which often makes sense only for power users, developers or certain applications that work poorly in a terminal server mode.
There is a final usage of virtualization that warrants discussion, if only because it is important to understand its place in the business environment. This final type of virtualization is not used to put operating systems into the datacenter on server hardware but instead to run additional operating system images on traditional desktops and laptops. This is a common scenario for people who need to test multiple operating systems for support or development. It is not useful for production systems and is generally outside the scope of this discussion; it is a highly useful application of the technology, but a niche one, primarily valuable for compatibility testing.
In all of this discussion there has been, somewhat conspicuously, no mention of Apple’s Mac OS X products. There is a reason for this. Apple does not license Mac OS X to be virtualized on non-Apple hardware, and Apple does not have an enterprise-ready virtualization product for its own platform. The only way to virtualize Mac OS X is to purchase full, additional licenses for each operating system instance, eliminating most of the cost benefits of this approach, and to run it on a host-based virtualization product such as VMware Fusion or Parallels, which are designed for use on top of a desktop and not as server-class products. This is a major gap in the Mac OS X portfolio and one of the ways in which Apple continues to lag behind the rest of the market in capability and in its understanding of its business customers’ needs. If Apple were to change its licensing strategy around virtualization, Mac OS X would prove to be an extremely popular and useful operating system to virtualize from both the server and desktop perspectives.
Virtualization is a great opportunity to lower cost and raise productivity while reducing risk for businesses of any size, even with budgets as low as zero. Many technologies promise important improvements for businesses but most deliver questionable value while incurring real cost. Virtualization brings real, measurable value while often costing nothing and often reducing spending immediately. For many businesses virtualization is the technology that they have always dreamed of, and it is, in fact, available today.