
Virtualization as a Standard Pattern

Virtualization as an enterprise concept is almost as old as business computing itself.  The value of abstracting computing from the bare hardware was recognized very early on and, almost as soon as computers had the power to manage the abstraction process, work began on implementing virtualization much as we know it today.

The earliest commonly accepted work on virtualization began in 1964 with the IBM CP-40 operating system, developed for the IBM System/360 mainframe.  This was the first real foray into commercial virtualization, and the code and design from this early virtualization platform have descended into today’s IBM VM platform, which has been used continuously since 1972 as a virtualization layer for the IBM mainframe families.  Since IBM first introduced virtualization we have seen enterprise systems adopt this pattern of hardware abstraction almost universally.  Many large scale computing systems, minicomputers and mainframes, moved to virtualization during the 1970s with the bulk of all remaining enterprise systems doing so, as the power and technology became available to them, during the 1980s and 1990s.

The only notable holdout from virtualization for enterprise computing was the Intel IA32 (aka x86) platform, which lacked the hardware resources necessary to implement effective virtualization until the advent of the extended AMD64 64-bit platform, and even then only with the addition of specific new technology: the hardware virtualization extensions known as Intel VT-x and AMD-V.  Once these were introduced, the same high performance, highly secure virtualization was available across the board on all major platforms for business computing.

Because low cost x86 platforms lacked meaningful virtualization (outside of generally low performance software virtualization and niche high performance paravirtualization platforms) until the mid-2000s, virtualization was almost completely off the table for the vast majority of small and medium businesses.  This has led many dedicated to the SMB space to be unaware that virtualization is a well established, mature technology set that long ago established itself as the de facto pattern for business server computing.  The use of hardware abstraction is nearly ubiquitous in enterprise computing, with many of the largest, most stable platforms having no option, at least no officially supported option, for running systems “bare metal.”

There are specific niches where hardware abstraction through virtualization is not advised, but these are extremely rare, especially in the SMB market.  Typical systems that should not be virtualized include latency sensitive systems (such as low latency trading platforms) and multi-server combined workloads such as HPC compute clusters where the primary goal is performance above stability and utility.  Neither of these is common in the SMB.

Virtualization offers many advantages.  Often, in the SMB where virtualization is less expected, it is assumed that virtualization’s goal is either consolidation, where massive cost savings can occur, or providing new ways to achieve high availability.  Both of these are great options that can help specific organizations and situations, but neither is the underlying justification for virtualization.  We can consolidate and achieve HA through other means, if necessary.  Virtualization simply provides us with a great array of options in those specific areas.

Many of the uses of virtualization are artifacts of the ecosystem, such as a potential reduction in licensing costs.  These types of advantages are not intrinsic to virtualization but do exist and cannot be overlooked in a real world evaluation.  Not all benefits apply to all hypervisors or virtualization platforms, but nearly all apply across the board.  Hardware abstraction is a concept, not an implementation, so how it is leveraged will vary.  Conceptually, abstracting away hardware, whether at the storage layer, the computing layer or elsewhere, is very important as it eases management, improves reliability and speeds development.

Here are some of the benefits of virtualization.  It is important to note that outside of specific things such as consolidation and high availability, nearly all of these benefits apply not only to virtualizing many workloads on a single hardware node but even to a single workload on that node.

  1. Reduced human effort and impact associated with hardware changes, breaks, modifications, expansion, etc.
  2. Storage encapsulation for simplified backup / restore process, even with disparate hardware targets
  3. Snapshotting of entire system for change management protection
  4. Ease of archiving upon retirement or decommission
  5. Better monitoring capabilities, adding out of band management even on hardware platforms that don’t offer this natively
  6. Hardware agnosticism provides freedom from vendor lock-in, as the guest operating systems see the hypervisor as their hardware rather than the physical hardware itself
  7. Easy workload segmentation
  8. Easy consolidation while maintaining workload segmentation
  9. Greatly improved resource utilization
  10. Hardware abstraction creates a significant opportunity for improved system performance and stability while lowering the demands on the operating system and driver writers for guest operating systems
  11. Simplified deployment of new and varied workloads
  12. Simple transition from single platform to multi-platform hosting environments which then allow for the addition of options such as cloud deployments or high availability platform systems
  13. Redeployment of workloads to allow for easy physical scaling

In today’s computing environments, server-side workloads should be universally virtualized for these reasons.  The benefits of virtualization are extensive while the downsides are few and trivial.  The two common scenarios where virtualization still needs to be avoided are situations where there is specialty hardware that must be used directly on the server (this has become very rare today, but does still exist from time to time) and extremely low latency systems where sub-millisecond latencies are critical.  The second of these is common only in extremely niche business situations such as low latency investment trading systems.  Systems with these requirements will also have extreme networking and geolocational requirements, such as low-latency InfiniBand and fiber runs of less than five miles to the trading floor.

Some people will point out that high performance computing clusters do not use virtualization, but this is a grey area as any form of clustering is, in fact, a form of virtualization.  It is simply that this is a “super-system” level of virtualization instead of being strictly at the system level.

It is safe to assume that in any scenario where you should not use virtualization, you will know it beyond a shadow of a doubt and will be able to empirically demonstrate why virtualization is either physically or practically impossible.  For all other cases, virtualize.  Virtualize if you have only one physical server, one physical workload and just one user.  Virtualize if you are a Fortune 100 with the most demanding workloads.  And virtualize if you are anyone in between.  Size is not a factor in virtualization; we virtualize out of a desire to have a more effective and stable computing environment both today and into the future.


State of Thin Clients

The IT world loves to swing back and forth between moving processing out to the user via fat clients and moving processing back to the server, leaving users with thin clients.  The battle is a long running one that started with the first appearance of multiuser computer systems several decades ago, has continued to this day and will likely continue for a very long time to come.

When I began working in IT, thin clients were simple text terminals attached to a single, central server via serial connections.  Limited to very basic text input, these served their purpose at the time: providing relatively low cost computing to a large number of users.  The system wasn’t pretty or glamorous, but it was quite functional.

These ancient terminals gave way to the personal computer and computing power shifted from the datacenter to the desktop allowing users to run powerful apps like Lotus 1-2-3 and WordPerfect.  Responsive graphical applications were a powerful draw for decentralized processing.  Users were enthralled with the new usability.  The text terminal went into very rapid decline.

Eventually centralized power was available in such quantities and at such a low price point that graphical applications could be run with almost as much responsiveness from the server while clients could be “thin,” needing just a shim of an operating system – enough to provide remote access back to the server.  Thin computing became the darling of the industry again, the term itself arose and moving towards centralized processing came back into vogue.

Administrators love the central computing model because data and configuration remains in one place.  Backups and management are a breeze.  The idea, at least in theory, is that in doing so desktop support becomes a non-issue with all desktop clients being nothing more than commodity components that can be replaced anytime with completely interchangeable parts.  Since nothing is stored or configured on the desktop there is nothing to support there.

In the initial swings of the “thin computing pendulum” the market movement was dramatic.  When text terminal computing first became available this was practically the only model used in the real world.  The value was so dramatic that no one could really justify doing anything else.  When the PC was introduced the movement to the fat client was so ubiquitous that many younger IT professionals today have never actually seen text terminals in use, even though the move to fat “PC” clients was not as all-encompassing as the move to text terminals had been one pendulum swing previously.

The PC model was generally better for end users because it mimicked how they used computers at home – those that had computers at home.  It also gave them more options for customization and, for better or for worse, opportunity for them to begin installing software of their own rather than only that software preconfigured for them on the central server.

Over time there have been a lot of developments from both camps giving each more and more advantages of the other.  Central domain services such as Microsoft’s Active Directory have come along allowing central management to extend out to fat clients bringing control and management more in line with traditional thin computing models.  Likewise, companies like Citrix have worked very hard developing new technologies that allow thin clients to perform much more like robust fat clients making their use as seamless as possible for end users and even making offline use possible for laptop users.

Most shops today have adopted hybrid models: fat clients where they make sense and thin clients for certain categories of users, for remote workers and for continuity of business scenarios.

Over the past decade we have seen a shift in the way that business applications are created and deployed.  Today almost all business applications are web-based and have no client platform dependency.  This affords the IT departments of today a potential new opportunity – to shift from a traditional thin client platform, which requires remote graphical access, to the browser as the new thin client platform.

The move to web apps has happened slowly, and most businesses have a rather large legacy codebase on which they are quite dependent that cannot be easily transferred to the new web app architecture; some apps simply are not good candidates for this architecture at all.  But by and large the majority of new business applications are web based, written most often in Java or .NET, and these apps are prime candidates for a new thin computing model.

If our custom business apps are available via the browser then our only commonly used apps that remain holding us back are the traditional productivity apps such as our office suites that are widely used by nearly all staff today (if they have a computer at all.)  Very few desktop apps are actually pervasive except for these.  Increasingly we are seeing browser-based alternatives to the traditional office suites.  Everyone is very aware of Google Apps as a pioneer in this area with Microsoft now offering online MS Office as well.  But the popular offerings making consumer news headlines require businesses to totally rethink long term strategies involving keeping critical business data within their walls and are not likely to be highly disruptive to the enterprise for quite some time.

What does pose a threat to the status quo is other alternative software products such as ThinkFree office which is installed within the organization and used and secured internally just like any other normal business application.  This category of “traditionally installed internal web applications” will allow enterprise IT departments to begin to reconsider their end users’ platforms without having to reevaluate their entire concept of IT in general.  The biggest barriers to this today are lingering business applications and power users using specific desktop apps that cannot be encapsulated within a browser.

One of the great advantages, however, of the browser as the new thin client is how simple it is to mix browser-based apps with traditional apps.  The move is transparent and most large businesses are moving in this direction today even if there is no overarching strategy to do so.  The market momentum to develop all new apps for the web is causing this to happen naturally.

Another key advantage of a completely “web based” architectural model is the great ease with which it can be exposed for users outside of the corporate network.  Instead of using cumbersome VPN clients and company laptops employees can find any web browser, sign in to the company network and have secure business applications delivered to any browser, anywhere.

Bringing this almost unnoticed shift into sharp relief today are a handful of, of all things, consumer devices such as Apple’s iPhone and iPad and Google’s Android and ChromeOS platforms.  What all of these devices have in common is a focus upon being primarily thin web appliances – thin clients for consumers.  With the majority of consumer computing focused upon web connectivity, the need for anything else from a platform is nearly non-existent in the consumer market.  This means that within a very short period of time, users who once brought their home PC experience to the office as their expectation of a computing environment will begin to bring web-based thin computing as their new expectation instead.

When this shift happens IT departments will need to rethink their internal application delivery strategy.  The change doesn’t have to be dramatic if current development trends are used commonly and legacy systems are routinely updated.  In fact, one of the great benefits of this new model is that traditional fat clients function very well as browser platforms and will most likely do so for a very long time to come.  Companies adopting this model will likely be able to slow desktop purchasing cycles and prepare for purchasing some form of traditional thin client with an embedded browser, or move to a business version of the new nettop trend that we are beginning to see emerge in the consumer space.  Some businesses may even attempt the rather dangerous path of using consumer devices, but the lack of management and security features will likely keep this from being popular in all but rare instances.

I believe, though, that this swing of the pendulum will not be as dramatic as the last one just as it was not as dramatic as the swing before that.  It will be an important trend but IT departments understand more and more that no new technological shift is a silver bullet and that with each new opportunity comes new challenges.  Most IT departments will need to implement some degree of browser-based thin computing over the next few years but most will retain a majority user base of fat clients.  Hybrid environments, like we’ve seen for many years with more traditional models, will continue as before with each technology being used in target areas where they make the most sense.

The one area where thin clients continue to be challenged the most is in mobile computing where disconnected users end up being digitally marooned away from their company networks unable to continue working until network connectivity is reestablished.  This is a significant issue for power users who must travel extensively and need to be able to continue working regardless of their current connectivity.  Today this is being solved in the traditional thin client arena thanks to companies like Citrix who continue to advance the state of the art in thin application delivery.

In the browser-based arena we have had to turn to technologies like Google Gears and Adobe AIR in the past to make this possible but these had poor market penetration.  Coming down the pike, however, is the new HTML 5 Offline API which is set to redefine how the web works for users who need to go “off the grid” from time to time.  With HTML 5 incorporating offline capabilities and a richer feature set into the specification for the web itself we expect to see broad and rapid adoption from all of the leading vendors – most likely even before the draft standard is finalized.  While still quite some ways away this new standard will certainly lay the groundwork for a significant shift towards the browser as a ubiquitous, standard and robust platform.

The future of thin computing looks to be incredibly promising both in the enterprise and, for the first time, in the consumer arena.  Adoption of thin computing models will be spurred on by the current movement towards Software as a Service models, and SaaS adoption will continue to be encouraged by the widespread presence of thin computing devices.  In many ways browser-based thin computing represents the technology aspect that is now maturing in the SaaS arena, where SaaS itself is maturing in social acceptance rather than technical feasibility.

Linux Virtualization Deployment Advantage

As more and more businesses begin to deploy virtualization broadly, we must step back and reconsider the opportunities presented to us by this shift in datacenter architecture.  Virtualization comes with new challenges and potential not only for cost savings but for aggressive project implementation.  Small businesses especially, when using virtualization, find themselves prepared for projects that they could never have envisioned during the era of physical-only servers.

The big winners in this space of emerging virtualization opportunity are the open source operating systems such as Linux, OpenSolaris and FreeBSD.  The reason that these particular operating systems have unique opportunities that Windows and Mac OSX do not is because of the way that they are, or can be, licensed.  Each of these operating systems has an option by which they are available completely for free – something that cannot be done with Windows or Mac OSX.

Traditionally, when purchasing a new server a business would price out expensive hardware with relatively inexpensive software.  An enterprise operating system, such as Windows, would typically represent a relatively small percentage of the cost of a new server.  Even a small server would cost a few thousand dollars and Windows Server can easily be purchased for less than one thousand dollars.  In this scenario a business looking to purchase a new server would see only a very small cost savings in opting for a “free” operating system since introducing a new OS has its own risks and the bulk of the cost of the new server is in the hardware which would still need to be purchased.

Given that equation, only a rare small business would consider the purchase of a non-Windows-based server.  The opportunity for failure is too high given the risk of change and the cost savings are too small.  Today, though, virtualization is commonplace and becoming more ubiquitous every day.  Businesses virtualizing their infrastructure typically have excess capacity on their servers that is going unused.  As these businesses and their IT departments begin to look to utilize this spare capacity they will increasingly find that the cost of deploying virtualized Windows Server remains high while the cost of deploying a virtualized Linux or OpenSolaris server is nominal – generally nothing more than the effort to do so without any capital expenditure or its associated risk.

The ability to deploy new servers, at any time, without any cost is a significant advantage that companies have not begun to truly comprehend.  If a business wants a new web server, for instance, they can have one provisioned and built in thirty minutes without buying any licenses.  Having redundant virtualization hardware means that a redundant web server can be had as well – again without any capital cost.  Unlike Windows (or other commercial operating systems) there is no need to purchase a second license just to have a backup server.

This means that for the first time many businesses can begin to consider clusters as well.  Typically the cost of licensing software for clustering was prohibitive but if that licensing becomes free then suddenly clusters become very attractive options.

Of course, as open source proponents will point out, the low cost of Linux and other free and open source solutions has long been a reason to move to these platforms, but this discounts the incredible shift in pricing structure that occurs only when spare usable capacity meets the previously existing free licenses.  It is only because so many businesses have already implemented virtualization strategies, or are in the process of doing so, that this new opportunity truly presents itself.

The first challenge will be in getting businesses to begin to think of operating systems and application platforms as being free.  The ways in which businesses may take advantage of this have yet to be seen.  Businesses are so used to being hamstrung by the need to buy new hardware and expensive server software licenses for every new system deployment that the widespread availability of spare server images is quite novel indeed.

Of course, as with many new technology changes, it is the small and medium business space where the greatest change will likely take place.  Large enterprises are already doing datacenter consolidation and do not necessarily have spare capacity available to them as their capacity plan already takes into account virtualization.  But in the smaller business space where capacity planning is a practically non-existent practice we see a different type of opportunity.

What we typically see in small businesses moving to virtualization is an over-purchasing of hardware.  This generally comes from a misunderstanding of how capacity planning and virtual guest interaction will occur in the virtualized environment but also from a desire to err on the side of overpowered versus underpowered and the nature of virtualization capacity planning being a bit of a “black art”.  Because of this, however, many small businesses have server resources sitting idle.  It is not uncommon to see a powerful server virtualizing just two server instances when there is capacity to virtualize a dozen or more.

It is this overprovisioning of hardware that offers unique opportunity.  Many small businesses, and even medium sized businesses, may manage to effectively virtualize their entire existing server infrastructure leaving no further opportunity for cost savings through consolidation.  At this point the spare capacity of the existing servers offers no further cost savings and can now be viewed as capacity for growth instead.

This raises the question: what new deployment opportunities exist given this spare capacity?  This question is difficult to answer as it will be different for nearly every business, but we can look at some commonalities to build a rough picture of where we may see new value presenting itself.

The most obvious new opportunity is in new web applications.  Small businesses often would like to take advantage of free web-based applications but do not want to risk deploying new, low-priority applications to their existing Windows-based web server or do not even have a server available to do so.  Creating one or more open source application servers is incredibly simple.  Deploying a wiki, corporate web portal, a blogging engine or news site, bug or incident tracking application, microblogging platform (a la laconi.ca), CRM, ERP or any of thousands of similar applications can be done quickly and easily with minimal cost using only “spare” time from the existing IT resources.  Any number of internal applications such as these could bring value to the company and produce very little impact on a virtualization platform, so many could be deployed utilizing only a small amount of excess capacity.

Beyond obvious web apps there are more feature-rich systems that could be deployed for no cost.  A great example is the OpenFire instant messaging and presence server.  Companies can suddenly roll out complete enterprise class, secure, internal instant messaging applications at no cost whatsoever.  Another example is in monitoring systems such as Nagios, Zenoss or Zabbix – all of which are available for free and represent a real benefit for companies that currently have no such system.  Enterprise monitoring completely for free.

Beyond new applications there is also an “environmental” benefit to be had.  In an enterprise environment, changes going into production go through a series of testing stages.  Typically big businesses will maintain a development server environment, a user acceptance testing environment and then the production environment.  For a small business to do this with Windows is extremely cost prohibitive as the servers in each environment need to be licensed.  But with open source servers virtualized on spare capacity, deploying virtual servers for each of these environments is completely free, allowing small businesses to test their own processes before making production changes and giving them added stability that was previously unaffordable to them.
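
As a rough illustration, consider the sketch below, which could stand up development and user acceptance testing copies of an existing guest on a libvirt/KVM host using virt-clone.  The guest name “erp-prod” and the clone names are assumptions made purely for illustration, and the source guest would normally be shut down before cloning.

    # A minimal sketch of spinning up dev and UAT copies of an existing guest
    # on a libvirt/KVM host using virt-clone.  The guest and clone names are
    # illustrative assumptions; shut the source guest down before cloning.
    import subprocess

    def clone_guest(source: str, target: str) -> None:
        """Clone an existing libvirt guest, letting virt-clone pick new storage paths."""
        subprocess.run(
            ["virt-clone", "--original", source, "--name", target, "--auto-clone"],
            check=True,
        )

    if __name__ == "__main__":
        for environment in ("dev", "uat"):
            clone_guest("erp-prod", f"erp-{environment}")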

After all of these growth benefits there is one additional benefit to consider – flexibility.  Because these new systems can be deployed and tested with no cost it provides a new opportunity for small shops to deploy open source solutions that may replace expensive Windows solutions that they are currently using.  This could include replacing Exchange with Zimbra or replacing IIS with Apache or Active Directory with an LDAP server.  Doing a project like this would be risky and potentially costly if the hardware and software had to be purchased up front.  But if the project can be done, only using free time from the existing IT department, and can be done as a free “proof of concept” before looking to do a pilot and then full production replacement then risk can be minimized and the entire project can be effectively free.

While a full architectural replacement may be very aggressive for an average small business it is also a very significant potential cost savings.  Moving completely to open source systems is not for everyone and should be evaluated carefully.  The ability to evaluate a project of this magnitude, for free, is very important and small businesses should consider doing so to be sure that they are using the systems that make the most sense for their business model and needs rather than simply using the solutions with which they are already familiar or are already in place.

There are many additional ways in which free and open source products, deployed using existing, excess server capacity, can be used to expand the IT infrastructure of small businesses.  Learning to seek out opportunities rather than seeking cost savings from IT is a new process for most small businesses and requires some relearning, but those that take the time to pursue these opportunities have many benefits to be gained.

Virtualization for Small Business

In the last year or two we have seen virtualization go from a poorly understood concept to a much-hyped industry buzzword bandied about constantly in every conversation involving technology.  There is no doubt that virtualization is playing an important role in today’s IT landscape, but the question we are asking is whether virtualization applies to the small and medium business markets at this time.

The quick answer to this question is: absolutely.  Unlike many technologies that are of questionable value or that provide a great degree of technological complication, risk and expense that may not be appropriate for a small business, virtualization is a mature technology (IBM CP/CMS circa 1968) that is well understood and provides a layer of hardware abstraction that can benefit an IT organization of any size and may possibly apply even more to the small business IT department than it applies in the enterprise space.

Before looking at how virtualization can benefit the SMB market I would like to provide some definitions to be sure that we are discussing the same set of technologies.  In today’s IT landscape it has become popular to relabel common technologies as “virtualization” for marketing reasons and this has unnecessarily complicated the issue.

True virtualization refers to the virtualizing of entire operating systems.  Wikipedia uses the term platform virtualization and I will as well.  Technically we could refer to this as “System Virtualization” or “Operating System Virtualization” to distinguish it from loosely-related technologies that may arguably have the right to also use the same general term.

The basic concept of platform virtualization involves running an abstraction layer on a computer that emulates the hardware itself. Through the combination of abstraction and emulation we get what is known as a virtual machine.  This virtual machine is a completely working “computer” onto which we can install an operating system just as if we were installing onto the bare metal of a dedicated machine.  Instead of being limited to only installing one operating system image per computer we can now, with platform virtualization, install many copies of the same or disparate operating systems onto the same piece of hardware.  A powerful concept indeed.

The utility of this technology raises an obvious question: “If platform virtualization has been available since 1968, why is it only becoming popular and important recently?”  This is an excellent question.  The answer is actually quite simple.

Traditional platform virtualization technologies require a lot of support within the computer hardware itself.  IBM has been building this type of support into its mainframe systems for decades and large UNIX vendors like Sun have been providing this in their high-end UNIX servers for years as well.  These systems are highly specialized and typically run their own custom operating system(s).  Generally only large IT shops could afford servers of this magnitude and small shops did not have ready access to these technologies.  For those IT professionals who have worked with this type of equipment in the past the idea of virtualization was often so ingrained into the platform that it was often discussed very little as it was seen as simply an aspect of these high-end server systems and not necessarily a concept in its own right.

What has changed recently is the move to bring platform virtualization to the commodity hardware space occupied by the AMD and Intel (x86_64) processors used by the majority of small and medium businesses as well as larger enterprises.  The first move was to use software alone to make this possible on the x86 processor family.  The early players in this space were VMWare and Microsoft with products like VMWare Workstation, Virtual PC, VMWare GSX and MS Virtual Server.  These products showed that no special hardware was needed to effectively virtualize whole operating systems and began to allow companies of all sizes to experiment with the concept of virtualizing their existing commodity platforms.  This form of virtualization is known as “host-based virtualization” as it requires a host operating system on which the virtualization environment will run.

Following on the tail of these software-only solutions, the big processor vendors in the commodity space, AMD and Intel, began building virtualization capabilities into the processor, allowing for more flexibility, security and performance and bringing the commodity x64 hardware market much more in line with the traditional offerings from the other processor families common in big iron servers.  By doing so the virtualization market has really exploded, both from the vendor side as more and more vendors begin offering virtualization related products and from the customer side as virtualization begins to be better understood and its use becomes more commonplace.  With the latest rounds of purchasing, most small IT shops have acquired servers, and often desktops, that support hardware-level virtualization even without intending to prepare for a move to virtualization, which often tips the equation in that direction naturally.  This hardware-supported virtualization model is called “hypervisor-based virtualization” as all operating systems run on top of a tiny kernel called the hypervisor and no traditional operating system runs directly on the hardware.
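
If you are curious whether hardware you already own includes these extensions, a quick check of the CPU flags is enough on a Linux machine.  The small sketch below shows one assumed way to do it: Intel VT-x is reported as the “vmx” flag and AMD-V as the “svm” flag in /proc/cpuinfo.

    # A quick sketch for checking whether an x86 Linux host exposes the
    # hardware virtualization extensions that hypervisor-based virtualization
    # relies on: Intel VT-x reports "vmx", AMD-V reports "svm".
    def hardware_virtualization_support() -> str:
        with open("/proc/cpuinfo") as cpuinfo:
            for line in cpuinfo:
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    if "vmx" in flags:
                        return "Intel VT-x present"
                    if "svm" in flags:
                        return "AMD-V present"
        return "no hardware virtualization extensions detected"

    if __name__ == "__main__":
        print(hardware_virtualization_support())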

Now that we have a good idea of what platform virtualization is and why it is now available to us as an option we will look at why platform virtualization may be beneficial to us in the small and medium business space.

There are two things that we can readily virtualize (without getting esoteric or starting to virtualize our routing and switching infrastructure) – servers and desktops.  By far the easier and more obvious choice is the virtualization of servers.

Virtualizing the server infrastructure, or part of it, is the first place that most IT shops look today as a potential for virtualization.  Most companies find that the majority of their servers are extremely underutilized with excess CPU, memory and drive capacity sitting idle while additional workloads fail to find a home due to budget constraints, space or implementation time.  Virtualization to the rescue.

Through virtualization we have the opportunity to run several virtual servers on a single piece of server hardware.  We could virtualize just a single server system, although this would not gain us any utilization advantages, or we could, in theory, virtualize hundreds of servers if our hardware could handle it.  Typically, small businesses can virtualize several typical server roles onto a single physical server.  Virtual machine density is, of course, determined by load characteristics as well as by the available hardware.  Virtualization uses a lot of memory and storage, obviously, and so careful planning must be done.  Memory and storage are relatively inexpensive today and are certainly vastly less expensive than purchasing additional server hardware and paying to support it.  It is not uncommon for a small business to easily virtualize half a dozen servers on a single piece of hardware at a minimum, and a score or more is not an unreasonable number to hope to achieve.
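
A first pass at that planning does not need to be complicated.  The sketch below is a back-of-the-envelope memory check along the lines described above; the guest list, memory sizes and the twenty percent of host memory held back for the hypervisor and burst load are illustrative assumptions, not recommendations.

    # A back-of-the-envelope memory capacity check for a single host.
    # The guest list, sizes and 20% headroom are illustrative assumptions.
    HOST_MEMORY_GB = 64
    HEADROOM = 0.20  # fraction of host memory reserved for hypervisor and bursts

    planned_guests = {
        "domain-controller": 4,
        "file-server": 8,
        "mail-server": 16,
        "intranet-web": 4,
        "monitoring": 4,
    }

    usable = HOST_MEMORY_GB * (1 - HEADROOM)
    committed = sum(planned_guests.values())
    print(f"Usable memory:    {usable:.0f} GB")
    print(f"Committed memory: {committed} GB")
    print("Fits" if committed <= usable else "Over-committed: revisit the plan")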

Many small shops instantly jump to the conclusion that virtualization requires expensive SAN storage.  This is not the case at all.  Virtualization provides a range of benefits, even without a SAN storage infrastructure, of which shops can take advantage immediately.  There are, of course, some significant advantages available by using SAN in conjunction with virtualization and high availability or load balancing technologies.  Often, though, these high availability and load balancing capabilities are additional features that did not exist prior to virtualization and are not necessary in order for a shop to see significant benefits from virtualization, but they do present an opportunity for future improvement when and if budgets allow.

Small businesses will see many advantages from virtualization immediately even doing so on a small scale.  Some of these benefits are obvious and some are less so.

Our first advantage is that of hardware cost as I mentioned above.  By eliminating the need to purchase and support expensive server hardware on a per operating system basis we can now deploy more systems at lower cost per system.  In many cases this is not only a cost savings but will also provide greater funds necessary to move from more spartan servers into fewer but more enterprise class offerings with important performance, stability and support features such as integrated power management and KVM over IP from an out of band management console.

Our second advantage is the cost savings from reducing power consumption.  It is very trendy, and for good reason, for companies to be concerned with how “green” they are today and IT virtualization plays a key role in the greenification of the department.  The addition of virtual machines onto a single physical server typically represents a trivial, if even measurable, increase in power draw.  Adding additional physical servers, of course, adds a significant amount of power consumption even for systems that are lightly used or used only occasionally.

Our third advantage is in reducing backup complexity.  Virtualized servers can be backed up using completely traditional methods such as file system level backups from the operating system itself, as made popular by traditional backup systems like NetBackup, BackupExec, Amanda, Bacula and others.  So if we desire to stick with current backup strategies we can without any additional complexity, but if we want to move to image-based backups we can do so quite easily.  Using system images as backups is not necessarily new or unique to virtualization, but virtualization makes this far more obvious and accessible for many users.  In fact, with virtualization, system images (a copy of the entire system, not just of its individual files) can be taken using nothing but the regular filesystem – no special software needed.  A complete system backup can be taken by simply shutting down the virtual server, making a copy of its virtual filesystem – often a single, large file – and starting the system up again.  Restoring a system can be as simple as copying an image file from a backup storage device to the virtual server and starting it back up.  Restore done.  System back online.  This is as simple as it gets.
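
On a libvirt/KVM host, for example, that entire shut down, copy and start up cycle can be scripted in a few lines.  The sketch below uses the libvirt Python bindings; the guest name and the disk image path are assumptions made for illustration and would normally be read from the guest’s configuration.

    # A minimal sketch of a cold, image-level backup on a local KVM host using
    # the libvirt Python bindings.  The guest name and disk path are assumed.
    import shutil
    import time

    import libvirt

    GUEST = "intranet-web"                                # assumed guest name
    DISK = "/var/lib/libvirt/images/intranet-web.qcow2"   # assumed disk image
    BACKUP = "/backup/intranet-web.qcow2"

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(GUEST)

    dom.shutdown()               # ask the guest to shut down cleanly
    while dom.isActive():        # wait for the guest to power off
        time.sleep(5)

    shutil.copy2(DISK, BACKUP)   # the "backup" is just a file copy

    dom.create()                 # power the guest back on
    conn.close()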

Our fourth advantage is in the ease of provisioning.  Building a new server operating system directly on hardware is a time consuming venture for most shops.  This is especially true if there are any surprises with a new hardware type that has not been used previously.  There may be missing drivers or special operating system settings and parameters needed to support the hardware.  With virtualization the target platform is always identical, removing many surprises from this process and making it both faster and more reliable.  In many cases deployment is also faster simply because the process of preparing the base machine is so much faster.  To kick off a manual install of Linux on a traditional physical server I must purchase said server, install it into the rack, connect power and networking, provision networking, turn on the server, update firmware, configure the out of band management system, burn in the hardware, insert the installation media and begin installing.  From some virtualization environments I can simply kick off the entire process with a single command at the command line.  Deploying a new server could go from hours or days to minutes.  This does not even begin to address the simplicity of cloning existing systems within a virtual environment.
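
What that “single command” looks like depends on the platform, but on a libvirt/KVM host it can be as little as one call to virt-install.  In the sketch below, the guest name, sizes, OS variant and install URL are all illustrative assumptions and would be adjusted for whatever distribution is actually being deployed.

    # A sketch of single-command provisioning on a libvirt/KVM host via
    # virt-install.  Name, sizes, os-variant and install URL are assumptions.
    import subprocess

    subprocess.run(
        [
            "virt-install",
            "--name", "new-web-server",
            "--memory", "2048",
            "--vcpus", "2",
            "--disk", "size=20",
            "--os-variant", "centos7.0",
            "--location", "http://mirror.centos.org/centos-7/7/os/x86_64/",
            "--graphics", "none",
            "--extra-args", "console=ttyS0",
        ],
        check=True,
    )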

A fifth “soft” advantage of virtualization is that there is quite often a significant software cost savings when virtualizing.  Some vendors, like Novell with Suse Linux, allow you to virtualize as many servers as you want on a single physical machine while paying for only a single machine license.  Red Hat gives you multiple installs but not unlimited like Novell.  Microsoft has a range of virtualization pricing options depending on your needs including an unlimited per processor deployment license.  In a worst case scenario you will need to pay for additional operating system and other software licenses exactly as if you were running the same machines physically but in almost all cases there is more pricing flexibility and often dramatic cost reductions for multiple virtualized hosts.

A sixth benefit is in the ability to “roll back” an entire operating system.  Most virtualization platforms allow for the concept of taking a system snapshot, making changes to the active system and then restoring the system back to its original state when done.  This is great for software testing and especially for the testing of operating system patches or any critical update process where something going wrong could cause your system to become unresponsive and potentially not repairable.  The ability to go “back in time” to the latest snapshot, taken seconds before the patch application or risky configuration change, can be a lifesaver.  Of course taking an image backup could be used in the same way, but snapshots allow for even more rapid recovery due to their “proximity” to the original filesystem.
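
To make the workflow concrete, the sketch below shows one assumed way to take and revert a snapshot through the libvirt Python bindings.  The guest name is an illustrative assumption, and internal snapshots of this kind generally require qcow2-backed storage.

    # A minimal sketch of the snapshot-and-roll-back workflow using the
    # libvirt Python bindings.  The guest name is an assumption, and internal
    # snapshots like this generally require qcow2-backed storage.
    import libvirt

    GUEST = "file-server"   # assumed guest name

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName(GUEST)

    # Take a snapshot seconds before the risky change.
    dom.snapshotCreateXML(
        "<domainsnapshot><name>pre-patch</name></domainsnapshot>"
    )

    # ... apply patches or configuration changes to the guest here ...

    # If something goes wrong, step "back in time" to the snapshot.
    snapshot = dom.snapshotLookupByName("pre-patch")
    dom.revertToSnapshot(snapshot)

    conn.close()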

All of these aforementioned benefits come with a move to virtualization and do not require additional cost for software or hardware.  If our budget allows and the need exists there is also the option of adding one or more additional virtualization servers and having these servers share a SAN for storage of virtual machine images.  At a minimum this will roughly triple the hardware cost but provides double the processing power and some really amazing features.  The main feature that really makes this solution impressive is the concept of live migration.  Live migration is when a virtual operating system can be moved, while running, from one physical virtualization server to another.  This can be done for purposes of load balancing, disaster testing or to survive a disaster itself.  With some live migration solutions, generally sold as high availability, this migration can happen so quickly that it provides effectively “zero downtime” and even heavily used web servers could survive the loss of a physical server without customers ever knowing that a physical server had gone down.  The transition between virtual machine host nodes is completely transparent to the end users.

There is one major caveat.  Relying upon a SAN in a disaster recovery scenario, of course, creates another point of failure – the SAN system.  So when planning to use SAN to increase the reliability of your virtual machines, be sure to use a SAN that is at least as redundant as the servers themselves, or you may increase cost while accidentally lowering reliability and performance.

For the average small business it is likely to make sense not only to virtualize some of the server infrastructure but to virtualize all or nearly all of it.  Virtualization’s advantages are so many, and its downsides so few and minor, that it is a rare workload in the small business space that would justify dedicated hardware servers.

Now that we have examined why server virtualization makes sense we can begin looking towards desktop virtualization.  Unlike physical desktops and virtualized servers, virtualized desktops often add a bit of complexity due to licensing requirements, especially with Microsoft Windows desktops.

Virtualizing desktops is also somewhat complicated because there are many modes for physically providing desktops.  Obviously once we begin talking about virtualizing the desktop infrastructure we are actually talking about a range of solutions, because some device must always exist “on the desktop” providing a keyboard, mouse and monitor, which cannot be virtualized, while the desktop operating system itself runs elsewhere.  Even without virtualization this is done (and sometimes marketed as virtualization when, in fact, it is simply remote access) very commonly through desktop blades, rackmount desktops or terminal servers.  All of these solutions move the desktop into the datacenter and provide access to it either from thin client front ends or simply via software on remote users’ existing machines, such as users at home logging in to the office.

We will start with the concept of the terminal server as this is the most easily virtualized and the most straightforward.  Whether we are talking about virtualizing the server on which we run Microsoft Terminal Server (now known as Remote Desktop Services), Citrix XenApp or simply a standard Linux remote desktop terminal server we need do nothing more than install that server into a virtual environment rather than into a physical one.  It is really a question of server virtualization not of desktop virtualization – it is only perceived by the end user as being related to their desktops.

The other method of desktop virtualization, “true desktop virtualization” as I will refer to it, is to actually run desktop operating system images on a virtual server just as if they were normal desktops dedicated to a user.  This means virtualizing operating systems like Windows XP, Windows Vista or Windows 7 with each image being dedicated to a single user just as if it were a physical desktop.  We could, theoretically, do the same thing with Linux or some other flavor of Unix, but as those systems do not have per user licensing or desktop specific versions, and since they always run their desktops in a server mode, we would only be able to differentiate between a true virtualized desktop and a Unix-based terminal server by usage and not by any strict technological means, as they are one and the same.  Only Windows truly offers a dedicated desktop model that allows this to happen in this particular manner without the concept of shared access to a single image simultaneously.

Due to licensing restrictions from Microsoft, Windows desktops must be installed one image per user even if technologies exist to make this technologically unnecessary, but still there are benefits to this model.  The big benefits to virtualized desktops definitely go to companies who have employees who roam either internally or even externally.

Using virtualized desktops provides far more control to the company than does providing laptops.  Laptops can be stolen, lost or damaged.  Laptops wear out and need to be replaced regularly.  A virtual desktop that is made accessible from the outside of the company can be secured and protected in ways that a laptop cannot be.  Upgrades are much simpler and there is no concern of the virtual desktop becoming cut off from the corporate network and being unable to be supported by the IT staff.

Almost any worker who uses a computer in the office already has one at home for personal use, and often a laptop as well, in addition to high speed Internet access.  Providing remote access to a virtual desktop at the office therefore potentially incurs no additional hardware expense for the company or staff while easing administrative burdens, lowering power consumption and increasing security.  Some workers will always require laptops but many will not.

For workers still sitting at a traditional desk inside of the company’s offices there is still a need for something physically sitting on the desk that will connect the keyboard, mouse and monitor to the newly virtualized desktop.  This could be an old PC that was planned for retirement, a dedicated hardware thin client or even a laptop.  Internal staff can then move around the office or between offices and sit at any available desk with a thin client and log in to their own dedicated virtual desktop and work exactly as if they were at their own desk.  They can then go home and work from there as well if this is allowed.

Like virtualized servers, desktops, if the need is warranted, can be easily backed up using either traditional means or by simply taking complete system images.  The flexibility is there to do whatever makes the most sense in your environment.

With the complexity and surprise cost of licensing as well as lack of ability to completely do away with hardware on the desktop except for solely remote users, desktop virtualization is hardly the no-brainer that server virtualization is.  Desktop virtualization will require careful analysis on a case by case basis to determine if it will meet the cost and usability needs of the individual organization.  Most organizations who choose to go this route will likely opt to only partially virtualize – using it only in cases where it makes the most sense such as roaming users and remote workers while keeping traditional desktops for those users who would seldom be in a position to take advantage of this technology.  Using terminal server options will often be far more common than “true desktop virtualization” which often makes sense only for power users, developers or to support certain applications that work poorly in a terminal server mode.

There is a final usage of virtualization that warrants discussion, if only because it is important to understand its use in the business environment.  This final type of virtualization is not used to put operating systems into the datacenter on server hardware but instead is used to run additional operating system images on traditional desktops and laptops.  This is a common scenario for people who need to test multiple operating systems for support or development.  It is not useful for production systems and is generally outside the scope of this discussion.  It is a highly useful application of the technology, but it is rather a niche scenario primarily useful for compatibility testing.

In all of this discussion there has been, somewhat conspicuously, no mention of Apple’s Mac OSX products.  There is a reason for this.  Apple does not license Mac OSX so that it may be virtualized on non-Apple hardware, and Apple does not offer an enterprise-ready virtualization product for its own platform.  The only way to virtualize Mac OSX is to purchase full, additional licenses for each operating system instance, thereby eliminating most of the cost benefits of this approach, and to run it on a host-based virtualization product such as VMWare Fusion or Parallels, which are designed for use on top of a desktop and not as server-class products.  This is a major gap in the Mac OSX portfolio and one of the ways in which Apple continues to lag behind the rest of the market in capability and in its understanding of its business customers’ needs.  If Apple were to change its licensing strategy around virtualization, Mac OSX would prove to be an extremely popular and useful operating system to virtualize from both the server and desktop perspectives.

Virtualization is a great opportunity to lower cost and raise productivity while reducing risk for businesses of any size and with budgets as low as zero.  Many technologies promise important improvements for businesses but most create questionable value while incurring real cost.  Virtualization brings real, measurable value while often costing nothing and often reducing spending immediately.  For many businesses virtualization is the technology that they have always dreamed of and is, in fact, available today.