
You Can’t Virtualize That!

We hear this all the time in IT: a vendor tells us that a system cannot be virtualized.  The reasons given are numerous.  On the IT side, we are always shocked that a vendor would make such an outrageous claim, and often just as shocked that a customer (or manager) believes them.  Vendors have worked hard to perfect this sales pitch over the years, and I think that it is important to dissect it.

The root cause of these problems is that vendors are almost always seeking ways to lower their own costs while increasing profits from customers.  This drives a lot of what would otherwise be seen as odd behaviour.

One thing that many, many vendors attempt to do is limit the scenarios under which their product will be supported.  By doing this, they set themselves up to simply not provide support – support is expensive and difficult to deliver reliably.  This is a common strategy.  In some cases, it is so aggressive that no acceptable, production deployment scenario even exists.

A very common means of doing this is to support only operating systems that are themselves no longer supported, de facto deprecating the vendor’s own software (for example, today this would mean supporting only Windows XP and earlier).  Another is supporting only products that are not licensed for the use case (for example, requiring that a desktop product like Windows 10 be used as a server).  And one of the most common cases is forbidding virtualization.

These scenarios put customers in a difficult position: on one hand they have industry best practices, standard deployment guidelines, and in-house tooling and policies to adhere to; on the other hand they have vendors forbidding proper system design, planning and management.  These needs are at odds with one another.

Of course, no one expects every vendor to support every potential scenario.  Limits must be applied.  But there is a giant chasm between supporting reasonable, well deployed systems and actively requiring unacceptably bad deployments.  We hope that our vendors will behave as business partners and share a common interest in our success or, at the very least, in the success of their product, and not directly seek to undermine both.  We would hope that, at a very minimum, best effort support would be provided for any reasonable deployment scenario and that guaranteed support would likely be offered for properly engineered, best practice scenarios.

Imagine a world where driving the speed limit and wearing a seatbelt would void your car’s warranty, and you would only get support if you drove recklessly and unprotected!

Some important things need to be understood about virtualization.  The first is that virtualization is a long-standing industry best practice and is expected to be used in any production deployment scenario for services.  Virtualization is in no way new; even in the small business market it has been in the best practice category for well over a decade now, and for many decades in the enterprise space.  We are long past the point where running systems non-virtualized is considered acceptable, and that includes legacy deployments that have been in place for a long time.

There are, of course, always rare exceptions to nearly any rule.  Some systems need access to very special case hardware and virtualization may not be possible, although with modern hardware passthrough this is almost unheard of today.  And some super low latency systems cannot be virtualized, but these are normally limited to the biggest international investment banks and most aggressive hedge funds, and even the majority of those traditional use cases have been eliminated by improvements in virtualization, making such situations rare.  The bottom line is: if you cannot virtualize, you should be sad that you cannot, and you will know clearly why it is impossible in your situation.  In all other cases, your server needs to be virtual.

Is it not important?

If a vendor does not allow you to follow standard best practices for healthy deployments, what does this say about the vendor’s opinion of their own product?  If we were talking about any other deployment, we would immediately question why we were deploying a system so poorly if we plan to depend on it.  If our vendor forces us to behave this way, we should react in the same manner – if the vendor doesn’t take the product as seriously as we take the least of our IT services, why should we?

This is an “impedance mismatch”, as we say in engineering circles, between our needs (production systems) and how the vendor making that system appears to treat them (hobby or entertainment systems).  If we need to depend on this product for our businesses, we need a vendor that is on board and understands business needs – one with a production mindset.  If the product is not business targeted or business ready, we need to be aware of that.  We need to question why we feel we should be using a service in production, on which we depend and require support, that is not intended to be used in that manner.

Is it supported?  Is it being tested?

Something that customers often overlook is whether the necessary support resources for a product are even in place.  It is not uncommon for the team that supports a product to become lean, or even disappear, while the company keeps selling the product in the hope of milking it for as much as it can, banking on either muddling through a problem or simply returning the customer’s money should the vendor be caught in a situation where it is simply unable to provide support.

Most software contracts state that the maximum damages that can be extracted from the vendor are the cost of the product, or the amount spent to purchase it.  In a case such as this, the vendor takes on no risk by offering a product that it cannot support – even while charging a premium for support.  If the customer manages to use the product, great: the vendor gets paid.  If the customer cannot, and the vendor cannot support it, the vendor only loses money that it would never have gotten otherwise.  The customer takes on all the risk, not the vendor.

This suggests, of course, that there is little or no continuing testing of the product either, and this should be of additional concern.  Just because the product runs today does not mean that it will continue to run.  Getting up and running with an unsupported, or worse, unsupportable, product means depending ever more heavily on something whose level of potential support is likely decreasing, even as the need for support and the dependency on the software would be expected to increase.

If a proprietary product is deployed in production, and the decision is made to forgo best practice deployments in order to accommodate support demands, how can this fit into a decision matrix?  Should this imply that proper support does not exist?  Again, as before, this implies a mismatch with our needs.


Is It Still Being Developed?

If the deployment requirements of the software follow old, out-of-date practices, or require out-of-date (or at least not reasonably current) software or designs, then we have to question the likelihood that the product is currently being developed.  In some cases we can determine this by watching the software release cycle for some time, but not always.  There is a reasonable fear that the product may be dead, with no remaining development team working on it.  The code may simply be old technical debt being sold in the hope of making a last few dollars off of an abandoned code base.  This is actually far more common than is often believed.

Smaller software shops often manage to develop an initial software package and get it on the market and available for sale, but cannot afford to retain or restaff their development team after the initial release(s).  This is, in fact, a very common scenario.  It leaves customers with a product that can be expected to become less and less viable over time, with deployment scenarios becoming increasingly risky and data increasingly hard to extricate.


How Can It Be Supported If the Platform Is Not Supported?

A common paradox in some of the more extreme situations is software that, in order to qualify as “supported”, requires other software that is either out of support or was never supported for the intended use case.  Common examples are requiring that a server system run on top of a desktop operating system, or requiring versions of operating systems, databases or other components that are no longer supported at all.  This last scenario is scarily common.  In a situation like this, one has to ask whether there can ever be a deployment in which the software can be considered “supported”.  If part of the stack is always out of support, then the whole stack is unsupported; there would always be a reason that support could be denied, no matter what.  The very reasoning that would have us abandon best practices would equally rule out choosing the software itself in the first place.

Are Industry Skills and Knowledge Lacking?

Perhaps the issue we face with software support problems of this nature is that the team(s) creating the software simply do not know how good software is made and/or how good systems are deployed.  This is among the most reasonable and valid explanations for what would drive us to this situation.  But, like the other hypothesized reasons, it leaves us concerned about the quality of the software and about whether support is truly available.  If we can’t trust the vendor to properly handle the most visible parts of the system, why would we turn to them as our experts for the parts that we cannot verify?

The Big Problem

The big, overarching problem with software that demands questionable deployment and maintenance practices in exchange for unlocking otherwise withheld support is not, as we typically assume, a question of overall software quality, but one of viable support and development practices.  That these issues suggest a significant concern for long term support should make us strongly question why we are choosing these packages in the first place while expecting strong support from them when, from the onset, we have very visible and very serious concerns.

There are, of course, cases where no other software products exist to fill a need, or none with any more reasonable viability.  This situation should be extremely rare and, where it exists, should be seen as a major market opportunity for a vendor looking to enter that particular space.

From a business perspective, it is imperative that technical infrastructure best practices not be completely ignored in exchange for blind, or nearly blind, adherence to vendor requirements that, in any other instance, would be considered reckless or unprofessional.  Why do we so often neglect to require excellence from core products on which our businesses depend in this way?  It puts our businesses at risk, not just from the action itself, but vastly more so from the risks implied by the existence of such a requirement.

Virtualization for Small Business

In the last year or two we have seen virtualization go from a poorly understood concept to a much-hyped industry buzzword bandied about constantly in every conversation involving technology.  There is no doubt that virtualization is playing an important role in today’s IT landscape, but the question we are asking is whether virtualization applies to the small and medium business markets at this time.

The quick answer to this question is: absolutely.  Unlike many technologies that are of questionable value or that provide a great degree of technological complication, risk and expense that may not be appropriate for a small business, virtualization is a mature technology (IBM CP/CMS circa 1968) that is well understood and provides a layer of hardware abstraction that can benefit an IT organization of any size and may possibly apply even more to the small business IT department than it applies in the enterprise space.

Before looking at how virtualization can benefit the SMB market I would like to provide some definitions to be sure that we are discussing the same set of technologies.  In today’s IT landscape it has become popular to relabel common technologies as “virtualization” for marketing reasons and this has unnecessarily complicated the issue.

True virtualization refers to the virtualizing of entire operating systems.  Wikipedia uses the term platform virtualization and I will as well.  Technically we could refer to this as “System Virtualization” or “Operating System Virtualization” to distinguish it from loosely-related technologies that may arguably have the right to also use the same general term.

The basic concept of platform virtualization involves running an abstraction layer on a computer that emulates the hardware itself. Through the combination of abstraction and emulation we get what is known as a virtual machine.  This virtual machine is a completely working “computer” onto which we can install an operating system just as if we were installing onto the bare metal of a dedicated machine.  Instead of being limited to only installing one operating system image per computer we can now, with platform virtualization, install many copies of the same or disparate operating systems onto the same piece of hardware.  A powerful concept indeed.

The utility of this technology raises an obvious question: “If platform virtualization has been available since 1968, why is it only becoming popular and important recently?”  This is an excellent question.  The answer is actually quite simple.

Traditional platform virtualization technologies require a lot of support within the computer hardware itself.  IBM has been building this type of support into its mainframe systems for decades and large UNIX vendors like Sun have been providing this in their high-end UNIX servers for years as well.  These systems are highly specialized and typically run their own custom operating system(s).  Generally only large IT shops could afford servers of this magnitude, and small shops did not have ready access to these technologies.  For those IT professionals who have worked with this type of equipment in the past, the idea of virtualization was often so ingrained into the platform that it was discussed very little, being seen simply as an aspect of these high-end server systems and not necessarily as a concept in its own right.

What has changed recently is the move to bring platform virtualization to the commodity hardware space occupied by the AMD and Intel (x86_64) processors used by the majority of small and medium businesses as well as larger enterprises.  The first move was to use software alone to make this possible on the x86 processor family.  The early players in this space were VMWare and Microsoft with products like VMWare Workstation, Virtual PC, VMWare GSX and MS Virtual Server.  These products showed that no special hardware was needed to effectively virtualize whole operating systems and began to allow companies of all sizes to experiment with the concept of virtualizing their existing commodity platforms.  This form of virtualization is known as “host-based virtualization” as it requires a host operating system on which the virtualization environment will run.

Following on the tail of these software-only solutions, the big processor vendors in the commodity space, AMD and Intel, began building virtualization capabilities into the processor itself, allowing for more flexibility, security and performance and bringing the commodity x64 hardware market much more in line with the traditional offerings from the processor families common in big iron servers.  With this, the virtualization market has really exploded, both from the vendor side, as more and more vendors offer virtualization-related products, and from the customer side, as virtualization becomes better understood and its use more commonplace.  With their latest rounds of purchasing, most small IT shops have acquired servers, and often desktops, that support hardware-level virtualization even without intending to prepare for a move to virtualization, which often tips the equation naturally in that direction.  This hardware-supported virtualization model is called “hypervisor-based virtualization” because all operating systems run on top of a tiny kernel, the hypervisor, and no traditional operating system runs directly on the hardware.
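To make the distinction concrete, here is a minimal sketch (assuming a Linux host) of how one might check whether a commodity server or desktop already advertises the hardware support described above; Intel exposes it as the “vmx” CPU flag and AMD as “svm”.  This is only an illustration – the feature can still be disabled in the BIOS/UEFI even when the flag is present.

```python
#!/usr/bin/env python3
"""Check whether a Linux host's CPU advertises hardware virtualization support.

Intel VT-x appears as the 'vmx' flag and AMD-V as the 'svm' flag in
/proc/cpuinfo.  A positive result only means the processor supports it;
the feature may still be disabled in firmware.
"""

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__":
    if has_hw_virtualization():
        print("Hardware virtualization (Intel VT-x or AMD-V) is available.")
    else:
        print("No hardware virtualization flags found; host-based virtualization only.")
```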

Now that we have a good idea of what platform virtualization is and why it is now available to us as an option we will look at why platform virtualization may be beneficial to us in the small and medium business space.

There are two things that we can readily virtualize (without getting esoteric or starting to virtualize our routing and switching infrastructure) – servers and desktops.  By far the easier and more obvious choice is the virtualization of servers.

Virtualizing the server infrastructure, or part of it, is the first place that most IT shops look today as a potential for virtualization.  Most companies find that the majority of their servers are extremely underutilized with excess CPU, memory and drive capacity sitting idle while additional workloads fail to find a home due to budget constraints, space or implementation time.  Virtualization to the rescue.

Through virtualization we have the opportunity to run several virtual servers on a single piece of server hardware.  We could virtualize just a single server system, though this would not gain us any utilization advantages, or we could, in theory, virtualize hundreds of servers if our hardware could handle it.  In practice, a small business can consolidate several typical server roles onto a single physical server.  Virtual machine density is, of course, determined by load characteristics as well as by the available hardware.  Virtualization uses a lot of memory and storage, obviously, so careful planning is required.  Memory and storage are relatively inexpensive today and are certainly vastly less expensive than purchasing additional server hardware and paying to support it.  It is not uncommon for a small business to easily virtualize half a dozen servers on a single piece of hardware at a minimum, and a score or more is not an unreasonable number to hope to achieve.
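As a rough illustration of the density planning mentioned above, here is a tiny back-of-the-envelope sketch.  Every number and server role in it is purely hypothetical and should be replaced with your own measured workloads.

```python
# Back-of-the-envelope VM density check.  All figures are illustrative only.

host_ram_gb = 64            # RAM in the physical server (hypothetical)
hypervisor_overhead_gb = 4  # reserve for the hypervisor / host OS

vm_plan = {                 # hypothetical small-business roles and RAM for each
    "domain controller": 4,
    "file server": 6,
    "mail server": 8,
    "erp database": 12,
    "web/intranet": 4,
    "print/utility": 2,
}

available = host_ram_gb - hypervisor_overhead_gb
needed = sum(vm_plan.values())

print(f"RAM available for guests: {available} GB")
print(f"RAM requested by planned guests: {needed} GB")
print("Fits" if needed <= available else "Does not fit without overcommit or more RAM")
```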

Many small shops instantly jump to the conclusion that virtualization requires expensive SAN storage.  This is not the case at all.  Virtualization provides a range of benefits, of which shops can take advantage immediately, even without a SAN storage infrastructure.  There are, of course, some significant advantages available by using a SAN in conjunction with virtualization and high availability or load balancing technologies.  Often, though, these high availability and load balancing capabilities are additional features that did not exist prior to virtualization; they are not necessary for a shop to see significant benefits from virtualization but do present an opportunity for future improvement when and if budgets allow.

Small businesses will see many advantages from virtualization immediately even doing so on a small scale.  Some of these benefits are obvious and some are less so.

Our first advantage is that of hardware cost, as mentioned above.  By eliminating the need to purchase and support expensive server hardware on a per operating system basis, we can deploy more systems at a lower cost per system.  In many cases this is not only a cost savings but also frees the funds necessary to move from more spartan servers to fewer but more enterprise-class offerings with important performance, stability and support features such as integrated power management and KVM over IP from an out of band management console.

Our second advantage is the cost savings from reducing power consumption.  It is very trendy, and for good reason, for companies to be concerned with how “green” they are today, and virtualization plays a key role in the greenification of the IT department.  The addition of virtual machines onto a single physical server typically represents a trivial, if even measurable, increase in power draw.  Adding additional physical servers, of course, adds a significant amount of power consumption even for systems that are lightly used or used only occasionally.

Our third advantage is in reducing backup complexity.  Virtualized servers can be backed up using completely traditional methods, such as file system level backups from the operating system itself, as made popular by traditional backup systems like NetBackup, BackupExec, Amanda, Bacula and others.  So if we desire to stick with current backup strategies we can, without any additional complexity; but if we want to move to image-based backups we can do so quite easily.  Using system images as backups is not necessarily new or unique to virtualization, but virtualization makes this far more obvious and accessible for many users.  In fact, with virtualization, system images (a copy of the entire system, not just of its individual files) can be taken using nothing but the regular filesystem – no special software needed.  A complete system backup can be taken by simply shutting down the virtual server, making a copy of its virtual filesystem – often a single, large file – and starting the system up again.  Restoring a system can be as simple as copying an image file from a backup storage device to the virtual server and starting it back up.  Restore done.  System back online.  This is as simple as it gets.
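To show just how simple the “shut down, copy the file, start it back up” approach can be, here is a minimal cold-backup sketch assuming a libvirt/KVM environment with the standard virsh tool; the guest name, disk path and backup location are placeholders, and environments with other hypervisors or multi-disk guests will differ.

```python
#!/usr/bin/env python3
"""Minimal cold-backup sketch for a libvirt/KVM guest: shut the guest down,
copy its disk image, start it again.  Assumes the 'virsh' CLI is available
and the guest's storage is a single image file; names and paths below are
placeholders."""

import shutil
import subprocess
import time

DOMAIN = "web01"                                   # hypothetical guest name
DISK = "/var/lib/libvirt/images/web01.qcow2"       # hypothetical disk image
BACKUP = "/mnt/backup/web01-{}.qcow2".format(time.strftime("%Y%m%d"))

def wait_until_shut_off(domain, timeout=300):
    """Poll 'virsh domstate' until the guest reports 'shut off'."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = subprocess.run(["virsh", "domstate", domain],
                               capture_output=True, text=True).stdout.strip()
        if state == "shut off":
            return True
        time.sleep(5)
    return False

# 1. Ask the guest to shut down cleanly.
subprocess.run(["virsh", "shutdown", DOMAIN], check=True)
if not wait_until_shut_off(DOMAIN):
    raise SystemExit("Guest did not shut down in time; aborting backup.")

# 2. Copy the entire virtual disk -- this single file is the system image.
shutil.copy2(DISK, BACKUP)

# 3. Bring the guest back online.
subprocess.run(["virsh", "start", DOMAIN], check=True)
print("Backup written to", BACKUP)
```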

Our fourth advantage is ease of provisioning.  Building a new server operating system directly on hardware is a time consuming venture for most shops.  This is especially true if there are any surprises with a new hardware type that has not been used previously.  There may be missing drivers or special operating system settings and parameters needed to support the hardware.  With virtualization the target platform is always identical, removing many surprises from this process and making it both faster and more reliable.  In many cases deployment is also faster simply because the process of preparing the base machine is so much faster.  To kick off a manual install of Linux on a traditional physical server I must purchase said server, install it into a rack, connect power and networking, provision networking, turn on the server, update firmware, configure the out of band management system, burn in the hardware, insert installation media and begin installing.  From some virtualization environments I can simply kick off the entire process with a single command at the command line.  Deploying a new server can go from hours or days to minutes.  This does not even begin to address the simplicity of cloning existing systems within a virtual environment.
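As an illustration of what “a single command” might look like, here is a hedged sketch using the virt-install tool from the libvirt ecosystem, wrapped in Python.  The guest name, sizes, install URL and kickstart file are placeholders, and exact option names can vary by tool version and hypervisor.

```python
#!/usr/bin/env python3
"""Sketch of kicking off an unattended Linux guest install with one command,
using 'virt-install' from the libvirt tool set.  All names, sizes and URLs
below are placeholders."""

import subprocess

cmd = [
    "virt-install",
    "--name", "app02",                      # hypothetical guest name
    "--memory", "4096",                     # MiB of RAM
    "--vcpus", "2",
    "--disk", "size=40",                    # GB, created in the default storage pool
    "--os-variant", "generic",
    "--location", "http://mirror.example.com/centos/7/os/x86_64/",   # placeholder install tree
    "--graphics", "none",
    "--noautoconsole",                      # do not attach to the guest console
    "--extra-args", "console=ttyS0 ks=http://mirror.example.com/ks/app02.cfg",  # placeholder kickstart
]

subprocess.run(cmd, check=True)
```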

A fifth “soft” advantage of virtualization is that there is quite often a significant software cost savings when virtualizing.  Some vendors, like Novell with Suse Linux, allow you to virtualize as many servers as you want on a single physical machine while paying for only a single machine license.  Red Hat gives you multiple installs, but not unlimited like Novell.  Microsoft has a range of virtualization pricing options depending on your needs, including an unlimited per processor deployment license.  In a worst case scenario you will need to pay for additional operating system and other software licenses exactly as if you were running the same machines physically, but in almost all cases there is more pricing flexibility and often dramatic cost reductions for multiple virtualized hosts.

A sixth benefit is the ability to “roll back” an entire operating system.  Most virtualization platforms allow you to take a system snapshot, make changes to the active system and then restore the system back to its original state when done.  This is great for software testing and especially for testing operating system patches or any critical update process where something going wrong could leave your system unresponsive and potentially unrepairable.  The ability to go “back in time” to the latest snapshot, taken seconds before the patch application or risky configuration change, can be a lifesaver.  Of course an image backup could be used in the same way, but snapshots allow for even more rapid recovery due to their “proximity” to the original filesystem.
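For those on a libvirt/KVM stack, the snapshot-and-revert workflow can be sketched roughly as follows; the domain and snapshot names are placeholders, and internal snapshots of this kind generally assume qcow2-backed storage.

```python
#!/usr/bin/env python3
"""Sketch of the snapshot / roll-back workflow using libvirt's virsh CLI.
Domain and snapshot names are placeholders."""

import subprocess

DOMAIN = "erp01"          # hypothetical guest
SNAP = "pre-patch"

# Take a snapshot seconds before the risky change...
subprocess.run(["virsh", "snapshot-create-as", DOMAIN, SNAP,
                "Before applying OS patches"], check=True)

# ... apply patches, test, and if something breaks, go "back in time":
subprocess.run(["virsh", "snapshot-revert", DOMAIN, SNAP], check=True)

# Once the change is confirmed good, the snapshot can be discarded:
# subprocess.run(["virsh", "snapshot-delete", DOMAIN, SNAP], check=True)
```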

All of these aforementioned benefits come with a move to virtualization and do not require additional spending on software or hardware.  If our budget allows and the need exists, there is also the option of adding one or more additional virtualization servers and having these servers share a SAN for storage of virtual machine images.  At a minimum this will roughly triple the hardware cost while only doubling the processing power, but it provides some really amazing features.  The main feature that makes this solution impressive is live migration.  Live migration is when a virtual operating system can be moved, while running, from one physical virtualization server to another.  This can be done for purposes of load balancing, disaster testing or to survive a disaster itself.  With some live migration solutions, generally sold as high availability, this migration can happen so quickly that it provides effectively “zero downtime”; even heavily used web servers can survive the loss of a physical server without customers ever knowing that a physical server had gone down.  The transition between virtual machine host nodes is completely transparent to the end users.
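As a sketch of what live migration looks like in practice on a pair of libvirt/KVM hosts sharing SAN storage, something like the following is typical; the guest name and destination URI are placeholders, and this assumes SSH connectivity between the hosts and storage visible to both.

```python
#!/usr/bin/env python3
"""Sketch of a live migration between two KVM hosts that share storage.
Host and guest names are placeholders."""

import subprocess

DOMAIN = "web01"                          # hypothetical guest
DEST = "qemu+ssh://kvm-host2/system"      # libvirt URI of the destination host (placeholder)

# Move the running guest to the second host without shutting it down.
subprocess.run(["virsh", "migrate", "--live", DOMAIN, DEST], check=True)
```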

There is one major caveat.  Relying upon a SAN in a disaster recovery scenario, of course, creates another point of failure – the SAN itself.  So when planning to use a SAN to increase the reliability of your virtual machines, be sure that the SAN is at least as redundant as the servers themselves, or you may increase cost while accidentally lowering reliability and performance.

For the average small business it is quite likely to make sense not only to virtualize some of the server infrastructure but to virtualize all, or nearly all, of it.  Virtualization’s advantages are so many, and its downsides so few and minor, that it is a rare workload in the small business space that would justify a dedicated hardware server.

Now that we have examined why server virtualization makes sense, we can begin looking towards desktop virtualization.  Unlike physical desktops and virtualized servers, virtualized desktops often add a bit of complexity due to licensing requirements, especially with Microsoft Windows desktops.

Virtualizing desktops is also somewhat complicated because there are many modes of physically providing desktops.  Once we begin talking about virtualizing the desktop infrastructure, we are actually talking about a range of solutions, because some device must always exist “on the desktop” providing a keyboard, mouse and monitor, which cannot be virtualized, while the desktop operating system itself runs elsewhere.  Even without virtualization this is done (and sometimes marketed as virtualization when, in fact, it is simply remote access) very commonly through desktop blades, rackmount desktops or terminal servers.  All of these solutions move the desktop into the datacenter and provide access to it either from thin client front ends or simply via software on remote users’ existing machines, such as users at home logging in to the office.

We will start with the concept of the terminal server as this is the most easily virtualized and the most straightforward.  Whether we are talking about virtualizing the server on which we run Microsoft Terminal Server (now known as Remote Desktop Services), Citrix XenApp or simply a standard Linux remote desktop terminal server, we need do nothing more than install that server into a virtual environment rather than into a physical one.  It is really a question of server virtualization, not of desktop virtualization – it is only perceived by the end user as being related to their desktop.

The other method of desktop virtualization, “true desktop virtualization” as I will refer to it, is to actually run desktop operating system images on a virtual server just as if they were normal desktops dedicated to a user.  This means virtualizing operating systems like Windows XP, Windows Vista or Windows 7, with each image being dedicated to a single user just as if it were a physical desktop.  We could, theoretically, do the same thing with Linux or some other flavor of Unix, but since those systems do not have per-user licensing or desktop-specific versions, and since they always run their desktops in a server mode, we would only be able to differentiate between a true virtualized desktop and a Unix-based terminal server by its usage and not by any strict technological means – they are one and the same.  Only Windows truly offers a dedicated desktop model that allows this to happen in this particular manner, without the concept of shared, simultaneous access to a single image.

Due to licensing restrictions from Microsoft, Windows desktops must be installed one image per user, even though technology exists to make this unnecessary, but there are still benefits to this model.  The big benefits of virtualized desktops definitely go to companies whose employees roam, either internally or even externally.

Using virtualized desktops provides far more control to the company than does providing laptops.  Laptops can be stolen, lost or damaged.  Laptops wear out and need to be replaced regularly.  A virtual desktop that is made accessible from the outside of the company can be secured and protected in ways that a laptop cannot be.  Upgrades are much simpler and there is no concern of the virtual desktop becoming cut off from the corporate network and being unable to be supported by the IT staff.

Almost any worker who uses a computer in the office already has one at home for personal use, often along with a laptop and high speed Internet access.  Providing remote access to a virtual desktop at the office therefore potentially incurs no additional hardware expense for the company or staff while easing administrative burdens, lowering power consumption and increasing security.  Some workers will always require laptops, but many will not.

For workers still sitting at a traditional desk inside of the company’s offices there is still a need for something physically sitting on the desk that will connect the keyboard, mouse and monitor to the newly virtualized desktop.  This could be an old PC that was planned for retirement, a dedicated hardware thin client or even a laptop.  Internal staff can then move around the office or between offices and sit at any available desk with a thin client and log in to their own dedicated virtual desktop and work exactly as if they were at their own desk.  They can then go home and work from there as well if this is allowed.

Like virtualized servers, desktops, if the need is warranted, can be easily backed up using either traditional means or by simply taking complete system images.  The flexibility is there to do whatever makes the most sense in your environment.

With the complexity and surprise cost of licensing, as well as the inability to completely do away with hardware on the desk except for purely remote users, desktop virtualization is hardly the no-brainer that server virtualization is.  Desktop virtualization will require careful analysis on a case by case basis to determine whether it will meet the cost and usability needs of the individual organization.  Most organizations that choose to go this route will likely opt to only partially virtualize – using it only where it makes the most sense, such as for roaming users and remote workers, while keeping traditional desktops for users who would seldom be in a position to take advantage of this technology.  Terminal server options will often be far more common than “true desktop virtualization”, which often makes sense only for power users, developers or certain applications that work poorly in a terminal server mode.

There is a final usage of virtualization that warrants discussion, if only because it is important to understand its use in the business environment.  This final type of virtualization is not used to put operating systems into the datacenter on server hardware but instead to run additional operating system images on traditional desktops and laptops.  This is a common scenario for people who need to test multiple operating systems for support or development.  It is not useful for production systems and is generally outside the scope of this discussion.  It is a highly useful application of the technology, but rather a niche scenario primarily useful for compatibility testing.

In all of this discussion there has been, somewhat conspicuously, no mention of Apple’s Mac OSX products.  There is a reason for this.  Apple does not license Mac OSX so that it may be virtualized on non-Apple hardware, and Apple does not have an enterprise-ready virtualization product for its own platform.  The only way to virtualize Mac OSX is to purchase full, additional licenses for each operating system instance, thereby eliminating most of the cost benefits of this approach, and to run it on a host-based virtualization product such as VMWare Fusion or Parallels, which are designed for use on top of a desktop and not as server-class products.  This is a major gap in the Mac OSX portfolio and one of the ways in which Apple continues to lag behind the rest of the market in capability and in its understanding of its business customers’ needs.  If Apple were to change its licensing strategy around virtualization, Mac OSX would prove to be an extremely popular and useful operating system to virtualize from both the server and desktop perspectives.

Virtualization is a great opportunity to lower cost and raise productivity while reducing risk for businesses of any size and with budgets as low as zero.  Many technologies promise important improvements for businesses but most create questionable value while incurring real cost.  Virtualization brings real, measurable value while often costing nothing and often reducing spending immediately.  For many businesses virtualization is the technology that they have always dreamed of and is, in fact, available today.