I see this concept come up in discussions surrounding virtualization all of the time. It is a broader, more general concept, but virtualization is the “hot, new technology” facing many IT organizations and seems to be the space where the “just because you can, doesn’t mean you should” problems currently rear their ugly heads most often. As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do, rather than blindly making decisions based on popular deployment methodologies or, worse, myths.
Virtualization itself, I should point out, should be a default decision today for those working in the x64 computing space, with systems deployed without virtualization only when a clear and obvious necessity exists, such as specific hardware needs or latency-sensitive applications. Barring any such need, virtualization is free to implement from many vendors and offers many benefits, both today and in future-proofing the environment.
That being said, what I often see today is companies deploying virtualization not as a best practice but as a panacea for all perceived IT problems. This it certainly is not. Virtualization is a very important tool to have in the IT toolbox, and one that we will reach for very often, but it does not solve every problem and should be treated like every other tool that we possess: used only when appropriate.
I see several things recur whenever virtualization comes up as a topic. Many companies today are moving toward virtualization not because they have identified a business need but because it is the currently trending topic; people feel that if they do not implement virtualization they will somehow be left behind or miss out on some mythical functionality. This is good in that it increases virtualization adoption, but bad in that sound IT and business decision-making processes are being bypassed. What often happens is that, in the wave of virtualization hype, IT departments feel that they must not only implement virtualization itself but do so in ways that may not be appropriate for their business.
There are four things that I often see tied to virtualization, often accepted as virtualization requirements whether or not they make sense in a given business environment: server consolidation, blade servers, SAN storage, and high availability or live failover.
Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget there are other important reasons for implementing it. Clearly consolidation is a great benefit for nearly all deployments (mileage may vary, of course) and can nearly always be achieved simply through better utilization of existing resources. It is a rare company running more than a single physical server that cannot shave some amount of cost through limited consolidation, and it is not uncommon to see datacenter footprints slashed in larger organizations.
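As a rough illustration of why better utilization drives consolidation, here is a minimal sketch in Python. The server names, utilization figures, and headroom target are all hypothetical assumptions for illustration, not measurements from any real environment:

```python
# A minimal sketch of the consolidation math, with made-up numbers:
# given average CPU utilization per physical server, estimate how many
# virtualization hosts of similar capacity could absorb the same load.

servers = {  # hypothetical inventory: name -> average CPU utilization (%)
    "web1": 12, "web2": 9, "db1": 35, "mail": 18, "file": 6, "app1": 22,
}

host_capacity_pct = 70  # assumed target: leave ~30% headroom per host

total_load = sum(servers.values())                  # total load in "percent of one server" units
hosts_needed = -(-total_load // host_capacity_pct)  # ceiling division

print(f"{len(servers)} physical servers -> ~{hosts_needed} virtualization host(s)")
# 6 physical servers -> ~2 virtualization host(s)
```

Even this crude sum-of-utilization estimate makes the point: a handful of lightly loaded physical boxes often fits comfortably on one or two hosts. A real sizing exercise would, of course, account for RAM, I/O, and peak load as well.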
In extreme cases, though, it is not necessary to abandon virtualization projects just because consolidation proves to be out of the question. These cases exist in companies with highly utilized systems and little budget for a preemptive consolidation investment. Such shops can still virtualize systems “in place” on a one-to-one basis to gain the other benefits of virtualization today, then look to consolidate when hardware needs to be replaced tomorrow or when larger, more powerful servers become more cost effective. It is important not to rule out virtualization just because its most heralded benefit may not apply in your environment at the current time.
Blade servers are often seen as the natural choice for virtualization environments. Blades may play better in a standard virtualization environment than they do with more traditional computational workloads, but this is both highly disputable and not necessarily relevant. Being a good scenario for blades themselves does not make blades a good choice for a business. Just because blades perform better than normal when used this way does not imply that they perform better than traditional servers, only that they have potentially closed the gap.
Blades need to be evaluated using the same harsh criteria when virtualizing as when not, and, very often, they will continue to fail to provide the long-term business value needed to choose them over more flexible alternatives. Blades remain far from a necessity for virtualization and are often, in my opinion, a very poor choice indeed.
One of the most common misconceptions is that by moving to virtualization one must also move to shared storage such as a SAN. This mindset is the natural reaction to the desire to achieve other benefits of virtualization that, even if they do not strictly require a SAN, benefit greatly from one. The ability to load balance or fail over between systems is heavily facilitated by a shared storage backend. It is a myth that this is a hard requirement, but the alternative, replicated local storage, brings its own complexities and limitations.
But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own merits. If virtualization makes sense for your environment but you need no features that require a SAN, then virtualize without shared storage. There are many cases where local-storage-backed virtualization is an ideal deployment scenario, and there is no need to dismiss the approach without first giving it serious consideration.
The last major assumed-necessary feature of virtualization is system-level high availability, or instant failover, for your operating system. Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us. However, few companies needed high availability at this level prior to implementing virtualization, and the price tag of the infrastructure and software necessary to achieve it with virtualization is often so high as to be impossible to justify.
High availability systems are complex and often overkill. It is a very rare business that requires transparent failover for even its most critical systems, and those companies with that requirement almost certainly already have failover processes in place. I see companies move toward high availability all of the time when looking at virtualization simply because a vendor saw an opportunity to dramatically oversell the original requirements. The cost of high availability is seldom justified by the revenue protected through the associated reduction in downtime. With non-highly-available virtualization, downtime for a failed hardware device might be measured in minutes if backups are handled well. This means that high availability has to justify its cost by eliminating just a few minutes of unplanned downtime per year, less any additional risk assumed through the added system complexity. Even in the biggest organizations this is seldom justified on any large scale, and in a moderately sized company it is uncommon altogether. Yet today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss, simply because the marketing literature promoted the concept.
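To make that concrete, here is a minimal back-of-the-envelope sketch in Python of the break-even comparison described above. All of the figures are assumptions invented for illustration, not data from any real deployment:

```python
# A rough break-even sketch for high availability, using assumed figures:
# compare the yearly cost of an HA stack against the revenue protected by
# eliminating the small amount of unplanned downtime it actually prevents.

ha_cost_per_year = 15_000.0   # assumed: licensing + SAN + extra hosts, amortized yearly
revenue_per_hour = 500.0      # assumed: loss per hour of outage for this system
expected_outage_hours = 4.0   # assumed: yearly unplanned downtime without HA
downtime_prevented = 0.9      # assumed: fraction of that downtime HA would remove

loss_avoided = revenue_per_hour * expected_outage_hours * downtime_prevented
print(f"Loss avoided: ${loss_avoided:,.0f} vs. HA cost: ${ha_cost_per_year:,.0f}")
# Loss avoided: $1,800 vs. HA cost: $15,000 -> hard to justify at this scale
```

With these assumed numbers the HA stack would need to protect roughly ten times more revenue than it plausibly can. Plugging in your own figures, honestly estimated, is the whole point of the exercise.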
Like anything, virtualization and all of the associated possibilities that it brings to the table need to be evaluated individually, in the context of the organization considering them. If an individual feature does not make sense for your business, do not assume that you have to purchase or implement it. Many organizations virtualize but use only a few, if any, of these “assumed” features. Don’t look at virtualization as a black box; look at the parts and consider them as you would any other technology project.
What often happens is a snowball effect where one feature, most likely high availability, is assumed to be necessary without the proper business assessment being performed. Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost. Even if the high availability features are never purchased, the decision to use a SAN may already have been made and never revisited after the plan changed. It is very common, in my experience, to find projects of this nature where more than fifty percent of the total expenditure went to products that the purchaser cannot even explain the reason for having purchased.
This concept does not stop at virtualization. Extend it to everything that you do. Keep IT in the perspective of the business, and don’t assume that adopting one technology means that you must also adopt the other technologies popularly associated with it.
I would like to hear your perspective on virtualization in a startup setting without any existing servers in place.
Would the same considerations apply or should one virtualize at the outset to keep the number of servers down to 2?
Cheers
Eric