
Just Because You Can…

I see this concept appear in discussions surrounding virtualization all of the time.  This is a broader, more general concept, but virtualization is the “hot, new technology” facing many IT organizations and seems to be the space where we currently see the “just because you can, doesn’t mean you should” problems rearing their ugly heads most prevalently.  As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do and do not blindly attempt to make our decisions based on popular deployment methodologies or, worse, myths.

Virtualization itself, I feel, should be a default decision today for those working in the x64 computing space, with systems being deployed sans virtualization only when a clear and obvious necessity exists such as specific hardware needs, latency-sensitive applications, etc.  Barring any specific need, virtualization is free to implement from many vendors and offers many benefits both today and in future-proofing the environment.

That being said, what I often see today is companies deploying virtualization not as a best practice but as a panacea for all perceived IT problems.  This it certainly is not.  Virtualization is a very important tool to have in the IT toolbox and one that we will reach for very often, but it does not solve every problem and should be treated like every other tool that we possess and used only when appropriate.

I see several things recurring when virtualization comes up as a topic.  Many companies today are moving towards virtualization not because they have identified a business need but because it is the currently trending topic and people feel that if they do not implement virtualization they will somehow be left behind or miss out on some mythical functionality.  This is generally good in that it is increasing virtualization adoption, but it is bad because good IT and business decision-making processes are being bypassed.  What often happens is that, in the wave of virtualization hype, IT departments feel that they not only have to implement virtualization itself but have to do so in ways that may not be appropriate for their business.

There are four things that I often see tied to virtualization, often accepted as virtualization requirements, whether or not they make sense in a given business environment.  These are server consolidation, blade servers, SAN storage and high availability or live failover.

Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget that there are other important reasons for implementing it.  Clearly consolidation is a great benefit for nearly all deployments (mileage may vary, of course) and is nearly always able to be achieved simply through better utilization of existing resources.  It is a pretty rare company running more than a single physical server that cannot shave some amount of cost through limited consolidation, and it is not uncommon to see datacenter footprints decimated in larger organizations.

Even in extreme cases, though, it is not necessary to abandon virtualization projects just because consolidation proves to be out of the question.  These cases exist for companies with high-utilization systems and little budget for a preemptive consolidation investment.  But these shops can still virtualize existing systems “in place” on a one-to-one basis to gain the other benefits of virtualization today and look to consolidate when hardware needs to be replaced tomorrow or when larger, more powerful servers become more cost effective in the future.  It is important not to rule out virtualization just because its most heralded benefit may not apply at the current time in your environment.

Blade servers are often seen as the natural choice for virtualization environments.  Blades may play better in a standard virtualization environment than they do with more traditional computational workloads, but this claim is both highly disputable and not necessarily relevant to the decision.  Being a good scenario for blades themselves does not make it a good scenario for a business.  Just because blades perform better than normal when used in this way does not imply that they perform better than traditional servers – only that they have potentially closed the gap.

Blades need to be evaluated using the same harsh criteria when virtualizing as when not and, very often, they will continue to fail to provide the long-term business value needed to choose them over the more flexible alternatives.  Blades remain far from a necessity for virtualization and are often, in my opinion, a very poor choice indeed.

One of the most common misconceptions is that by moving to virtualization one must also move to shared storage such as SAN.  This mindset is the natural reaction to the desire to also achieve other benefits of virtualization which, even if they do not strictly require SAN, benefit greatly from it.  The ability to load balance or fail over between systems is heavily facilitated by having a shared storage backend.  It is a myth that this is a hard requirement, but replicated local storage brings its own complexities and limitations.

But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own.  If virtualization makes sense for your environment but you need no features that require SAN, then virtualize without shared storage.  There are many cases where local storage backed virtualization is an ideal deployment scenario.  There is no need to dismiss this approach without first giving it serious consideration.

The last major assumed necessary feature of virtualization is system level high availability or instant failover for your operating system.  Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us.  However, few companies needed high availability at this level prior to implementing virtualization and the price tag of the necessary infrastructure and software to do it with virtualization is often so high as to make it too expensive to justify.

High availability systems are complex and often overkill.  It is a very rare business system that requires transparent failover for even the most critical systems and those companies with that requirement would almost certainly already have failover processes in place.  I see companies moving towards high availability all of the time when looking at virtualization simply because a vendor saw an opportunity to dramatically oversell the original requirements.  The cost of high availability is seldom justified by the potential loss of revenue from the associated reduction in downtime.  With non-highly available virtualization, downtime for a failed hardware device might be measured in minutes if backups are handled well.  This means that high availability has to justify its cost in potentially eliminating just a few minutes of unplanned downtime per year minus any additional risks assumed by the added system complexity.  Even in the biggest organizations this is seldom justified on any large scale and in a more moderately sized company it is uncommon altogether.  But today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss simply because the marketing literature promoted the concept.
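
To put that trade-off in concrete terms, here is a minimal back-of-the-envelope sketch in Python.  Every figure in it (the HA premium, the downtime estimate, the hourly revenue impact) is a hypothetical placeholder rather than a number from any real deployment, so substitute your own quotes and estimates:

    # Hypothetical back-of-the-envelope check on whether system-level high
    # availability pays for itself.  All figures are placeholders.
    ha_premium = 25000              # extra cost of HA licensing, SAN, spare host, etc.
    downtime_hours_avoided = 0.5    # unplanned downtime per year that HA would prevent
    loss_per_hour = 2000            # cost to the business of one hour of downtime
    years_in_service = 5            # expected life of the investment

    avoided_loss = downtime_hours_avoided * loss_per_hour * years_in_service
    print(f"HA premium:   ${ha_premium:,}")
    print(f"Loss avoided: ${avoided_loss:,.0f} over {years_in_service} years")
    print("Justified" if avoided_loss > ha_premium else "Not justified on these numbers")

On numbers like these the premium dwarfs the avoided loss, which is exactly the pattern described above.  The point is not the specific figures but that the comparison should be made explicitly before buying.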

Like anything, virtualization and all of the associated possibilities that it brings to the table need to be evaluated individually in the context of the organization considering them.  If the individual feature does not make sense for your business do not assume that you have to purchase or implement that feature.  Many organizations virtualize but use only a few, if any, of these “assumed” features.  Don’t look at virtualization as a black box, look at the parts and consider them like you would consider any other technology project.

What often happens is a snowball effect where one feature, likely high availability, is assumed to be necessary without the proper business assessment being performed.  Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost.  Even if high availability features are not ultimately purchased, the decision to use SAN might already be made and never be revisited after changes to the plan are made.  It is very common, in my experience, to find projects of this nature with sometimes more than fifty percent of the total expenditure on the project being spent on products that the purchaser is unable to even describe the reason for having purchased.

This concept does not stop at virtualization.  Extend it to everything that you do.  Keep IT in the perspective of the business and don’t assume that going with one technology automatically means that you must adopt other technologies that are popularly associated with it.

The Dangers of Blade Servers in SMB – Debunking the Blade Server Myth

Blade servers are the hottest trend in datacenters today.  I am sure that you have heard the hype: lower cost and better efficiency.  To be sure, blades have come a long way in the last few years and are looking better than ever, but putting blades into your own business is something that should be considered very carefully.  There are many hidden dangers inherent to the blade concept that are often overlooked, and these dangers can come back to haunt you long after you have committed to the idea of blades.

Before we look into blades themselves I want to discuss what blades are.  According to Wikipedia: “Blade servers are stripped down computer servers with a modular design optimized to minimize the use of physical space. Whereas a standard rackmount server can function with (at least) a power cord and network cable, blade servers have many components removed to save space, minimize power consumption and other considerations, while still having all the functional components to be considered a computer.”  It is important to define blade servers because it has become common, especially in the used server market, for resellers to use the term blade to refer to standard, 1U and 2U rackmount servers in the hopes of confusing customers new to the blade market.  Blades are a specific hardware category that requires the use of an enclosure and are not simply “small” servers.  Blade servers use shared components in the enclosure, such as power supplies and remote management consoles, reducing the components necessary in each individual blade server.

The first danger of blades is cost.  Blade enclosures are generally very expensive even though blades themselves are often less expensive than their rackmount counterparts.  In a quick price comparison of a large blade vendor’s offerings the enclosure was approximately $5,000 and could hold a maximum of eight blade servers.  Each blade was roughly $500 less expensive than the matching vendor’s rackmount server of the same or similar specs.  This means that a fully populated blade enclosure, at list price, from this vendor would cost $1,000 more than the equivalent computational power in traditional form factors.  And every blade slot not populated would be an additional $500 deficit.
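
As a sanity check of that arithmetic, here is a small illustrative calculation using the rough list prices quoted above (approximations from one vendor at one point in time, not current pricing):

    # Blade vs. rackmount cost comparison using the rough list prices above:
    # a $5,000 enclosure holding eight blades, each blade about $500 cheaper
    # than a comparable rackmount server.
    enclosure_cost = 5000
    savings_per_blade = 500
    slots = 8

    def blade_premium(blades_purchased):
        """Extra cost of the blade approach versus equivalent rackmount servers."""
        return enclosure_cost - savings_per_blade * blades_purchased

    for n in range(1, slots + 1):
        print(f"{n} blade(s): blade approach costs ${blade_premium(n):,} more")

Fully populated, the blade approach still costs $1,000 more than the equivalent rackmount servers, and every empty slot widens that gap by another $500 – which is why the density assumption matters so much.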

The cost of blades is not just a total cost factor.  Blade enclosures, often holding eight to sixteen blade servers, need to be purchased up front.  If you need enough servers to match the capacity of an enclosure this is not a factor, but if you are looking to buy only a single server now you may be making a significant investment in proposed future server farm growth.  This means increased risk as well as an investment against the time-value of your dollar.

Hardware cost is always a difficult number to nail down.  Stated prices from the vendors rarely reflect reality and, as most companies know, dramatically lower prices are available if you demand them.  I have known companies to get their blade enclosures for free, for example, which completely changes the cost equation of blades.  But in the same breath one must remember that if a blade enclosure is available for free, serious discounts on traditional rackmount servers are likely also available.  So the list prices are often a good judge of relative prices even if not absolute ones.  Your mileage will vary – so due diligence is necessary to create a cost analysis appropriate for your given situation and the deal that you receive from your vendor.

The second danger of blades is technological obsolescence.  Unlike traditional racks, which have gone practically unchanged for many decades, blade enclosures are new and relatively dynamic.  Several generations of blade enclosures have come and gone since their inception in 2001 and each subsequent generation, thus far, has required shops to replace their enclosures to support new blade servers.  This is a high risk if you are not buying servers often enough and in large enough quantity to justify the technology churn in the enclosures.  This rate of change is slowing as the technologies mature, but the risk remains.  When doing a proper cost analysis of blade servers this rate of change needs to be factored in.

The third danger is vendor lock-in.  Traditional rack technologies are vendor agnostic.  Most shops will mix and match not only servers but batteries, routers, switches, monitoring equipment and other gear into their racks.  Blades are vendor specific.  For a large enterprise this is of little or no concern.  In a small shop with a limited number of servers it can be crucial not to give up the ability to use different vendors and technologies.  This can be a limitation on technology but also is a limitation on leverage to obtain premium vendor pricing discounts in the future.

Take, as an example, a shop that wishes to run HP Integrity blades with their Intel Itanium processors today.  They invest in blade enclosures and begin using them.  In three years they purchase software that runs on Sun UltraSPARC or IBM Power processors.  In order to use blades, each of these technologies will require its own brand of blade enclosure, significantly increasing the risk in a small shop that enclosures will not be able to be fully populated.  There is much more flexibility in technologies using traditional rackmount servers because each vendor generally supplies one set of RISC or EPIC-based systems and one set of AMD / Intel-based commodity systems.  If you want more than that, blades will become quite difficult for a small shop to manage.  I have worked firsthand with shops that use multiple technologies like this on a regular basis, making blades a most difficult choice today before even considering potential future platform decisions.  The use of Apple Mac OS X must also be mentioned, as Apple does not provide blade servers, so any deployment of OS X-based servers cannot be integrated into a blade enclosure.

The fourth danger is the shared backplane and other key components.  A blade enclosure, while generally built with massive redundancy and truly amazing design, still represents a single point of failure that must be considered.  If your enclosure fails you do not lose just a single server but as many as sixteen physical server platforms.  With rackmount servers you can add redundancy simply by adding an additional server – typically one matching server for each server you need.  With blades you have to have redundant enclosures to achieve the same level of reliability.  Again, for a large enterprise this is trivial and obvious.  For a small business the need to suddenly own dual enclosures for full redundancy will often result in simply forgoing that level of protection and increasing risk.

The fifth danger is the cost of lost flexibility.  Small IT shops may not often move their equipment around, but the option is generally there.  If a small business owns three servers and replaces one with a shiny, new unit, the old server can almost always be redeployed to another role elsewhere in the company – perhaps in a branch office.  With blades the old blades can only be redeployed in a location that has a blade enclosure matching the one from which the blade was pulled.  This is a cost of lost opportunity late in the server lifecycle and one that is often completely ignored in cost analyses of blades.  If there is not a spot ready for an older server, it is far more likely to be discarded in the blade model rather than redeployed, unless the company is large enough to have many enclosures of the same generation with space available to accept an older server.

The sixth danger of blades is the high cost of storage.  Storage is a subject all its own these days with SAN, NAS and DAS as possible options.  Shops of all sizes are moving to SAN and NAS quickly and with enough network storage in place this can alleviate much of the storage risk associated with blade servers.  Many shops, however, use circular reasoning and justify SAN because of the blades and blades because of the SAN.  Taking a holistic view of the server and storage picture is crucial.

A typical blade server can house only one or two 2.5″ SAS or SATA drives.  This is far less than a typical rackmount server would provide as potential storage space.  It is common to find eight to sixteen drive bays available in popular 2U rackmount configurations – sometimes using 3.5″ drives rather than 2.5″ drives.  One popular and very cost effective 2U server can hold 28TB of low-cost storage on fourteen spindles.  You cannot put this type of storage into a blade enclosure.  Because local drive space is simply not available, blade server owners are forced to use minimal direct-attached storage and rely on SAN or NAS instead, even when DAS would otherwise provide better performance and lower cost for that particular application.

To bridge this gap most blade vendors provide storage blades – blade servers that act as tiny, low-volume SAN devices and fit directly into the blade enclosure.  These units are generally of rather low capacity, often just six drives, and rather expensive compared to other means of providing storage.  Additionally, they consume a critical enclosure bay, removing one of the slots that the enclosure needs in order to provide server density.  So an eight-bay blade enclosure with two small storage blades would only be able to house six blade servers.

Obviously buying a blade enclosure does not mean that you have given up the ability to also use rackmount servers when appropriate.  You can continue to mix and match.  But to obtain the numbers necessary for a small business to cost justify the blade infrastructure often requires that purchases lean heavily towards blade servers to fill the enclosure(s) as densely as possible.

Much of the danger of blades is in the potential for lost opportunities.  Small businesses especially function best and compete most strongly against larger businesses by being flexible and agile.  Blades are the opposite of agile.  They require large, upfront infrastructure planning that includes technological, physical and geographic lock-in.  Even if a business plans ahead and sees no obstacles to adoption, this does not mean that opportunities will not be missed in the future because of a lack of flexibility to adapt effectively to changing business conditions.  Once a blade enclosure is in place, purchasing decisions are almost certainly made based on the investment already made and no longer simply on what is best for the company.  This doesn’t have to happen, but it almost certainly will: protecting the existing investment is the natural reaction to have.

All of this being said, blade servers can still make a lot of sense for certain businesses.  Blade servers generally consume less power than their non-blade counterparts due to their shared system components.  Be sure to consider the power consumption differences in the storage area, however, as blades push power consumption from the server to the SAN and can often be misleading as to where the power is going.  A savings in one place is only valuable if the cost does not appear again in another.

Blades are easy to transport and relocate when enclosures are available.  This can be a bigger factor than is obvious especially when it means that there are several additional staff members capable of relocating a server.  Almost anyone can lift and move a blade server.

When combined with a very aggressive SAN infrastructure, blades can be very beneficial in a virtualization environment.  This combination gives the maximum cost and flexibility advantage to businesses large enough to leverage it.  The SMB market, though, mostly consists of businesses for whom this combination would be cost prohibitive, and this solution will continue to be relegated to businesses at the larger end of the SMB spectrum.  Virtualization will, in fact, reduce the number of servers needed by most businesses, making it even harder to justify blades for smaller businesses where previously a dozen or more servers would have been needed but today only two to four are needed to not only meet but surpass earlier service levels.

If you can support adequate densities or get really aggressive vendor incentives, then blades can be quite cost effective once you weigh the cost against your risks.  Blades are always a little more risky, but if buying them reduces your cost significantly then they may very well be worth the loss in flexibility.  The cost of the enclosure is a key factor here.  If your enclosure is free then suddenly the cost savings of a blade system can be enormous – especially if a large number of blades are purchased, providing really good enclosure density.

Blade servers are a great technology and show a lot of promise for the future.  As enclosure lifecycles slow, new technologies emerge, costs come down, volumes increase and, hopefully, vendor-neutral standards emerge, I am confident that blades will become the de facto standard in even the smallest datacenters.  I see this taking at least another market cycle to really occur, though.  Most likely, in my opinion, it will be another five to seven years before the form factor truly displaces the rackmount server in general utility.