RAID Revisited

Back when I was a novice service tech and barely knew anything about system administration, one of the few topics that we were always expected to know cold was RAID: Redundant Array of Inexpensive Disks.  It was the answer to all of our storage woes.  With RAID we could scale our filesystems larger, get better throughput and even add redundancy, allowing us to survive the loss of a disk, which, especially in those days, happened pretty regularly.  With the rise of NAS and SAN storage appliances, the skill of getting down to the physical storage level and tweaking it to meet the needs of the system in question is rapidly disappearing.  This is not a good thing.  Just because we are offloading storage to external devices does not change the fact that we need to fundamentally understand our storage and configure it to meet the specific needs of our systems.

A misconception that seems to have entered the field over the last five to ten years is the belief that RAID somehow represents a system backup.  It does not.  RAID is a form of fault tolerance.  Backup and fault tolerance are very different concepts.  Backup is designed to allow you to recover after a disaster has occurred.  Fault tolerance is designed to lessen the chance of disaster in the first place.  Think of fault tolerance as building a fence at the top of a cliff and backup as building a hospital at the bottom of it.  You never really want to be in a situation without both a fence and a hospital, but they are definitely different things.

Once we are implementing RAID for our drives, whether locally attached or on a remote appliance like a SAN, we have four key RAID solutions from which to choose today for business: RAID 1 (mirroring), RAID 5 (striping with parity), RAID 6 (striping with double parity) and RAID 10 (mirroring with striping).  There are others, like RAID 0, that should only be used in rare circumstances when you really understand your drive subsystem needs.  RAID 50 and 51 are used as well, but far less commonly, and are not nearly as effective.  Ten years ago RAID 1 and RAID 5 were common, but today we have more options.

Let’s step through the options and discuss some basic numbers.  In our examples we will use n to represent the number of drives in our array and s to represent the size of any individual drive.  Using these we can express the usable storage space of an array, making capacity comparisons easy.

RAID 1: In this RAID type drives are mirrored.  You have two drives and they do everything together at the same time, hence “mirroring”.  Mirroring is extremely stable because the process is so simple, but it requires you to purchase twice as many drives as you would need if you were not using RAID at all, since your second drive is dedicated to redundancy.  The benefit is the assurance that every bit you write to disk is written twice for your protection.  So with RAID 1 our capacity is calculated to be (n*s/2).  RAID 1 suffers from providing minimal performance gains over non-RAID drives.  Write speeds are equivalent to a non-RAID system, while read speeds are almost twice as fast in most situations since during read operations the drives can be accessed in parallel to increase throughput.  RAID 1 is limited to a set of two drives.

RAID 5: Striping with Single Parity.  In this RAID type data is written in a complex stripe across all drives in the array with a distributed parity block that exists across all of the drives.  By doing this RAID 5 is able to use an arbitrarily sized array of three or more disks and only loses the storage capacity equivalent of a single disk to parity, although the parity is distributed and does not exist solely on any one physical disk.  RAID 5 is often used because of its cost effectiveness, as very little storage capacity is lost in large arrays.  Unlike mirroring, striping with parity requires that a calculation be performed for each write stripe across the disks, and this creates some overhead.  Therefore the throughput is not always an obvious calculation and depends heavily upon the computational power of the system doing the parity calculation.  Calculating RAID 5 capacity is quite easy: it is simply ((n-1)*s).  A RAID 5 array can survive the loss of any single disk in the array.
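To make the parity concept concrete, here is a minimal Python sketch of the idea, purely illustrative and not how a real controller is implemented: a single XOR parity block per stripe lets the data from any one lost disk be rebuilt from the surviving members.

    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together, byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # One stripe across a three-data-disk RAID 5 set plus its parity block.
    # (A real controller rotates the parity block across the disks; a single
    # stripe is shown here for simplicity.)
    data = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor_blocks(data)

    # Simulate losing the second disk: XOR the survivors with the parity
    # block to reconstruct the missing data exactly.
    survivors = [data[0], data[2], parity]
    rebuilt = xor_blocks(survivors)
    assert rebuilt == data[1]
    print("rebuilt block:", rebuilt)

The same XOR trick is what makes the rebuild possible, and it is also the calculation that must run on every write, which is where the overhead mentioned above comes from.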

RAID 6: Striping with Double Parity.  RAID 6 is practically identical to RAID 5 but uses two parity blocks per stripe rather than one to allow for additional protection against disk failure.  RAID 6 is a newer member of the RAID family, having been added several years after the other levels had become standardized.  RAID 6 is special in that it allows for the failure of any two drives within an array without suffering data loss.  To accommodate the additional level of redundancy, a RAID 6 array loses the storage capacity equivalent of two drives in the array and requires a minimum of four drives.  We can calculate the capacity of a RAID 6 array with ((n-2)*s).

RAID 10: Mirroring plus Striping.  Technically RAID 10 is a hybrid RAID type encompassing a set of RAID 1 mirrors existing in a non-parity stripe (RAID 0).  Many vendors use the term RAID 10 (or RAID 1+0) when speaking of only two drives in an array, but technically that is RAID 1, as striping cannot occur until there are a minimum of four drives in the array.  With RAID 10 drives must be added in pairs, so only an even number of drives can exist in an array.  RAID 10 can survive the loss of up to half of the total set of drives, but at most one from each pair.  RAID 10 does not involve a parity calculation, giving it a performance advantage over RAID 5 or RAID 6 and requiring less computational power to drive the array.  RAID 10 delivers the greatest read performance of any common RAID type, as all drives in the array can be used simultaneously in read operations, although its write performance is much lower.  RAID 10’s capacity calculation is identical to that of RAID 1: (n*s/2).
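Since all four levels use the same n and s notation, the capacity formulas are easy to collect into one small helper.  The Python below is a simple sketch containing nothing beyond the arithmetic already given above.

    def usable_capacity(level, n, s):
        """Usable array capacity for RAID level given n drives of size s."""
        if level == 1:
            assert n == 2, "RAID 1 is limited to a two-drive set"
            return n * s / 2                 # one drive's worth of space
        if level == 5:
            assert n >= 3
            return (n - 1) * s               # lose one drive's worth to parity
        if level == 6:
            assert n >= 4
            return (n - 2) * s               # lose two drives' worth to parity
        if level == 10:
            assert n >= 4 and n % 2 == 0, "RAID 10 needs an even number of drives"
            return n * s / 2                 # half the raw space, like RAID 1
        raise ValueError("unsupported RAID level")

    # Example: eight 2 TB drives
    for level in (5, 6, 10):
        print(f"RAID {level}: {usable_capacity(level, 8, 2)} TB usable")

For eight 2 TB drives this gives 14 TB for RAID 5, 12 TB for RAID 6 and 8 TB for RAID 10, which is exactly the capacity trade-off that drives so many purchasing decisions.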

In today’s enterprise it is rare for an IT department to have a serious need to consider any drive configuration outside of the four mentioned here, regardless of whether software or hardware RAID is being implemented.  Traditionally the largest concern in a RAID array decision was usable capacity, because drives were expensive and small.  Today drives are so large that storage capacity is rarely an issue, at least not like it was just a few years ago, and costs have fallen such that purchasing the additional drives necessary for better redundancy is generally of minor concern.  When capacity is at a premium RAID 5 is a popular choice because it loses the least storage capacity compared to other array types, and in large arrays the storage loss is nominal.

Today we generally have other concerns, primarily data safety and performance.  Spending a little extra to ensure data protection should be an obvious choice.  RAID 5 suffers from being able to lose only a single drive.  In an array of just three members this is only slightly more dangerous than the protection offered by RAID 1: we can survive the loss of any one of three drives, which is not too scary compared to losing either of two drives.  But what about a large array, say sixteen drives?  Being able to safely lose only one of sixteen drives should make us question our reliability a little more thoroughly.

This is where RAID 6 steps in to fill the gap.  RAID 6, when used in a large array, introduces a very small loss of storage capacity and performance while providing the assurance of being able to lose any two drives.  Proponents of the striping-with-parity camp will often quote these numbers to assuage management that RAID 5/6 can provide adequate “bang for the buck” in storage subsystems, but there are other factors at play.
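To put some rough numbers behind this reasoning, the sketch below uses a deliberately simplified model: every drive is assumed to fail independently with the same probability over some window, and rebuild times, controller behavior and UREs are ignored.  The 3% per-drive figure is an arbitrary assumption for illustration, not a measured failure rate.

    from math import comb

    def p_array_loss_parity(n, p, parity_drives):
        """Probability of data loss for an n-drive parity array (RAID 5 has one
        parity drive's worth of protection, RAID 6 has two) when each drive
        fails independently with probability p.  Loss occurs when more drives
        fail than the parity can cover."""
        survive = sum(comb(n, k) * p**k * (1 - p)**(n - k)
                      for k in range(parity_drives + 1))
        return 1 - survive

    def p_array_loss_raid10(n, p):
        """RAID 10 loses data only when both drives of some mirrored pair fail."""
        return 1 - (1 - p**2) ** (n // 2)

    p = 0.03  # assumed per-drive failure probability over the window of interest
    print("RAID 5,  3 drives:", p_array_loss_parity(3, p, 1))
    print("RAID 5, 16 drives:", p_array_loss_parity(16, p, 1))
    print("RAID 6, 16 drives:", p_array_loss_parity(16, p, 2))
    print("RAID 10, 16 drives:", p_array_loss_raid10(16, p))

Even this crude model shows the risk of a sixteen-drive RAID 5 dwarfing that of a small array, while RAID 6 and RAID 10 pull it back down, which is exactly why the drive-count metric alone feels so reassuring on paper.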

Almost entirely overlooked in discussions of RAID reliability, an all too seldom discussed topic as it is, is the question of parity computation reliability.  With RAID 1 or RAID 10 there is no “calculation” done to create a stripe with parity; data is simply written in a stable manner.  When a drive fails, its partner picks up the load and performance is slightly degraded until the partner is replaced.  There is no rebuilding process that impacts existing drive members.  Not so with parity stripes.

RAID arrays with parity have operations that involve calculating what is and what should be on the drives.  While this calculation is very simple, it provides an opportunity for things to go wrong.  An array controller that fails with RAID 1 or RAID 10 could, in theory, write bad data over the contents of the drives, but there is no process by which the controller makes drive changes on its own, so this is extremely unlikely to ever occur as there is never a “rebuild” process except in creating a mirror.

When arrays with parity perform a rebuild operation they step through the entire contents of the array and write missing data back to the replaced drive.  In and of itself this is relatively simple and should be no cause for worry.  What I and others have seen first hand is a slightly different scenario involving disks that have lost connectivity due to loose connectors to the array.  Drives can “shake” loose over time as they sit in a server, especially after several years of service in an always-on system.

What can happen, in extreme scenarios, is that good data on drives can be overwritten by bad parity data when an array controller believes that one or more drives have failed in succession and been brought back online for rebuild.  In this case the drives themselves have not failed and there is no data loss; in theory, all that is required is that the drives be reseated.  On hot swap systems the management of drive rebuilding is often automatic, triggered by the removal and replacement of a failed drive.  So this process of losing and replacing a drive may occur without any human intervention, and a rebuilding process can begin.  During this process the drive system is at risk, and should this same event occur again the drive array may, based upon the status of the drives, begin striping bad data across the drives, overwriting the good filesystem.  It is one of the most depressing sights for a server administrator: a system with no failed drives losing an entire array to an unnecessary rebuild operation.

In theory this type of situation should not occur, and safeguards are in place to protect against it.  In practice, however, a low level drive controller’s determination of the current and previous status of a drive, and of the quality of the data residing upon it, is not as simple as it may seem, and mistakes can occur.  While this situation is unlikely, it does happen, and it adds a nearly impossible to calculate risk to RAID 5 and RAID 6 systems.  We must consider the risk of parity failure in addition to the traditional risk calculated from the number of drive losses that an array can survive out of a pool.  As drives become more reliable, the significance of the parity failure risk becomes greater.

Additionally, RAID 5 and RAID 6 introduce system overhead due to the parity calculation, which is often handled by dedicated RAID hardware.  This calculation introduces latency into the drive subsystem that varies dramatically by implementation, both in hardware and in software, making it impossible to state performance numbers for RAID levels against one another as each implementation will be unique.

Possibly the biggest problem with RAID choices today is that the ease with which metrics for storage efficiency and drive loss survivability can be obtained masks the big picture of reliability and performance, as those statistics are almost entirely unavailable.  One of the dangers of metrics is that people will focus upon factors that can be easily measured and ignore those that cannot be easily measured, regardless of their potential for impact.

While all modern RAID levels have their place it is critical that they be considered within context and with an understanding as to the entire scope of the risks.  We should work hard to shift our industry from a default of RAID 5 to a default of RAID 10.  Drives are cheap and data loss is expensive.

[Edit: In the years since this was initially written, the rise of URE (Unrecoverable Read Error) risks during rebuild operations has shifted the primary risk for parity arrays from those listed here to URE-related risks.]

Virtualization for Small Business

In the last year or two we have seen virtualization go from a poorly understood concept to a much-hyped industry buzzword bandied about constantly in every conversation involving technology.  There is no doubt that virtualization is playing an important role in today’s IT landscape, but the question we are asking is whether virtualization applies to the small and medium business markets at this time.

The quick answer to this question is: absolutely.  Unlike many technologies that are of questionable value or that provide a great degree of technological complication, risk and expense that may not be appropriate for a small business, virtualization is a mature technology (IBM CP/CMS circa 1968) that is well understood and provides a layer of hardware abstraction that can benefit an IT organization of any size and may possibly apply even more to the small business IT department than it applies in the enterprise space.

Before looking at how virtualization can benefit the SMB market I would like to provide some definitions to be sure that we are discussing the same set of technologies.  In today’s IT landscape it has become popular to relabel common technologies as “virtualization” for marketing reasons and this has unnecessarily complicated the issue.

True virtualization refers to the virtualizing of entire operating systems.  Wikipedia uses the term platform virtualization and I will as well.  Technically we could refer to this as “System Virtualization” or “Operating System Virtualization” to distinguish it from loosely-related technologies that may arguably have the right to also use the same general term.

The basic concept of platform virtualization involves running an abstraction layer on a computer that emulates the hardware itself. Through the combination of abstraction and emulation we get what is known as a virtual machine.  This virtual machine is a completely working “computer” onto which we can install an operating system just as if we were installing onto the bare metal of a dedicated machine.  Instead of being limited to only installing one operating system image per computer we can now, with platform virtualization, install many copies of the same or disparate operating systems onto the same piece of hardware.  A powerful concept indeed.

The utility of this technology raises an obvious question: “If platform virtualization has been available since 1968, why is it only becoming popular and important recently?”  This is an excellent question, and the answer is actually quite simple.

Traditional platform virtualization technologies require a lot of support within the computer hardware itself.  IBM has been building this type of support into its mainframe systems for decades, and large UNIX vendors like Sun have been providing it in their high-end UNIX servers for years as well.  These systems are highly specialized and typically run their own custom operating systems.  Generally only large IT shops could afford servers of this magnitude, and small shops did not have ready access to these technologies.  For IT professionals who have worked with this type of equipment in the past, the idea of virtualization was often so ingrained into the platform that it was discussed very little; it was seen as simply an aspect of these high-end server systems and not necessarily a concept in its own right.

What has changed recently is the move to bring platform virtualization to the commodity hardware space occupied by the AMD and Intel (x86_64) processors used by the majority of small and medium businesses as well as larger enterprises.  The first move was to use software alone to make this possible on the x86 processor family.  The early players in this space were VMWare and Microsoft with products like VMWare Workstation, Virtual PC, VMWare GSX and MS Virtual Server.  These products showed that no special hardware was needed to effectively virtualize whole operating systems and began to allow companies of all sizes to experiment with the concept of virtualizing their existing commodity platforms.  This form of virtualization is known as “host-based virtualization” as it requires a host operating system on which the virtualization environment will run.

Following on the tail of these software-only solutions, the big processor vendors in the commodity space, AMD and Intel, began building virtualization capabilities into the processor itself, allowing for more flexibility, security and performance and bringing the commodity x64 hardware market much more in line with the traditional offerings from the processor families common in big iron servers.  As a result the virtualization market has really exploded, both from the vendor side as more and more vendors offer virtualization related products, and from the customer side as virtualization becomes better understood and its use becomes more commonplace.  With the latest rounds of purchasing, most small IT shops have acquired servers, and often desktops, that support hardware-level virtualization even without intending to prepare for a move to virtualization, which often tips the equation in that direction naturally.  This hardware-supported virtualization model is called “hypervisor-based virtualization” as all operating systems run on top of a tiny kernel called the hypervisor and no traditional operating system runs directly on the hardware.

Now that we have a good idea of what platform virtualization is and why it is now available to us as an option we will look at why platform virtualization may be beneficial to us in the small and medium business space.

There are two things that we can readily virtualize (without getting esoteric or starting to virtualize our routing and switching infrastructure) – servers and desktops.  By far the easier and more obvious choice is the virtualization of servers.

Virtualizing the server infrastructure, or part of it, is the first place that most IT shops look today when considering virtualization.  Most companies find that the majority of their servers are extremely underutilized, with excess CPU, memory and drive capacity sitting idle while additional workloads fail to find a home due to budget constraints, space or implementation time.  Virtualization to the rescue.

Through virtualization we have the opportunity to run several virtual servers on a single piece of server hardware.  We could virtualize just a single server system, but this would not gain us any utilization advantages, or we could, in theory, virtualize hundreds of servers if our hardware could handle it.  Typically, small businesses can virtualize several typical server roles onto a single physical server.  Virtual machine density is, of course, determined by load characteristics as well as by the available hardware.  Virtualization uses a lot of memory and storage, obviously, and so careful planning must be done.  Memory and storage are relatively inexpensive today and are certainly vastly less expensive than purchasing additional server hardware and paying to support it.  It is not uncommon for a small business to easily virtualize half a dozen servers on a single piece of hardware at a minimum, and a score or more is not an unreasonable number to hope to achieve.

Many small shops instantly jump to the conclusion that virtualization requires expensive SAN storage.  This is not the case at all.  Virtualization provides a range of benefits that shops can take advantage of immediately, even without a SAN storage infrastructure.  There are, of course, some significant advantages available by using a SAN in conjunction with virtualization and high availability or load balancing technologies.  Often, though, these high availability and load balancing capabilities are additional features that did not exist prior to virtualization; they are not necessary in order for a shop to see significant benefits from virtualization but do present an opportunity for future improvement when and if budgets allow.

Small businesses will see many advantages from virtualization immediately even doing so on a small scale.  Some of these benefits are obvious and some are less so.

Our first advantage is that of hardware cost, as I mentioned above.  By eliminating the need to purchase and support expensive server hardware on a per operating system basis, we can now deploy more systems at a lower cost per system.  In many cases this is not only a cost savings but also frees the funds needed to move from more spartan servers to fewer but more enterprise class offerings with important performance, stability and support features such as integrated power management and KVM over IP from an out of band management console.

Our second advantage is the cost savings from reducing power consumption.  It is very trendy, and for good reason, for companies to be concerned with how “green” they are today and IT virtualization plays a key role in the greenification of the department.  The addition of virtual machines onto a single physical server typically represents a trivial, if even measurable, increase in power draw.  Adding additional physical servers, of course, adds a significant amount of power consumption even for systems that are lightly used or used only occasionally.

Our third advantage is in reducing backup complexity.  Virtualized servers can be backed up using completely traditional methods, such as filesystem level backups from within the operating system itself, as made popular by traditional backup systems like NetBackup, BackupExec, Amanda, Bacula and others.  So if we desire to stick with current backup strategies we can, without any additional complexity, but if we want to move to image-based backups we can do so quite easily.  Using system images as backups is not necessarily new or unique to virtualization, but virtualization makes this far more obvious and accessible for many users.  In fact, with virtualization, system images (a copy of the entire system, not just of its individual files) can be taken using nothing but the regular filesystem; no special software is needed.  A complete system backup can be taken by simply shutting down the virtual server, making a copy of its virtual filesystem (often a single, large file) and starting the system up again.  Restoring a system can be as simple as copying an image file from a backup storage device to the virtual server and starting it back up.  Restore done.  System back online.  This is as simple as it gets.
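To show just how simple an image-level backup can be, here is a rough Python sketch.  It assumes a Linux KVM/libvirt host where the virsh command is available and where the guest uses a single file-backed disk image; the domain name and paths in the example are placeholders, and your platform’s tooling will differ in the details.

    import shutil
    import subprocess
    import time

    def image_backup(domain, disk_path, backup_path):
        """Cold, image-level backup of a virtual machine: shut the guest down,
        copy its disk image file, then start it back up."""
        subprocess.run(["virsh", "shutdown", domain], check=True)  # ask the guest to power off
        # Wait for the guest to actually reach the "shut off" state.
        while "shut off" not in subprocess.run(
                ["virsh", "domstate", domain],
                capture_output=True, text=True).stdout:
            time.sleep(5)
        shutil.copy2(disk_path, backup_path)  # the backup is just one big file copy
        subprocess.run(["virsh", "start", domain], check=True)

    # Placeholder example values:
    # image_backup("fileserver01",
    #              "/var/lib/libvirt/images/fileserver01.qcow2",
    #              "/mnt/backup/fileserver01.qcow2")

Restoring is the same operation in reverse: copy the saved image back into place and start the guest.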

Our fourth advantage is in the ease of provisioning.  Building a new server operating system directly on hardware is a time consuming venture for most shops.  This is especially true if there are any surprises with a new hardware type that has not been used previously; there may be missing drivers or special operating system settings and parameters needed to support the hardware.  With virtualization the target platform is always identical, removing many surprises from this process and making it both faster and more reliable.  In many cases deployment is also faster simply because the process of preparing the base machine is so much faster.  To kick off a manual install of Linux on a traditional physical server I must purchase said server, install it into the rack, connect power and networking, provision networking, turn on the server, update firmware, configure the out of band management system, burn in the hardware, insert the installation media and begin installing.  From some virtualization environments I can simply kick off the entire process with a single command at the command line.  Deploying a new server could go from hours or days to minutes.  This does not even begin to address the simplicity of cloning existing systems within a virtual environment.
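As a sketch of what “a single command” can look like, here is an example on a Linux KVM host using virt-install.  The install tree URL, kickstart location and sizing values are placeholders of my own, and other hypervisors have their own equivalents.

    import subprocess

    def provision_vm(name, memory_mib=2048, vcpus=2, disk_gb=20):
        """Kick off an unattended Linux install into a brand new virtual machine.
        Assumes a libvirt/KVM host with virt-install available; the install tree
        and kickstart URLs below are placeholders."""
        subprocess.run([
            "virt-install",
            "--name", name,
            "--memory", str(memory_mib),
            "--vcpus", str(vcpus),
            "--disk", f"size={disk_gb}",
            "--location", "http://mirror.example.com/centos/",   # hypothetical install tree
            "--extra-args", "inst.ks=http://mirror.example.com/ks.cfg console=ttyS0",
            "--graphics", "none",
        ], check=True)

    # provision_vm("web01")

No racking, cabling or firmware work is needed; the entire physical preparation stage simply disappears.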

A fifth “soft” advantage of virtualization is that there is quite often a significant software cost savings when virtualizing.  Some vendors, like Novell with SUSE Linux, allow you to virtualize as many servers as you want on a single physical machine while paying for only a single machine license.  Red Hat gives you multiple installs, though not unlimited like Novell.  Microsoft has a range of virtualization pricing options depending on your needs, including an unlimited per processor deployment license.  In a worst case scenario you will need to pay for additional operating system and other software licenses exactly as if you were running the same machines physically, but in almost all cases there is more pricing flexibility and often dramatic cost reductions for multiple virtualized hosts.

A sixth benefit is in the ability to “roll back” an entire operating system.  Most virtualization platforms support the concept of taking a system snapshot, making changes to the active system and then restoring the system back to its original state when done.  This is great for software testing and especially for the testing of operating system patches or any critical update process where something going wrong could leave your system unresponsive and potentially unrepairable.  The ability to go “back in time” to the latest snapshot, taken seconds before the patch application or risky configuration change, can be a lifesaver.  Of course an image backup could be used in the same way, but snapshots allow for even more rapid recovery due to their “proximity” to the original filesystem.
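A sketch of what that workflow can look like in practice, again assuming a libvirt/KVM host (the virsh snapshot commands shown require a disk format, such as qcow2, that supports snapshots); the patch step is a stand-in for whatever risky change is actually being made.

    import subprocess

    def apply_risky_change(domain):
        """Stand-in for the risky operation: patching, a config change, etc."""
        pass

    def change_with_rollback(domain, snapshot_name="pre-patch"):
        """Snapshot the guest, attempt the change, and revert to the snapshot
        if anything goes wrong."""
        subprocess.run(["virsh", "snapshot-create-as", domain, snapshot_name], check=True)
        try:
            apply_risky_change(domain)
        except Exception:
            # Roll the entire operating system back to the moment before the change.
            subprocess.run(["virsh", "snapshot-revert", domain, snapshot_name], check=True)
            raise

    # change_with_rollback("erpserver01")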

All of these aforementioned benefits come with a move to virtualization and do not require additional cost for software or hardware.  If our budget allows and the need exists, there is also the option of adding one or more additional virtualization servers and having these servers share a SAN for storage of virtual machine images.  At a minimum this will roughly triple the hardware cost but provides double the processing power and some really amazing features.  The main feature that really makes this solution impressive is the concept of live migration.  Live migration is when a virtual operating system can be moved, while running, from one physical virtualization server to another.  This can be done for purposes of load balancing, disaster testing or to survive a disaster itself.  With some live migration solutions, generally sold as high availability, this migration can happen so quickly that it provides effectively “zero downtime” and even heavily used web servers can survive the loss of a physical server without customers ever knowing that a physical server has gone down.  The transition between virtual machine host nodes is completely transparent to the end users.

There is one major caveat.  Relying upon a SAN in a disaster recovery scenario, of course, creates another point of failure: the SAN itself.  So when planning to use a SAN to increase the reliability of your virtual machines, be sure to use a SAN that is at least as redundant as the servers themselves, or you may increase cost while accidentally lowering reliability and performance.

For the average small business it is quite likely to make sense to virtualize not only some of the server infrastructure but all or nearly all of it.  Virtualization’s advantages are so many, and its downsides so few and minor, that it is a rare workload in the small business space that would justify dedicated hardware servers.

Now that we have examined why server virtualization makes sense we can begin looking towards desktop virtualization.  Unlike physical desktops and virtualized servers, virtualized desktops often add a bit of complexity due to licensing requirements, especially with Microsoft Windows desktops.

Virtualizing desktops is also somewhat complicated because there are many modes for physically providing desktops.  Once we begin talking about virtualizing the desktop infrastructure we are actually talking about a range of solutions, because some device must always exist “on the desktop” providing a keyboard, mouse and monitor, which cannot be virtualized, while the desktop operating system itself runs elsewhere.  Even without virtualization this is done very commonly through desktop blades, rackmount desktops or terminal servers (and is sometimes marketed as virtualization when, in fact, it is simply remote access).  All of these solutions move the desktop into the datacenter and provide access to it either from thin client front ends or simply via software on remote users’ existing machines, such as users at home logging in to the office.

We will start with the concept of the terminal server as this is the most easily virtualized and the most straightforward.  Whether we are talking about virtualizing the server on which we run Microsoft Terminal Server (now known as Remote Desktop Services), Citrix XenApp or simply a standard Linux remote desktop terminal server we need do nothing more than install that server into a virtual environment rather than into a physical one.  It is really a question of server virtualization not of desktop virtualization – it is only perceived by the end user as being related to their desktops.

The other method of desktop virtualization, “true desktop virtualization” as I will refer to it, is to actually run desktop operating system images on a virtual server just as if they were normal desktops dedicated to a user.  This means virtualizing operating systems like Windows XP, Windows Vista or Windows 7, with each image being dedicated to a single user just as if it were a physical desktop.  We could, theoretically, do the same thing with Linux or some other flavor of Unix, but as those systems do not have per user licensing or desktop specific versions, and since they always run their desktops in a server mode, we would only be able to differentiate between a true virtualized desktop and a Unix-based terminal server by usage and not by any strict technological means, as they are one and the same.  Only Windows truly offers a dedicated desktop model that allows this to happen in this particular manner, without the concept of shared simultaneous access to a single image.

Due to licensing restrictions from Microsoft, Windows desktops must be installed one image per user even though technologies exist to make this technically unnecessary, but still there are benefits to this model.  The big benefits of virtualized desktops definitely go to companies whose employees roam, either internally or even externally.

Using virtualized desktops provides far more control to the company than does providing laptops.  Laptops can be stolen, lost or damaged.  Laptops wear out and need to be replaced regularly.  A virtual desktop that is made accessible from the outside of the company can be secured and protected in ways that a laptop cannot be.  Upgrades are much simpler and there is no concern of the virtual desktop becoming cut off from the corporate network and being unable to be supported by the IT staff.

Almost any worker who uses a computer in the office already has one at home for personal use, often a laptop as well, along with high speed Internet access.  Providing remote access to a virtual desktop at the office therefore potentially incurs no additional hardware expense for the company or staff while easing administrative burdens, lowering power consumption and increasing security.  Some workers will always require laptops, but many will not.

For workers still sitting at a traditional desk inside of the company’s offices there is still a need for something physically sitting on the desk that will connect the keyboard, mouse and monitor to the newly virtualized desktop.  This could be an old PC that was planned for retirement, a dedicated hardware thin client or even a laptop.  Internal staff can then move around the office or between offices and sit at any available desk with a thin client and log in to their own dedicated virtual desktop and work exactly as if they were at their own desk.  They can then go home and work from there as well if this is allowed.

Like virtualized servers, virtualized desktops can, if the need warrants, be easily backed up using either traditional means or by simply taking complete system images.  The flexibility is there to do whatever makes the most sense in your environment.

With the complexity and surprise cost of licensing, as well as the inability to completely do away with hardware on the desktop except for solely remote users, desktop virtualization is hardly the no-brainer that server virtualization is.  Desktop virtualization will require careful analysis on a case by case basis to determine whether it will meet the cost and usability needs of the individual organization.  Most organizations that choose to go this route will likely opt to only partially virtualize, using it only in cases where it makes the most sense, such as roaming users and remote workers, while keeping traditional desktops for those users who would seldom be in a position to take advantage of this technology.  Terminal server options will often be far more common than “true desktop virtualization”, which often makes sense only for power users, developers or to support certain applications that work poorly in a terminal server mode.

There is a final usage of virtualization that warrants discussion, if only because it is important to understand its use in the business environment.  This final type of virtualization is not used to put operating systems into the datacenter on server hardware but instead to run additional operating system images on traditional desktops and laptops.  This is a common scenario for people who need to test multiple operating systems for support or development.  It is not useful for production systems and is generally outside the scope of this discussion.  It is a highly useful application of the technology, but it is a niche scenario primarily useful for compatibility testing.

In all of this discussion there has been, somewhat conspicuously, no mention of Apple’s Mac OS X products.  There is a reason for this.  Apple does not license Mac OS X so that it may be virtualized on non-Apple hardware, and Apple does not have an enterprise-ready virtualization product for its own platform.  The only way to virtualize Mac OS X is to purchase full, additional licenses for each operating system instance, thereby eliminating most of the cost benefits of this approach, and to run it on a host-based virtualization product such as VMWare Fusion or Parallels, which are designed for use on top of a desktop and not as server-class products.  This is a major gap in the Mac OS X portfolio and one of the ways in which Apple continues to lag behind the rest of the market in capability and in its understanding of its business customers’ needs.  If Apple were to change its licensing strategy around virtualization, Mac OS X would prove to be an extremely popular and useful operating system to virtualize from both the server and desktop perspectives.

Virtualization is a great opportunity to lower cost and raise productivity while reducing risk for businesses of any size and with budgets as low as zero.  Many technologies promise important improvements for businesses but most create questionable value while incurring real cost.  Virtualization brings real, measurable value while often costing nothing and often reducing spending immediately.  For many businesses virtualization is the technology that they have always dreamed of and is, in fact, available today.

The SMB IT and Vendor Relationship Dilemma

When most people compare enterprise IT and small business IT they generally think about size and scale.  Enterprise environments are huge, while small business IT often consists of just one or a few IT professionals holding a company together.  The differences between these two classes of environments are much deeper than just size.  Thinking of the small and medium business market as being made up of small-scale enterprises is a great way to misunderstand what this market is all about.  There are fundamental behavioral differences between these organizational types, and I would put forth that this behavior is likely a far better determinant of what constitutes a small or medium business versus an enterprise business, from an IT perspective, than size is.

One of the places in which this difference in behavior is most visible is in vendor relationships.  In the enterprise space, as well as in large businesses, vendors act very much as a partner with the corporate IT department.  Often vendors will have dedicated representatives who spend some or possibly all of their time at the customer site and are available to answer questions, make contact with support, provide input and guidance – whatever is needed by the IT department as it relates to that vendor’s products and in some rare cases even outside of the scope of the vendor’s own products.  In exchange the vendor has nearly constant access to the “ears” of IT and management in order to inform them and to sway their opinion in favor of said vendor’s products.  This also gives the vendor direct access, in many cases, to the “on the ground” IT people who are using their products and providing them with critical, non-management feedback.

In many ways this relationship causes “the conversation” between the vendor and the “market”, as proposed by Levine, Locke, Searls and Weinberger in their groundbreaking 1999 tome “The Cluetrain Manifesto”, to take place in-person, in real-time in a way that is very traditional and effective.  When the company wants product information it simply contacts its vendor representative and that rep will provide samples, get documentation, give a presentation, organize training sessions, obtain roadmaps and more.  If the products do not meet the company’s needs the feedback is immediate and meaningful.  The relationship is symbiotic and everyone gains from the tight communication channel that is created between the enterprise IT department and their vendors.

The small business market sees none of this.  There are many reasons for this.  The scale on which the SMB IT department operates does not allow a vendor to dedicate a sales resource, let alone a technical resource, to a single client.  This one, simple difference breaks the communication channel leaving SMB IT departments in a far different position than their enterprise counterparts.  Any conversation held between an SMB IT manager and a vendor is an ad-hoc, temporary conversation.  Vendors do not get to know their clients.  They don’t have a deep understanding of their business.  They don’t see their clients as individuals but as a pool of consumers more akin to the standard, personal consumer market than to the enterprise where each customer is well known and appreciated individually.

The differences in interaction are not solely from the vendor’s perspective.  In the enterprise the IT department typically has resources with time to dedicate to interacting with vendor representatives.  Technical support roles such as server administrators may work directly with sales and engineering resources for support issues and purchasing recommendations, while architectural professionals may use vendor representatives to assist in capacity planning, system design or establishing performance metrics.  In the SMB these dedicated internal roles do not exist, and the available IT resources are often overworked and spread too thinly between many different tasks, leaving little or no time to focus on single issues such as these even if the vendors were to provide such resources.  Enterprise departments often manage to allow even regular, “in the trenches” technical staff to attend sales luncheons and other vendor-sponsored events only loosely tied to their job functions.  In the SMB space this is all but unheard of.

Another key difference between the SMB and enterprise markets is in the way that they purchase for IT.  Enterprises generally view their purchasing process in terms of services.  These may include warranty services, datacenter management, software customization, hardware leases and so on.  The small business market generally sees purchasing in terms of products, either hardware or software.  Small businesses think in terms of buying desktops, monitors, servers, software licenses, etc.  Small businesses purchase the same way whether buying directly from their vendor, from the channel or from the local store.  The transactions are very simple.  Enterprises think of a server in terms of its monthly support cost and total lifespan, while SMBs simply see a price tag.  This does not mean that SMBs never purchase services, only that they typically do so in a very up-front, set price sort of way, and they typically purchase far fewer services than enterprise IT departments do.

Enterprise IT environments have the distinct advantage of large scale peer interaction both internally and externally.  IT professionals working in large environments are constantly learning about new products, technologies and techniques from their counterparts within their own organization as well as from peers in competing organizations in their market verticals.   This gives enterprise staff an advantage in working with their vendors because they see how these vendors interact with their peers locally and elsewhere and get feedback on how other vendors in competing areas work with their clients.  This creates a competitive market for vendors based on their level of service to their clients.  In small and medium business there is very little insight into these relationships at other, similar companies.  SMBs naturally do not get interaction with a direct peer group.  At best they can hope for peer support groups for organizations of similar size, but even that is extremely rare.  Vendor relationships with the SMB market are very much isolated from peer review and market pressures.

SMB IT professionals seldom get a chance to attend industry events like their enterprise counterparts do, either.  They do attend some, but few by comparison.  This provides fewer opportunities for SMBs to learn about vendors with whom they do not already have a relationship.  This is very beneficial to big vendors like HP, Dell, IBM and Microsoft, who need no introduction to any IT professional, but smaller vendors, new vendors and niche vendors will often find it hard to make SMBs aware of their existence, let alone find an opportunity to discuss their products and services directly with them.  Making connections between SMBs and vendors capable of meeting their needs is a significant challenge in most cases.

SMBs also suffer from not having industry publications and other vertical resources available to them in most cases.  SMB IT managers may use general resources from the IT field such as technology publications and online magazines to investigate what others in their peer group are doing, but targeted materials designed specifically for their technology needs are rare if not non-existent.

Another difference in how SMB and enterprise IT departments behave is in the driving force behind their purchasing.  Enterprise customers typically purchase products strategically.  This purchasing may be driven by a desire for datacenter consolidation, power reduction, features, easing administrative burdens, market pricing advantages and more.  Careful cost analysis will often cause them to buy opportunistically, and a tightly coupled vendor relationship helps to enable this.  SMBs, on the contrary, are typically tactical (demand-driven).  They purchase new products when the old are no longer serviceable, no longer meeting demand, no longer supported, or when additional capacity is needed.  They will seldom buy when market pressures make purchasing most advantageous but will do so quite suddenly, with relatively little research leading up to the point of spending.

The SMB market is very likely to be keenly aware of the bottom line of any purchase.  This seems obvious but in the enterprise space there is normally much more room for a technical specialist to ask for features that carry extra cost because they simply feel confident that they will be beneficial.  Enterprises are often more likely to trust the hunches of their technical staff and to pay for “soft benefits” that are not easily quantifiable.  SMBs will almost always look at the bottom line and if a feature does not meet a clear requirement or provide a rather certain return on investment then they will typically opt for the lower priced option.

The final difference that I would like to address is in how prices are determined.  Enterprise customers typically negotiate a blanket discount rate that applies to everything that they purchase from their vendor.  Getting pricing on new products or price comparing many products is easy.  Very easy.  Pricing for the enterprise is quite transparent making it very simple to do cost analysis on one solution over another.

In the SMB market prices are generally negotiated on a purchase by purchase basis.  Because of this SMB IT departments generally have only a very general idea of the price differences between two different solutions – especially if those products come from two different vendors.  Gathering enough data to do a large cost analysis study is both time prohibitive and ineffectual as prices continuously change and vendors will change discounts regularly based on other factors and behaviors.  SMB IT managers cannot simply go to a single web site and look up many different discounted prices and do a quick comparison of many different products giving them a strategic disadvantage over their enterprise counterparts.

This leaves us with a significant challenge.  Now that we see why small and medium businesses are fundamentally and behaviorally different than large enterprise businesses we have the obvious question of “how are vendors and SMB customers going to overcome their natural barriers?”

To some degree there is no simple answer.  Both vendors and small business IT managers need to be aware of how vendors and their customers behave and think so that they can begin moving toward each other in a meaningful way, but this is only the first step.

Vendors need to have dedicated small and medium business representatives who specialize in the needs of this market.  These need to be professionals who have truly studied the market and understand how very small and moderately small businesses behave, what products are generally in use, what their architectures normally look like and more.  Vendors often think that SMB IT managers spend their day thinking about ERP, CRM, rapid disaster recovery planning and datacenter consolidation problems as do enterprise CIOs but, in fact, most are concerned with desktop management, virtualization, basic security and maybe even purchasing their very first server!  Vendors need empathy with the small business market in order to service it well.  Even vendors with amazing products that are perfect for this market often fail to inform their potential customers on when these products may make sense for them or may lack the ability to support them in the configurations that make the most sense.

Most importantly vendors need to find a way to join the conversation (as put forth in “The Cluetrain Manifesto”). In the enterprise space the conversation takes place inside the organization as well as in peer groups and conferences.  It is everywhere and finding it is simple.  Small businesses struggle with joining the conversation themselves – mostly because they cannot always find it, but it is there.

A perfect example of where this conversation is beginning to emerge is in online technology social media platforms like the SpiceWorks Community.  This online community has hundreds of thousands of small and medium business IT professionals and managers online and engaged in ongoing discussions on everything from low level technical problems and architecture concerns to product selection and vendor relationship management.  A few progressive vendors have joined the community and are interfacing with their customers and potential customers in a mode that, in many ways, mimics the behavior found in the enterprise.  Suddenly vendors and customers have an opportunity for personal interaction and open dialogue.

Through this conversation between vendors and customers there is a real opportunity for vendors to learn about the needs and desires of their customers, interact with customer peers, share resources and, most importantly, simply have an open discussion where concerns and needs can be exposed and addressed.  Customers have questions, often a lot of them.  There is not time during a sales call requesting pricing for the customer and the vendor to get to know one another and become acquainted with each other’s needs and offerings.  Through ongoing conversations, not only when a customer is considering an immediate purchase but on a regular basis, the relationship between vendor and customer can be formed allowing them to understand one another, feel comfortable reaching out with questions and suggestions and more.

Vendors have more than simply the chance to answer product questions when they are part of a larger conversation.  They can also provide input into conversations that are not necessarily directly related to their own products.  They can provide insight into larger architectural and design decisions.  In many cases they can take the time to explain how their products work or why they are valuable to their customers.  It is not uncommon, especially in the SMB space, for potential customers to have no previous knowledge of products that are available to them or if products would apply to them, work in their environment or integrate with their architecture.

Because the conversation is an opt-in experience, vendors can talk with customers or potential customers without the need for a sales or marketing interface.  The customers are ready to hear about products.  They want to know and they want to learn.  This is a marketplace where sales lead generation is already done, simply by the fact that the customers are present.  They have already given the vendor their ear.

Learning how to behave in this open conversation marketplace is difficult for many vendors – especially those that are very well established large businesses. Adapting is critical as those companies that are perceived as caring about their customers will have a significant advantage over those companies who appear to find it a burden to stoop to interacting with small clients.

Large businesses are accustomed to keeping the SMB market at arm’s length, often arguing that the “channel” (the reseller and system integration market) is their interface to small business.  The channel, however, acts as a chasm keeping small businesses from ever speaking directly to their vendors, causing both to rely on a third party, who may not share any common interest with either, to broker any semblance of a conversation.  The channel is not incentivized to act in the interest of either party and will likely only present products and services that they themselves support and those with the greatest profit margins, rather than exploring niche product options and exotic solutions that may be a better fit.  The interests of the customers are then not passed back to the vendors, leaving the vendors guessing blindly at what products and services would be useful to the SMB marketplace.  The lack of experience with SMBs often means that vendors are completely unknowledgeable about their customers or, in many cases, simply do not even have those customers.

A perfect example of this breakdown in communications is with IBM.  I watched an active online conversation involving IBM where a large group of highly experienced SMB IT professionals were discussing IBM and its place in the SMB space: what products it offered, how they would compete with other vendors and IBM’s specific relationship with small businesses.  In this conversation I heard people repeatedly speak about IBM’s only SMB focused offerings being its desktops and laptops.  I was shocked, as I suppose IBM itself would be, since IBM stopped manufacturing these products many years ago, having sold that division to Lenovo.  Even experienced IT professionals taking an interest in IBM, enough to participate in what evolved into a virtual panel discussion on its role in the market, were kept so far removed from IBM itself that they were unaware of who IBM even was and what it offered in the market.  A significant eye opener for everyone.  Likely this breakdown in market communications has been caused by IBM’s reliance on the channel to provide an interface to its customers, and by that channel finding it easier to sell Lenovo products as IBM products to customers who know the name IBM but not Lenovo than to take the time to educate those customers.

IBM is certainly not alone here, but its relatively recent divestment of its desktop and laptop business to Lenovo has created a unique and dramatic challenge in its interface to the SMB market.  IBM’s key competitors, Hewlett-Packard and Dell, use their desktop, laptop, display, networking and printer products as their key “in” with SMB customers and then, once chosen as a vendor, are able to make the relatively rare server sales to this market as well.  IBM has the challenge of selling servers and services to a market that is guaranteed to be buying its desktops and other products from a competing vendor.

Sun (now a part of Oracle) has long faced this same challenge in this market.  SMB IT managers understand desktops and laptops well – this is their bread and butter, what they deal with primarily every day.  Most SMB concerns are desktop related and the bulk of their purchasing is done there.  SMBs do not buy servers in large quantities with rare exception and using a different vendor for infrequent server purchases, which would involve separate vendor relationships and managing different support contracts, is not something that SMB IT managers are going to seek out.  Companies like IBM and Sun need to be involved directly with these customers and make them aware of their unique product offerings, such as Power and Sparc platforms in this example, to even have customers understand who they are and what they may offer.

This issue, hardly unique to IBM and Sun, is exacerbated by the use of the channel.  SMB IT shops will generally only turn to one system integrator, managed service provider or vendor to supply them with hardware.  Since PCs drive SMB IT this means that SMB shops will, by necessity, be turning to managed service providers who are partnered with someone who supplies desktops.  That then makes it rather unlikely that those service providers would additionally be partnered with someone like IBM or Sun.  This then, in turn, causes that service provider to automatically recommend products only from the vendor(s) with whom they are partnered further isolating customers from potential solutions from alternative vendors.  This isolation can be mitigated through direct vendor to customer relationships even if purchasing itself is still handled through a channel provider.  It is in both the vendor and the customer’s interests to interface directly and to engage in a conversation.

It is not uncommon to see IT managers choose a vendor based primarily upon that vendor’s willingness to engage in an open conversation.  Customers like vendors with whom they have a relationship.  They really like knowing that when something goes wrong, or when a great new but not entirely understood opportunity arises, they can turn to a vendor representative, especially in an open community like SpiceWorks, and ask for assistance or guidance.  No one expects the representative to have all, or even any, of the answers; they expect that person to have the resources necessary to reach out internally at the vendor and engage the right people.  Not only is this method friendly and cost effective but it is also very low stress.  Customers often don’t know where a problem may reside and do not have contacts internal to the vendor, unlike enterprise customers who often deal with specific issues so frequently that they know the necessary resources at the vendor.  Without a representative to turn to, they may be left without the contact information or channels needed to get the assistance that they require.  In some cases this may result in customers feeling that the product is poorly supported or just does not work; in others it could result in new opportunities being lost or the customer turning to another vendor whom they know offers a workable solution.

While the online SpiceWorks community is hardly the only venue for vendor-to-customer interactions, it is rapidly becoming a unique place, due to its scale, reach and SMB focus, where vendors and customers can make connections, join in open discussions, create relationships and get support.  The community is extremely large – over 700,000 IT professionals, all from the SMB ranks – and is rapidly expanding, both in its online presence and with local user groups and regional SMB IT conferences, all of which present opportunities for vendors to interact with the SMB marketplace in new and exciting ways.  SpiceWorks represents, I feel, a key component in the future of vendor relationships in the SMB IT market.  SpiceWorks acts as a broker to the conversation, providing the venue and framework necessary to make customer/vendor interactions as simple and valuable as possible.  As the community continues to grow and as more vendors decide to become a part of the conversation, I expect to see the value of this forum expand dramatically.  It is in communities like this that those vendors serious about the SMB IT market will succeed in differentiating themselves and engaging current and potential customers.

The Dangers of Blade Servers in SMB – Debunking the Blade Server Myth

Blade servers are the hottest trend in datacenters today.  I am sure that you have heard the hype: lower cost and better efficiency.  To be sure, blades have come a long way in the last few years and are looking better than ever, but putting blades into your own business is something that should be considered very carefully.  There are many hidden dangers inherent to the blade concept that are often overlooked, and these can come back to haunt you long after you have committed to the idea of blades.

Before we look at these dangers I want to discuss what blades are.  According to Wikipedia: “Blade servers are stripped down computer servers with a modular design optimized to minimize the use of physical space. Whereas a standard rackmount server can function with (at least) a power cord and network cable, blade servers have many components removed to save space, minimize power consumption and other considerations, while still having all the functional components to be considered a computer.”  It is important to define blade servers because it has become common, especially in the used server market, for resellers to use the term “blade” to refer to standard 1U and 2U rackmount servers in the hopes of confusing customers new to the blade market.  Blades are a specific hardware category that requires the use of an enclosure; they are not simply “small” servers.  Blade servers use shared components in the enclosure, such as power supplies and remote management consoles, reducing the components necessary in each individual blade server.

The first danger of blades is cost.  Blade enclosures are generally very expensive even though blades themselves are often less expensive than their rackmount counterparts.  In a quick price comparison of a large blade vendor’s offerings, the enclosure was approximately $5,000 and could hold a maximum of eight blade servers.  Each blade was roughly $500 less expensive than that vendor’s comparable rackmount server of the same or similar specs.  This means that a fully populated blade enclosure, at list price, from this vendor would cost $1,000 more than the equivalent computational power in traditional form factors – and every blade slot left unpopulated would add another $500 to that deficit.

The cost of blades is not just a total cost factor.  Blade enclosures, often holding eight to sixteen blade servers, need to be purchased up front.  If you need enough servers to match the capacity of an enclosure this is not a factor, but if you are looking to buy only a single server now you may be making a significant investment in proposed future server farm growth.  This means increased risk as well as an investment against the time-value of your dollar.

Hardware cost is always a difficult number to nail down.  Stated prices from the vendors rarely reflect reality and, as most companies know, dramatically lower prices are available if you demand them.  I have known companies to get their blade enclosures for free, for example, which completely changes the cost equation of blades.  But in the same breath one must remember that if a blade enclosure is available for free, serious discounts on traditional rackmount servers are likely also available.  So list prices are often a good judge of relative prices even if not of absolute ones.  Your mileage will vary – so due diligence is necessary to create a cost analysis appropriate for your given situation and the deal that you receive from your vendor.
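To make that sort of analysis concrete, here is a minimal Python sketch of the comparison using the illustrative list prices quoted above: a $5,000 eight-slot enclosure and blades running roughly $500 less than comparable rackmount servers.  The $2,500 rackmount price and all of the function and parameter names are my own placeholder assumptions, not vendor figures, so substitute the actual quotes that you receive.

# Blade vs. rackmount cost comparison sketch.
# Illustrative list prices from the example above; the $2,500 rackmount
# price and all names here are placeholder assumptions, not vendor figures.

def blade_total(servers, enclosure_cost=5000, blade_price=2000):
    """Total list price of one enclosure plus the blades installed in it."""
    return enclosure_cost + servers * blade_price

def rackmount_total(servers, rack_price=2500):
    """Total list price of the equivalent rackmount servers."""
    return servers * rack_price

for servers in (1, 4, 8):
    b, r = blade_total(servers), rackmount_total(servers)
    print(f"{servers} servers: blades ${b:,} vs rackmount ${r:,} "
          f"(blades cost ${b - r:,} more)")

# At these prices a fully populated enclosure (8 servers) runs $1,000 more
# than the rackmount equivalent, and each empty slot adds another $500.

Parameterizing the prices makes it trivial to rerun the comparison with the discounted quotes that you actually negotiate rather than with list prices.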

The second danger of blades is technological obsolescence.  Unlike traditional racks, which have gone practically unchanged for decades, blade enclosures are new and relatively dynamic.  Several generations of blade enclosures have come and gone since their inception in 2001 and each subsequent generation, thus far, has required shops to replace their enclosures to support new blade servers.  This is a high risk if you are not buying servers often enough, and in large enough quantities, to justify the technology churn in the enclosures.  This rate of change is slowing as the technologies mature, but the risk remains.  When doing a proper cost analysis of blade servers this rate of change needs to be factored in.

The third danger is vendor lock-in.  Traditional rack technologies are vendor agnostic.  Most shops will mix and match not only servers but batteries, routers, switches, monitoring equipment and other gear into their racks.  Blades are vendor specific.  For a large enterprise this is of little or no concern.  In a small shop with a limited number of servers it can be crucial not to give up the ability to use different vendors and technologies.  This is a limitation not only on technology but also on the leverage needed to obtain premium vendor pricing discounts in the future.

Take, as an example, a shop that wishes to run HP Integrity blades with their Intel Itanium processors today.  They invest in blade enclosures and begin using them.  In three years they purchase software that runs on Sun UltraSparc or IBM Power processors.  To use blades, each of these technologies will require its own brand of blade enclosure, significantly increasing the risk in a small shop that the enclosures will not be able to be fully populated.  There is much more flexibility using traditional rackmount servers, because each vendor generally supplies one set of RISC- or EPIC-based systems and one set of AMD/Intel-based commodity systems.  If you want more than that, blades become quite difficult for a small shop to manage.  I have worked first hand with shops that regularly mix platforms like this, making blades a most difficult choice today even before considering potential future platform decisions.  The use of Apple Mac OS X must also be mentioned, as Apple does not provide blade servers, so any deployment of OS X-based servers cannot be integrated into a blade enclosure.

The fourth danger is the shared backplane and other key components.  A blade enclosure, while generally built with massive redundancy and truly amazing design, still represents a single point of failure that must be considered.  If your enclosure fails you do not lose just a single server but as many as sixteen physical server platforms.  With rackmount servers you can add redundancy simply by adding an additional server – typically one matching server for each server you need.  With blades you have to have redundant enclosures for the same level of reliability.  Again, for a large enterprise this is trivial and obvious.  For a small business, the need to own dual enclosures for full redundancy will often result in simply foregoing that level of protection and increasing risk.

The fifth danger is the cost of lost flexibility.  Small IT shops may not often move their equipment around, but the option is generally there.  If a small business owns three servers and replaces one with a shiny new unit, the option is almost always there to redeploy the old server to another role elsewhere in the company – perhaps in a branch office.  With blades, an old blade can only be redeployed in a location that has a blade enclosure matching the one from which it was pulled.  This is a lost-opportunity cost late in the server lifecycle and one that is often completely ignored in cost analyses of blades.  If there is not a spot ready for an older server, it is far more likely to be discarded in the blade model than redeployed, unless the company is large enough to have many enclosures of the same generation with space available to accept it.

The sixth danger of blades is the high cost of storage.  Storage is a subject all its own these days, with SAN, NAS and DAS as possible options.  Shops of all sizes are moving to SAN and NAS quickly, and with enough network storage in place this can alleviate much of the storage risk associated with blade servers.  Many shops, however, use circular reasoning and justify the SAN because of the blades and the blades because of the SAN.  Taking a holistic view of the server and storage picture is crucial.

A typical blade server can house only one or two 2.5″ SAS or SATA drives.  This is far less potential storage space than a typical rackmount server provides.  It is common to find eight to sixteen drive bays available in popular 2U rackmount configurations – sometimes using 3.5″ drives rather than 2.5″ drives.  One popular and very cost effective 2U server can hold 28TB of low-cost storage on fourteen spindles.  You cannot put this type of storage into a blade enclosure.  Because local drive space is simply not available, blade server owners are forced to use minimal direct attached storage and to rely on SAN or NAS instead, even when DAS would otherwise provide better performance and cost for that particular application.

To bridge this gap most blade vendors provide storage blades – blade servers that act as tiny, low volume SAN devices and fit directly into the blade enclosure.  These units are generally of rather low capacity, often just six drives, and rather expensive compared to other means of providing storage.  Additionally, each one occupies a critical enclosure bay, removing one of the slots the enclosure needs in order to provide server density.  So an eight bay blade enclosure with two small storage blades would only be able to house six blade servers.

Obviously buying a blade enclosure does not mean that you have given up the ability to also use rackmount servers when appropriate.  You can continue to mix and match.  But for a small business, cost justifying the blade infrastructure often requires that purchases lean heavily towards blade servers in order to fill the enclosure(s) as densely as possible.

Much of the danger of blades lies in the potential for lost opportunities.  Small businesses especially function best, and compete most strongly against larger businesses, by being flexible and agile.  Blades are the opposite of agile.  They require large, upfront infrastructure planning that includes technological, physical and geographic lock-in.  Even if a business plans ahead and sees no obstacles to adoption, this does not mean that opportunities will not be missed in the future because of a lack of flexibility to adapt to changing business conditions.  Once a blade enclosure is in place, purchasing decisions are almost certainly made based on the investment already made rather than simply on what is best for the company.  This does not have to happen, but it almost certainly will – protecting the existing investment is the natural reaction.

All of this being said, blade servers can still make a lot of sense for certain businesses.  Blade servers generally consume less power than their non-blade counterparts due to their shared system components.  Be sure to consider the power consumption differences on the storage side, however, as blades push power consumption from the server to the SAN, which can be misleading as to where the power is actually going.  A savings in one place is only valuable if the cost does not appear again in another.

Blades are easy to transport and relocate when enclosures are available.  This can be a bigger factor than is obvious, especially when it means that several additional staff members are capable of relocating a server.  Almost anyone can lift and move a blade server.

When combined with a very aggressive SAN infrastructure, blades can be very beneficial in a virtualization environment.  This combination gives the maximum cost and flexibility advantage to businesses large enough to leverage it.  The SMB market, though, mostly consists of businesses for whom this would be cost prohibitive, so this solution will continue to be relegated to businesses at the larger end of the SMB spectrum.  Virtualization will, in fact, reduce the number of servers needed by most businesses, making it even harder to justify blades for smaller businesses: where previously a dozen or more servers would have been needed, today only two to four are needed to not only meet but surpass earlier service levels.

If you can support adequate densities, or get really aggressive vendor incentives, then blades can be quite cost effective once you weigh the cost against your risks.  Blades are always a little more risky, but if your cost is reduced significantly in buying them then they may be very much worth the loss in flexibility.  The cost of the enclosure is a key factor here.  If your enclosure is free then suddenly the cost savings of a blade system can be enormous – especially if a large number of blades is purchased, providing really good enclosure density.
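As a rough illustration of that density and discount math, the short sketch below (same hedged assumptions and placeholder names as the earlier example) computes the break-even blade count – the point at which the per-blade savings have paid for the enclosure.  At the $5,000 list price from the earlier example that point is ten blades, which an eight-slot enclosure can never reach; with a free enclosure, a fully populated chassis instead nets roughly $4,000 in savings.

import math

# Break-even sketch: how many blades must the per-blade savings cover
# before the enclosure pays for itself?  Figures are this article's
# illustrative list prices; the names and structure are my own assumptions.

def break_even_blades(enclosure_cost, savings_per_blade):
    """Smallest blade count at which blade and rackmount totals match."""
    return math.ceil(enclosure_cost / savings_per_blade)

def full_enclosure_savings(enclosure_cost, savings_per_blade, slots=8):
    """Net savings (negative means a loss) with every slot populated."""
    return slots * savings_per_blade - enclosure_cost

print(break_even_blades(5000, 500))       # 10 blades -- beyond an 8-slot enclosure
print(full_enclosure_savings(5000, 500))  # -1000: $1,000 more than rackmount
print(full_enclosure_savings(0, 500))     # 4000: a free enclosure saves $4,000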

Blade servers are a great technology and show a lot of promise for the future.  As enclosure lifecycles lengthen, new technologies emerge, costs come down, volumes increase and, hopefully, vendor-neutral standards appear, I am confident that blades will become the de facto standard in even the smallest datacenters.  I see this as taking at least another market cycle before it really occurs.  Most likely, in my opinion, it will be another five to seven years before the form factor truly displaces the rackmount server in general utility.
