When to Consider a SAN?

Everyone seems to want to jump into purchasing a SAN, sometimes quite passionately.  SANs are, admittedly, pretty cool.  They are one of the more fun and exciting, large scale hardware items that most IT professionals get a chance to have in their own shop.  Often the desire to have a SAN of one's own is a matter of “keeping up with the Joneses” as using a SAN has become a bit of a status symbol – one of those last bastions of big business IT that you only see in a dedicated server closet and never in someone’s home (well, almost never.)  SANs are pushed heavily, advertised and sold as amazing boxes with internal redundancy that makes them infallible, speed that defies logic and a load of features that you never knew you needed.  When speaking to IT pros designing new systems, one of the most common design statements that I hear is “well, we don’t know much about our final design, but we know that we need a SAN.”

In the context of this article, I use SAN in its most common sense, that is to mean a “block storage device” and not the entire storage network itself.  SAN is a soft term used to mean multiple things at different times and can become quite confusing: a storage network can exist for NAS while not using a SAN block storage device at all, a SAN configured without a network becomes DAS, and DAS that is networked becomes SAN.  So for this article, SAN refers exclusively to SAN as a device, not SAN as a network.

Let’s stop for a moment.  SAN is your back end storage.  The need for it would be, in all cases, determined by other aspects of your architecture.  If you have not yet decided upon many other pieces, you simply cannot know that a SAN is going to be needed, or even useful, in the final design.  Red flags. Red flags everywhere.  Imagine a Roman chariot race with the horses pushing the chariots (if you know what I mean.)

It is clear that the drive to implement a SAN is so strong that often entire projects are devised with little purpose except, it would seem, to justify the purchase of the SAN.  As with any project, the first question that one must ask is “What is the business need that we are attempting to fill?” and work from there – not “We want to buy a SAN; where can we use it?”  SANs are complex, and with complexity comes fragility.  Very often SANs carry high cost.  But the scariest aspect of a SAN is the widespread lack of deep industry knowledge concerning them.  SANs pose huge technical and business risks that must be overcome to justify their use.  SANs are, without a doubt, very exciting and quite useful, but that is seldom good enough to warrant the desire for one.

We refer to SANs as “the storage of last resort.”  What this means is, when picking types of storage, you hope that you can use any of the other alternatives such as local drives, DAS (Direct Attach Storage) or NAS (Network Attached Storage) rather than SAN.  Most times, other options work wonderfully.  But there are times when the business needs demand requirements that can only reasonably be met with a SAN.  When those come up, we have no choice and must use a SAN.  But generally it can be avoided in favor of simpler and normally less costly or risky options.

I find that most people looking to implement a SAN are doing so under a number of misconceptions.

The first is that SANs, by their very nature, are highly reliable.  While there are certainly many SAN vendors and specific SAN products that are amazingly reliable, the same could be said about any IT product.  High end servers in the price range of high end SANs are every bit as reliable as SANs.  Since SANs are made from the same hardware components as normal servers, there is no magic to making them more reliable.  Anything that can be used to make a SAN reliable is a trickle down of server RAS (Reliability, Availability and Serviceability) technologies.  Just like SANs, NAS and DAS devices, as well as local disks, can be made incredibly reliable.  SAN only refers to the device being used to serve block storage rather than perform some other task.  A SAN is just a very simple server.  SANs encompass the entire range of reliability, with mainframe-like reliability at the top end and devices that are nothing more than external hard drives – the most unreliable devices on your network – at the bottom end.  So rather than SAN meaning reliability, it actually offers a few special cases of the lowest reliability you can imagine.  But, for all intents and purposes, all servers share roughly equal reliability concerns.  SANs gain a reputation for reliability because businesses often put extreme budgets into their SANs that they do not put into their servers, so that the comparison is between a relatively high end SAN and a relatively budget server.

The second is that SAN means “big” and NAS means “small.”  There is no such association.  Both SANs and NASs can be of nearly any scale or quality.  They both run the gamut, and the technology chosen offers not the slightest suggestion of whether a device is large or not.  Again, as above, SAN can technically come “smaller” than a NAS solution due to its possible simplicity, but this is a specialty case and mostly theoretical; there are SAN products on the market in this category, but it is very rare to find them in use.

The third is that SAN and NAS are dramatically different inside the chassis.  This is certainly not the case as the majority of SAN and NAS devices today are what is called “unified storage” meaning a storage appliance that acts simultaneously as both SAN and NAS.  This highlights that the key difference between the two is not in backend technology or hardware or size or reliability but the defining difference is the protocols used to transfer storage.  SANs are block storage exposing raw block devices onto the network using protocols like fibre channel, iSCSI, SAS, ZSAN, ATA over Ethernet (AoE) or Fibre Channel over Ethernet (FCoE.)  NAS, on the other hand, uses a network file system and exposes files onto the network using application layer protocols like NFS, SMB, AFP, HTTP and FTP which then ride over TCP/IP.
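To make the protocol distinction concrete, here is a minimal sketch, with an ordinary local file standing in for the raw device (a real SAN would expose something like an iSCSI LUN instead), contrasting block-style and file-style access:

```python
import os
import tempfile

# A scratch directory; a plain file stands in for a raw block device.  A
# real SAN would expose something like /dev/sdb over iSCSI or Fibre Channel.
workdir = tempfile.mkdtemp()
block_device = os.path.join(workdir, "blockdev")
open(block_device, "wb").close()

BLOCK_SIZE = 512

# Block access (SAN-style): the client addresses raw byte offsets; the
# storage device knows nothing about files or filesystems.
with open(block_device, "r+b") as dev:
    dev.seek(3 * BLOCK_SIZE)            # jump to block number 3
    dev.write(b"\xff" * BLOCK_SIZE)     # overwrite one raw block

# File access (NAS-style): the client addresses a named file; the server
# (speaking NFS or SMB) owns the filesystem and resolves the path itself.
with open(os.path.join(workdir, "report.txt"), "w") as f:
    f.write("quarterly numbers\n")
```

The block-side client never names a file at all; it is entirely responsible for running its own filesystem on top of the raw blocks, which is exactly why the protocols sit at such different layers.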

The fourth is that SANs are inherently a file sharing technology.  This is NAS.  SAN is simply taking your block storage (hard disk subsystem) and making it remotely available over a network.  The nature of networks suggests that we can attach that storage to multiple devices at once and indeed, physically, we can.  Just as we used to be able to physically attach multiple controllers to opposite ends of a SCSI ribbon cable with hard drives dangling in the middle.  This will, under normal circumstances, destroy all of the data on the drives as the controllers, which know nothing about each other, overwrite data from each other causing near instant corruption.  There are mechanisms available in special clustered filesystems and their drivers to allow for this, but these require special knowledge and understanding far more technical than many people acquiring SANs are aware that they need for what they often believe is the very purpose of the SAN – a disaster so common that I probably speak to someone who has done just this almost weekly.  That the SAN puts at risk the very use case most people believe it is designed to handle – and, far from delivering the nearly magical protection sought, is itself the very cause of the data loss – exposes the level of risk that misunderstood storage technology carries with it when implemented.
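A toy simulation makes the failure mode obvious.  Here two simulated hosts each cache their own view of which blocks are free, exactly as two uncoordinated, non-clustered filesystems would; the host names and “files” are, of course, invented for illustration:

```python
# A shared block device (simulated): eight empty blocks.
disk = [None] * 8

class Host:
    """A host running an ordinary, non-clustered filesystem."""
    def __init__(self, name):
        self.name = name
        # Each host snapshots the allocation state once at mount time and
        # never learns about the other host's subsequent writes.
        self.free = [i for i, b in enumerate(disk) if b is None]

    def write_file(self, data):
        block = self.free.pop(0)        # "first free block" per stale cache
        disk[block] = (self.name, data)
        return block

a, b = Host("host-a"), Host("host-b")   # both attach the same device
a.write_file("invoices.db")             # host-a writes into block 0
b.write_file("payroll.db")              # host-b also picks block 0: a
                                        # silent overwrite of host-a's data
print(disk[0])                          # ('host-b', 'payroll.db')
```

A clustered filesystem avoids this by coordinating the allocation table between hosts before every write, which is precisely the special-purpose software the paragraph above refers to.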

The fifth is that SANs are fast.  SANs can be fast; they can also be horrifically slow.  There is no intrinsic speed boost from the use of SAN technology on its own.  It is actually fairly difficult for SANs to overcome the inherent bottlenecks introduced by the network on which they sit.  As some other storage options, such as DAS, use all of the same technologies as SAN but lack the bottleneck and latency of the actual network, an equivalent DAS will always be just a little faster than its SAN counterpart.  SANs are generally a little faster than a hardware-identical NAS equivalent, but even this is not guaranteed.  SAN and NAS behave differently and in different use cases either may be the better performing.  SAN would rarely be chosen as a solution based on performance needs.
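The network penalty is easy to see with back-of-the-envelope numbers; the latency figures below are round assumptions, not measurements:

```python
# Rough per-request service times (assumed figures for illustration only).
disk_latency_us = 100          # time for the array itself to service one I/O
network_rtt_us = 50            # added round trip across the storage network

def serial_iops(latency_us):
    """IOPS achievable with a single outstanding request (no queueing)."""
    return 1_000_000 / latency_us

das = serial_iops(disk_latency_us)                   # local path only
san = serial_iops(disk_latency_us + network_rtt_us)  # same array + network

print(f"DAS: {das:,.0f} IOPS, SAN: {san:,.0f} IOPS")
```

With identical hardware behind both, the network round trip alone cuts serial throughput by a third in this sketch, which is why DAS always edges out its SAN counterpart under like-for-like conditions.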

The sixth is that, because a device is a SAN, the inherent problems associated with storage choices no longer apply.  A good example is the use of RAID 5.  This would be considered bad practice in a server, but when working with a SAN (which, in theory, is far more critical than a standalone server) careful storage subsystem planning is often eschewed based on a belief that, being a SAN, it has somehow fixed those issues or that they do not apply.  It is true that some high end SANs do have some amount of risk mitigation features unlikely to be found elsewhere, but these are rare and exclusively relegated to very high end units where using fragile designs would already be uncommon.  It is a dangerous, but very common, practice to take great care and consideration when planning storage for a physical server but to skip that same planning and oversight when using a SAN, based on the assumption that the SAN handles all of that internally or that it is simply no longer needed.

Having shot down many misconceptions about SAN one may be wondering if SANs are ever appropriate.  They are, of course, quite important and incredibly valuable when used correctly.  The strongest points of SANs come from consolidation and special types of shared storage.

Consolidation was the historical driver bringing customers to SAN solutions.  A SAN allows us to combine many filesystems into a single disk array allowing far more efficient use of storage resources.  Because SAN is block level it is able to do this anytime that a traditional, local disk subsystem could be employed.  In many servers, and even many desktops, storage space is wasted due to the necessities of growth, planning and disk capacity granularity.  If we have twenty servers each with 300GB drive arrays but each only using 80GB of that capacity, we have large waste.  With a SAN we could consolidate to just 1.6TB, plus a small amount necessary for overhead, and spend far less on physical disks than if each server was maintaining its own storage.

Once we begin consolidating storage we begin to look for advanced consolidation opportunities.  Having consolidated many server filesystems onto a single SAN we have the chance, if our SAN implementation supports it, to deduplicate and compress that data which, in many cases such as server filesystems, can potentially result in significant utilization reduction.  So our 1.6TB in the example above might actually end up being only 800GB or less.  Suddenly our consolidation numbers are getting better and better.
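Under the figures above (plus an assumed 10% growth allowance and an assumed 2:1 deduplication/compression ratio, both illustrative), the arithmetic works out as follows:

```python
servers = 20
provisioned_gb = 300       # per-server local array
used_gb = 80               # capacity actually consumed per server

local_total = servers * provisioned_gb    # disks purchased without a SAN
consolidated = servers * used_gb          # what the SAN actually needs
overhead = consolidated * 0.10            # assumed 10% growth headroom
deduped = consolidated * 0.5              # assumed 2:1 dedupe/compression

print(f"local: {local_total}GB, SAN: {consolidated + overhead:.0f}GB, "
      f"after dedupe: {deduped:.0f}GB")
```

Six terabytes of purchased local disk collapses to well under two on the SAN, and to 800GB once deduplication is applied, which is the whole consolidation argument in miniature.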

To efficiently leverage consolidation it is necessary to have scale and this is where SANs really shine – when scale, both in capacity and, more importantly, in the number of attaching nodes, becomes very large.  SANs are best suited to large scale storage consolidation.  This is their sweet spot and what makes them nearly ubiquitous in large enterprises and very rare in small ones.

SANs are also very important for certain types of clustering and shared storage that require single shared filesystem access.  This is actually a pretty rare need outside of one special circumstance – databases.  Most applications are happy to utilize any type of storage provided to them, but databases often require low level block access to be able to properly manipulate their data most effectively.  Because of this they can rarely be used, or used effectively, on NAS or file servers.  Providing high availability storage environments for database clusters has long been a key use case of SAN storage.

Outside of these two primary use cases, which justify the vast majority of SAN installations, SAN also provides for high levels of storage flexibility in making it potentially very simple to move, grow and modify storage in a large environment without needing to deal with physical moves or complicated procurement and provisioning.  Again, like consolidation, this is an artifact of large scale.

In very large environments, SAN can also be used to provide a point of demarcation between storage and system engineering teams, allowing for a handoff at the network layer, generally of fibre channel or iSCSI.  This clear separation of duties can be critical in allowing for teams to be highly segregated in companies that want highly discrete storage, network and systems teams.  This allows the storage team to do nothing but focus on storage and the systems team to do nothing but focus on the systems without any need for knowledge of the other team’s implementations.

For a long time SANs also presented themselves as a convenient means to improve storage performance.  This is not an intrinsic component of SAN but an outgrowth of their common use for consolidation.  Similarly to virtualization when used as consolidation, shared SANs will have a natural advantage of better utilization of available spindles, centralized caches and bigger hardware than the equivalent storage spread out among many individual servers.  Like shared CPU resources, when the SAN is not receiving requests from multiple clients it has the ability to dedicate all of its capacity to servicing the requests of a single client, providing an average performance experience potentially far higher than what an individual server would be able to affordably achieve on its own.

Using SAN for performance is rapidly fading from favor, however, because SSD storage has become very common.  As SSDs with incredibly low latency and high IOPS drop in price to the point where they are being added to standalone servers as local cache, or even used as mainline storage, the bottleneck of the SAN's networking becomes a larger and larger factor, making it increasingly difficult for the consolidation benefits of a SAN to offset the performance benefits of local SSDs.  SSDs are potentially very disruptive for the shared storage market as they bring the performance advantage back towards local storage – just the latest in the ebb and flow of storage architecture design.
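Rough throughput numbers illustrate the shift; the figures below are round, assumed ceilings, not benchmarks:

```python
# Approximate throughput ceilings (assumed round figures for illustration).
gbe10_bytes_per_s = 10e9 / 8       # 10GbE link: ~1.25 GB/s best case
sata_ssd = 550e6                   # one SATA SSD: ~550 MB/s
nvme_ssd = 3e9                     # one NVMe SSD: ~3 GB/s

# A single local NVMe drive can already outrun the entire 10GbE link
# that a SAN client would have to share with every other attached host.
print(nvme_ssd > gbe10_bytes_per_s)       # True
print(gbe10_bytes_per_s / sata_ssd)       # ~2.3 SATA SSDs saturate the link
```

When one local drive exceeds the capacity of the whole storage network path, no amount of consolidated spindles or centralized cache on the far side of that network can close the gap.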

The most important aspect of SAN usage to remember is that SAN should not be a default starting point in storage planning.  It is one of many technology choices and one that often does not fit the bill as intended or does so but at an unnecessarily high price point either in monetary or complexity terms.  Start by defining business goals and needs.  Select SAN when it solves those needs most effectively, but keep an open mind and consider the overall storage needs of the environment.

Dreaded Array Confusion

Dreaded Array Confusion, or DAC, is a term given to a group of RAID array failure types which are effectively impossible to diagnose but which share one commonality: the array fails completely, resulting in total data loss, without any accompanying drive failure.  It is hypothesized that three key causes result in the majority of DAC:

Software or Firmware Bugs: While dramatic bugs in RAID behavior are rare today, they are always possible, especially with more complicated array types such as parity RAID where reconstructive calculations must be performed on the array.  A bug in RAID software or firmware (depending on whether we are talking about software or hardware RAID) could manifest itself in any number of ways, including the accidental destruction of the array.  Firmware issues could occur in the drives themselves as well.

Hardware Failure:  Failure in hardware such as processors, memory or controllers can have dramatic effects on a RAID array.  Memory errors especially could easily result in total array loss.  This is thought to be the least common cause of DAC.

Drive Shake: In this scenario individual drives shake loose and disconnect from the backplane and later shake back into place triggering a resilvering event.  If this were to happen with multiple drives during a resilver cycle or if a URE were encountered during a resilver we would see total array loss on parity arrays potentially even without any hardware failure occurring.

Because of the nature of DAC, and because it is not an issue with RAID itself but with the support components around it, we are left in a very difficult position when attempting to identify or quantify the risk.  No one knows how likely DAC is to happen, and while we know that DAC is a more significant threat on parity RAID systems, we do not know by how much.  Anecdotal evidence suggests the risk on mirrored RAID is immeasurably low while on parity RAID it may rise above background noise in risk analysis.  Of the failure modes, software bugs and drive shake both present much higher risk to systems running on parity RAID because URE risk only impacts parity arrays and the software necessary for parity is far more complex than that needed for mirroring.  Parity RAID simply is more fragile and carries more types of risk, exposing it to DAC in more ways than mirrored RAID.
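The URE component of this risk can at least be estimated.  As a sketch, assuming a consumer-class URE rate of one per 10^14 bits read and a 12TB single-parity rebuild that must read the entire array:

```python
# Chance of hitting at least one URE while reading an entire array
# during a parity rebuild.  Both figures are assumptions for illustration.
ure_rate_bits = 1e14         # consumer drives: one URE per 1e14 bits read
array_tb = 12                # data that must be read to complete the rebuild

bits_read = array_tb * 1e12 * 8
p_clean = (1 - 1 / ure_rate_bits) ** bits_read   # probability of no URE
p_failure = 1 - p_clean

print(f"P(URE during rebuild) ~ {p_failure:.0%}")
```

Under these assumptions the rebuild has a better-than-even chance of hitting a URE, which on a single-parity array during resilver means array loss with no drive ever reporting a failure – exactly the symptom profile of DAC.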

Because DAC covers a number of possibilities, and because it is effectively impossible to identify after it has occurred, there is little means of collecting any data on it.  Since identifying DAC as a risk, many people have come forth, predominantly in the Spiceworks community, to provide anecdotal eye witness accounts of DAC array failures.  The nature of end user IT is that statistics, especially on nebulous concepts like DAC which are not widely known, are not gathered and cannot be.  DAC arises in shops all over the world where a system administrator returns to the office to find a server with all data gone and no hardware having failed.  The data is already lost.  Diagnostics will not likely be run and logs will not exist; even if the issue could be identified, to whom would it be reported?  And even if reported, how would we quantify how often it happens versus how often it does not, or how often it occurs but goes unreported?  Sadly, all I know is that, having identified and somewhat publicized the risk and its symptoms, I suddenly found many people coming forth acknowledging that they had seen DAC first hand as well and had had no idea what had happened.

If my anecdotal studies are any indicator, it would seem that DAC actually poses a sizable risk to parity arrays, with failures appearing in an appreciable percentage of arrays – though the accuracy and size of the cross section of that data collection was tiny.  It was originally thought that DAC was so rare that you would theoretically be unable to find anyone who had ever observed it, but this does not appear to be the case.  I am already aware of many people who have experienced it.

We are forced, by the nature of the industry, to accept DAC as a potential risk, to list it as an unknown “minor” risk in risk evaluations and to be prepared for it, though we cannot calculate against it.  But knowing that it is a risk and understanding why it can happen are important in evaluating risk and risk mitigation.

[Anecdotal evidence suggests that DAC is almost always exclusive to hardware RAID implementations of single parity RAID arrays on SCSI controllers.]

The Inverted Pyramid of Doom

The 3-2-1 model of system architecture is extremely common today and almost always exactly the opposite of what a business needs or even wants if they were to take the time to write down their business goals rather than approaching an architecture from a technology first perspective.  Designing a solution requires starting with business requirements, otherwise we not only risk the architecture being inappropriately designed for the business but rather expect it.

The name refers to three (this is a soft point, it is often two or more) redundant virtualization host servers connected to two (or potentially more) redundant switches connected to a single storage device, normally a SAN (but DAS or NAS are valid here as well.) It’s an inverted pyramid because the part that matters, the virtualization hosts, depends completely on the network which, in turn, depends completely on the single SAN or alternative storage device. So everything rests on a single point of failure device and all of the protection and redundancy is built more and more on top of that fragile foundation. Unlike a proper pyramid with a wide, stable base and a point on top, this is built with all of the weakness at the bottom. (Often the ‘unicorn farts’ marketing model of “SANs are magic and can’t fail because of dual controllers” comes out here as people try to explain how this isn’t a single point of failure, but it is a single point of failure in every sense.)

So the solution, often called a 3-2-1 design, can also be called the “Inverted Pyramid of Doom” because it is an upside down pyramid that is too fragile to run and extremely expensive for what is delivered. So unlike many other fragile models, it is very costly, not very flexible and not as reliable as simply not doing anything beyond having a single quality server.

There are times that a 3-2-1 makes sense, but mostly these are extreme edge cases where a fragile environment is acceptable and high levels of shared storage with massive processing capabilities are needed – not things you would see in the SMB world and very rarely elsewhere.

The inverted pyramid looks great to people who are not aware of the entire architecture, such as managers and business people.  There are a lot of boxes and a lot of wires, and there are typically software components labeled “HA” which, to the outside observer, make it sound like the entire solution must be highly reliable.  Inverted Pyramids are popular because they offer “HA” from a marketing perspective, making everything sound wonderful, and they keep the overall cost within reason so it seems almost like a miracle – High Availability promises without the traditional costs.  The additional “redundancy” of some of the components is great for marketing.  As reliability is difficult to measure, business people and technical people alike often resort to speaking of redundancy instead of reliability because redundancy is easy to see.  The inverted pyramid speaks well to these people as it provides redundancy without reliability.  The redundancy is not where it matters most.  It is absolutely critical to remember that redundancy is not a check box nor is redundancy a goal; it is a tool used to obtain reliability improvements.  Improper redundancy has no value.  What good is a car with a redundant steering wheel in the trunk?  What good is a redundant aircraft if you die when the first one crashes?  What good is a redundant server if your business is down and data lost when the single SAN went up in smoke?
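Basic availability arithmetic shows why redundancy in the wrong place does not help.  With assumed, purely illustrative per-device availabilities, the whole 3-2-1 stack can never be more available than the single SAN it rests on:

```python
def parallel(a, n):
    """Availability of n redundant copies: fails only if all n fail."""
    return 1 - (1 - a) ** n

# Assumed per-device availabilities, for illustration only.
host, switch, san = 0.99, 0.99, 0.999

# 3-2-1: redundant hosts and redundant switches in series with the one
# SAN at the bottom of the pyramid.  Series elements multiply.
pyramid = parallel(host, 3) * parallel(switch, 2) * san

single_server = 0.999          # one quality standalone server, no stack

print(f"3-2-1 stack: {pyramid:.4%}, single server: {single_server:.4%}")
```

However many hosts and switches are piled on top, the product is always strictly below the SAN's own availability, so the entire redundant superstructure delivers less reliability than one good server standing alone.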

The inverted pyramid is one of the most obvious and ubiquitous examples of “The Emperor’s New Clothes” used in technology sales.  It meets the needs of resellers and vendors by promoting high margin sales and minimizing low margin ones, and because nearly every vendor promotes it for its financial advantages to the seller, it has become widely accepted as a great solution.  It is just complicated and technical enough that widespread repudiation does not occur, and under the incredible market pressure from the vast array of vendors benefiting from the architecture it has become the status quo; few people stop and question whether the architecture has any merit at all.  That, combined with the fact that all systems today are highly reliable compared to systems of just a decade ago – making failures uncommon enough to mask that they are more common than they should be, while statistical failure rates are not shared between SMBs – means that the architecture thrives and has become the de facto solution set for most SMBs.

The bottom line is that the Inverted Pyramid approach makes no sense – it is far more unreliable than simpler solutions, even just a single server standing on its own, while costing many times more.  If cost is a key driver, it should be ruled out completely.  If reliability is a key driver, it should be ruled out completely.  Only if cost and reliability take very far back seats to flexibility should it even be put on the table and even then it is rare that a lower cost, more reliable solution doesn’t match it in overall flexibility within the anticipated scope of flexibility.  It is best avoided altogether.

Originally published on Spiceworks in abridged form: http://community.spiceworks.com/topic/312493-the-inverted-pyramid-of-doom

When to Consider a Private Cloud?

The idea of running a private cloud, hosted or on premise, for a single company is rapidly becoming a commonplace one.  More and more businesses are learning of cloud computing and seeing that running their own cloud platform is both feasible and potentially valuable to the business.  But due to a general lack of cloud knowledge it is becoming more and more common that clouds are recommended when they do not suit the needs of the business at all, instead being mistaken for traditional virtualization management systems.

A cloud is a special type of virtualization platform and fills a unique niche.  Cloud computing takes traditional virtualization and layers it with automated scaling and provisioning that allows for rapid, horizontal scaling of applications.  This is not a normal business need.  Cloud also lends itself to, and is often tied to, self-service of resource provisioning, but this alone does not make something a cloud nor justify the move to a cloud platform, though it could be an added incentive.  What makes cloud interesting is the ability to provide self-service portals to end users and the ability for applications to provision new instances of themselves.  These are the critical aspects that set a cloud platform apart from traditional virtualization.

What a cloud does not imply are features such as simplified whole-domain system management from a single pane of glass, large scale consolidation, easy migration between hardware systems, rapid provisioning of new systems, virtualization, high availability, resource over-commitment, etc.  These features are all available in other ways, primarily through or on top of standard platform virtualization (VMware vSphere, Microsoft’s Hyper-V, Xen, et al.)  It is not that these features cannot be made available in a private cloud, but the features are not aspects of the cloud but rather of the underlying virtualization platform.  The cloud layer sits above these and simply passes through the benefits of the underlying layers.

Often cloud is approached because of a misunderstanding that many of the features commonly associated with private clouds are not available in some other, simpler form.  This is rarely the case.  Normal virtualization platforms, most commonly VMware’s vSphere and Microsoft’s Hyper-V, offer all of these options.  They can be used to make robust clusters of physical servers, managed from a single interface, with incredibly high reliability and rapid provisioning of new systems that require minimal specialty knowledge from the IT department and maintain traditional business workflows.  Most times, when I am speaking with businesses that believe that they may be interested in pursuing the ownership of their own cloud, the features that they really want are not cloud features at all.

The term “cloud” has simply become so popular recently that people begin to assume that features important to nearly everyone must be attributed to it to explain the sudden surge in importance, but this is simply not the case.  Cloud remains, and will remain, a predominantly niche solution appropriate for only a very small number of companies to own themselves.  The use of public clouds, or of hosted services delivered from cloud platforms, has already become nearly ubiquitous.  But ownership of a private cloud for the use of a single company is a long way from being a business need for most businesses or business units and in many cases, I suspect, never will become so.

Private clouds shine in two key areas.  The first is a business that needs a large number of temporary or ad hoc systems “spun up” on a regular basis.  This often occurs with large development teams and application testing groups, especially if these groups target multiple operating systems.  The ability to rapidly provision temporary testing systems or lab systems can be very advantageous, and the nature of cloud computing – easily exposing provisioning tools that allow business customers to create, manage and destroy their own system instances with, we would expect, built-in charge back mechanisms – can be very beneficial to corporate efficiency as the interaction between the IT department and the end users becomes nearly frictionless for this transaction.  Responsibility for maintaining the cloud as a whole can easily be segregated from the responsibilities of maintaining individual systems.  Seldom used in this manner for production workloads, this allows the self-service approach that many business units desperately seek today.  It is impractical on a small scale due to the overhead of creating and maintaining the cloud platform itself, but on a large scale it can be hugely productive.  In addition to technical advantages, this aspect of cloud computing can serve as a model for thinking of IT as an internal service provider and departments as customers.  We have long discussed IT and other business units in these terms but we rarely truly think of them in this way.

The second area where cloud computing really comes into its own, and the one for which the concept was originally developed, is handling auto provisioning for horizontally scaling applications.  That is, application workloads that are able to increase their capacity by spawning new instances of themselves.  On a small scale, many web applications, due to their stateless nature, do this within a single system by spawning new worker threads to handle additional connections.  An Apache web server might start with eight listeners ready to service requests, but as those threads become exhausted it automatically starts new threads to handle additional incoming connections so that it is able to scale within the confines of a single server.  To expand on this concept, applied to cloud computing, that same application, sensing thread exhaustion approaching on a system-wide level (or based on other metrics such as a lack of free memory or a loss of performance), would use an API exposed by the cloud computing platform to signal the cloud management system to provision a new copy of the system that was calling it – essentially cloning itself on the fly.  In a matter of seconds, a new virtual server, identical to the first, would be up and running and joining its parent in servicing incoming requests.  This child or clone system would likewise spawn new threads internally, as needed, and then if it too sensed exhaustion would call the cloud platform to create yet another new system to handle even more threads.  In this way the application can grow itself almost infinitely (within the hardware limits of the entire cloud platform) as needed, on the fly, automatically.
Then, as workloads die down and individual systems become idle, one at a time a system can signal to the cloud management system that it is no longer needed, and it will be powered off and destroyed – as it was simply a stateless clone – freeing capacity for other applications and workloads that may need to take advantage of the spare capacity.
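The scaling loop described above can be sketched as follows; `CloudAPI` here is a hypothetical stand-in for a real cloud platform's provisioning API, not any particular product:

```python
class CloudAPI:
    """Hypothetical cloud provisioning API (illustrative only)."""
    def __init__(self):
        self.instances = ["web-1"]          # the original parent instance

    def clone(self, parent):
        child = f"web-{len(self.instances) + 1}"
        self.instances.append(child)        # identical stateless sibling
        return child

    def destroy(self, name):
        self.instances.remove(name)         # stateless clone, safe to kill

def check_capacity(cloud, busy_threads, max_threads):
    """Clone when nearing thread exhaustion; retire a clone when idle."""
    utilization = busy_threads / max_threads
    if utilization > 0.8:                   # approaching exhaustion
        return cloud.clone("web-1")
    if utilization < 0.2 and len(cloud.instances) > 1:
        cloud.destroy(cloud.instances[-1])  # scale back in, one at a time
    return None

cloud = CloudAPI()
check_capacity(cloud, busy_threads=7, max_threads=8)   # scale out
print(cloud.instances)                                 # ['web-1', 'web-2']
check_capacity(cloud, busy_threads=1, max_threads=8)   # scale back in
print(cloud.instances)                                 # ['web-1']
```

Real platforms expose the same shape of operations (create an instance from an image, destroy an instance) plus the metrics feed that drives the threshold check; the elasticity comes entirely from the application being stateless enough that any clone can be destroyed at will.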

As we can see, cloud computing is massively powerful, especially with the bulk of today’s public and enterprise applications being written in a stateless manner in order to take advantage of web protocols and end user interfaces. Web applications are especially adept at leveraging cloud computing’s scalability model and most large scale web applications leverage this elastic expanding and contracting of capacity today.  Many new NoSQL models are beginning to emerge that signal that databases, in addition to application front end processing nodes, may soon benefit from similar models on a large scale.  This can certainly be leveraged for internal applications as well as publicly facing ones, however internal applications rarely need to scale beyond a single system and so it is quite rare to find private clouds being leveraged in quite this way.

The dangers around cloud computing come in the form of additional complexity above and beyond normal virtualization.  There is the potential for complex storage needed to support the platform and more layers to learn and maintain.  Cloud computing’s ability to rapidly create and destroy systems can make it tempting for users to treat cloud resources as if they were persistent systems – which they can be made to be – and this can result in data loss when users receive behavior very different from what is traditional and expected.  Possibly the biggest cloud concern is a human one: the increased likelihood of experiencing uncontrolled system sprawl as end users wildly spin up more and more new systems which, as they are created by end users and not IT, are probably not tightly controlled and monitored, leaving systems in a rogue, and oft forgotten, state.  This can lead to a maintenance and security nightmare as systems go unpatched and uncared for, increasing risk and draining resources.  Most worrisome is the possibility that systems will be created, forgotten and potentially left without proper licensing.  Tracking and reporting on auto provisioned systems carries process risk caused by the huge shift in how systems are created.  IT departments are accustomed to the heavy licensing processes necessary to maintain compliance, but with cloud computing there is the potential for this process to be exposed to the business units in a way that they are not at all equipped to handle.  There are accommodations for the licensing needs of cloud computing, but this is extra complexity and management that must be addressed.  Allowing systems to exist without direct IT department oversight clearly carries risk of a potentially unforeseen nature.
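Mitigating sprawl is largely a process problem, but even a simple audit can help.  This sketch runs against a hypothetical inventory dump (the instance names, tags and dates are invented) and flags instances with no owner tag or no recent patching:

```python
from datetime import date, timedelta

# Hypothetical inventory dump from a private cloud's API: end-user-created
# instances with an owner tag and a last-patched date.
today = date(2024, 1, 1)
instances = {
    "dev-build-17": {"owner": "qa-team", "last_patched": date(2023, 12, 20)},
    "test-tmp-42":  {"owner": None,      "last_patched": date(2023, 6, 2)},
}

# Flag rogue instances: no owner tag, or unpatched for more than 90 days.
rogue = [name for name, meta in instances.items()
         if meta["owner"] is None
         or today - meta["last_patched"] > timedelta(days=90)]

print(rogue)    # ['test-tmp-42']
```

Scheduling a report like this, and requiring an owner tag at provisioning time, converts the "created and forgotten" failure mode into a routine housekeeping task rather than an unbounded risk.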

Private cloud ownership brings many exciting possibilities, but it is clear that these benefits and opportunities are not for everyone.  They cater to larger businesses, to those with good process control, to companies running especially adapted applications that are capable of taking advantage of the system-level elasticity of the resources and those needing large scale ad hoc system creation and destruction provided, as a service, for end users to self-provision.  Most large enterprises will find limited use for cloud computing in house.  Smaller organizations will rarely find cloud computing to be advantageous in the near future, if ever.