Category Archives: Virtualization

Understanding Virtual Desktop Infrastructure

VDI (or Virtual Desktop Infrastructure) has been all the rage in IT circles for the past couple of years. Once the virtualization of servers became the norm, looking to desktops as the next frontier in virtualization was natural. Unlike servers, however, desktops are affected by several major factors that need to be addressed and considered before we simply jump on the VDI bandwagon. VDI is a great and wonderful technology, but like any technology it has its place and needs to be considered carefully.

First we have to look at an important concept that affects VDI – the idea of shared computing. With servers we naturally assume that the servers and the services that they provide are not for one to one consumption but will be utilized by multiple users. This concept exists and has always existed in the world of desktops and is often referred to as terminal services. Terminal Servers are the server world’s answer to the need for centralized desktop resources and have been standard since literally before desktops even existed.

It is an interesting bit of computing history that brings us to how the Windows world interacts with the concept of terminal servers. Most operating systems, and nearly all that remain in use today, were designed from the ground up as multi-user systems. The idea that one user would sit in front of the computer as the “main” user, with others being second class citizens or not existing at all, simply did not exist; all users were considered equal. Windows, unlike the UNIX family for example, came from a world of single user systems, originating with DOS and DOS/Windows, which were not multi-user. When Windows NT was developed as a multiuser system, a great many software developers treated it as they always had, making software that would not function well, or often even at all, in a multiuser mode.

This software ecosystem is unique to Windows (it could effectively never exist on Linux, for example, because such software would simply be seen as broken given how that ecosystem and its deployments work) and it has created an interesting problem: some software and some tasks can easily be addressed by terminal servers, exactly as they would be on any UNIX OS, while many other applications cannot be addressed with a terminal server at all and require a dedicated operating system instance for every user.

It is this historical factor, and the significant difference in software ecosystems that it produced, that created the fundamental need for VDI and explains why VDI arose as a unique phenomenon within the Windows world and remains, for all intents and purposes, exclusive to it. So it is very important to understand that VDI arose conceptually as a means of addressing a need that existed only because of a deficiency in third party applications, not because of any intrinsic nature of Windows itself in its current form, and not because VDI was a better approach to virtualizing or hosting end user desktop services. In fact, we could even look at VDI as an unfortunate kludge, needed only in situations where we want to virtualize or centralize desktop resources and some of the software that must run on those systems cannot run in a multiuser mode. VDI is a fallback mechanism for special situations, not a desired approach to virtualized or centralized end user services.

It is important to note that, due to the widespread use of and necessity for VDI, the investment into support technologies around it has reached the point where in many cases VDI might actually outperform terminal servers, even though architecturally this seems almost impossible. Basically this happens because the incredible amount of research and development going into the hypervisor layer may be outpacing the equivalent components in the operating system itself, allowing for potentially better CPU and memory management and resource sharing. This is completely dependent on the specific situation, of course; every OS, every hypervisor, every set of VDI tools and every workload being tested is unique, so mileage will vary significantly.

Also of serious consideration, because of the aforementioned Windows-centric nature of the VDI concept, is licensing. If we were to look at VDI from the Linux perspective we would have little to no licensing concerns and VDI would be up against traditional terminal services based on technical merits alone, but this is effectively never the case. The biggest single factor in VDI decision making is Microsoft licensing.

VDI licensing is both expensive and complex. Companies considering the virtualization of Windows desktop resources have to carefully weigh the benefits against both the large cost of appropriate licensing and the potentially large overhead of license management. Moving into VDI will likely mean lots of expensive IT time dedicated to license research, monitoring and training, which is an often overlooked aspect of licensing costs.

VDI is a somewhat difficult concept to speak about in generalities because it is a slightly amorphous topic. If we virtualize a desktop, does it not become a server? If we use an operating system intended for server use, does that change what is and is not VDI? Is VDI based around use cases, licensing or product categories?

The real answer lies in the fact that to the industry VDI is technically one thing, but in practical terms to Microsoft, the only key licensing player in the space, it means something somewhat different. Technically, VDI is the virtualization of one to one “graphical end user” instances – meaning a single virtual machine being used by a single user, much as a traditional, physical desktop or laptop would be used. To Microsoft, whose concerns are slightly different from those of the industry, the term refers to the virtualization of Windows “desktop class” operating systems. If you virtualize Windows “server class” operating systems, Microsoft does not view you as doing VDI. We have to understand both views of the concept to keep from becoming confused. In fact, using Windows Server OSes to get around the VDI licensing needs of Windows desktops has become very standard and common. However, we have to remember the kludge nature of VDI: while this approach does work around the failure to write software that is multiuser in nature, it does not address the very real possibility that software was written with the expectation of desktop-branded operating systems, and we are somewhat likely to find end user software that is either locked (intentionally or unintentionally) to desktop operating systems only or is licensed only on those platforms.

The last major consideration around VDI decision making is that, unlike servers, which when virtualized are completely virtualized, a desktop cannot be treated in the same way because there is always a physical component to it. The end user will always need a monitor to look at, a keyboard to type on, speakers to listen to, and so on. So when we are looking to move to VDI we must take care not to overlook the fact that we are not eliminating the need to purchase and maintain desktops; we are simply moving where the operating system will reside. We may redeploy older hardware to be used for remote access, move to thin clients or the newly termed and mostly meaningless zero clients, or have existing “fat clients” pull double duty, acting as remote access clients while also providing their own desktop services.

Certainly virtualizing the desktop offers us many great opportunities and much value if we are doing it for the right reasons and understand the hows, whys and whens of VDI. Sadly, like so many technology trends, moving to VDI has become a knee-jerk reaction, undertaken without performing proper evaluations or developing a clear picture of how VDI will fit into our own environments. If we lack a clear reason for choosing VDI, it is very unlikely that we will deploy it in a beneficial way.

Finally it is very important that we consider the necessary skill sets that will be required in order to properly move to VDI. From a purely technical standpoint, throwing a Windows 10 VM onto Hyper-V constitutes VDI, but from a practical perspective this is not how effective VDI will be designed. VDI not only requires the special licensing knowledge that I mentioned above but will typically involve rather unique knowledge of modern and very specialized VDI toolsets and products, shared storage as it applies to VDI, remote access protocols, thin clients or zero clients, and more. VDI deployments tend to be among the most technical and unique components of an infrastructure, leading to a great number of unknowns and challenges for any organization.

Virtualizing Even a Single Server

I find it very common in conversations involving virtualization to have the concept of consolidation, which in the context of server virtualization refers to putting multiple formerly physical workloads onto a single physical box with separation handled by the virtual machine barriers, treated as being the core tenet and fundamental feature of virtualization.  Without a doubt, workload consolidation represents an amazing opportunity with virtualization, but it is extremely important that the value of virtualization and the value of consolidation not be confused.  Too often I have found that consolidation is viewed as the key value in virtualization and the primary justification for it, but this is not the case.  Consolidation is a bonus feature, but should never be needed when justifying virtualization.  Virtualization should be a nearly foregone conclusion while consolidation must be evaluated and many times would not be used.  That workloads should not be consolidated should never lead to the belief that those workloads should not be virtual.  I would like to explore the virtualization decision space to see how we should be looking at this decision.

Virtualization should be thought of as hardware abstraction as that is truly what it is, in a practical sense.  Virtualization encapsulates the hardware and presents a predictable, pristine hardware set to guest operating systems.  This may sound like it adds complication but, in reality, it actually simplifies a lot of things both for the makers of operating systems and drivers as well as for IT practitioners designing systems.  It is because computers, computer peripherals and operating systems are such complex beasts that this additional layer actually ends up removing complexity from the system by creating standard interfaces.  From standardization comes simplicity.

This exact same concept of presenting a standard, virtual machine to a software layer exists in other areas of computing as well, such as with how many programming languages are implemented.  This is a very mature and reliable computing model.

Hardware abstraction and the stability that it brings alone are reason enough to standardize on virtualization across the board but the practical nature of hardware abstraction as implemented by all enterprise virtualization products available to us today brings us even more important features.  To be sure, most benefits of virtualization can be found in some other way but rarely as completely, reliably, simply or freely as from virtualization.

The biggest set of additional features typically come from the abstraction of storage and memory allowing for the ability to snapshot storage or even the entire running state of a virtual machine, that is to take an image of the running system and store it in a file.  This ability leads to many very important capabilities such as the ability to take a system snapshot before installing new software, changing configurations or patching; allowing for extremely rapid rollbacks should anything go wrong.  This seemingly minor feature can lead to big peace of mind and overall system reliability.  It also makes testing of features and rolling back or repeated testing very easy in non-production environments.
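To make this concrete, here is a minimal sketch of a pre-patch snapshot on a KVM host using the libvirt Python bindings; the domain name and snapshot metadata are purely illustrative, and other hypervisors expose the same capability through their own tools.

```python
# Minimal sketch: snapshot a KVM guest before patching, using the libvirt
# Python bindings (pip install libvirt-python). The domain name and the
# snapshot metadata below are illustrative only.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-patch-safety</name>
  <description>Safety snapshot taken before applying updates</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor
dom = conn.lookupByName("app-server-01")   # the guest we are about to patch

# With no special flags this creates a full checkpoint; for a running,
# qcow2-backed guest that includes disk and memory state, so the machine
# can be rolled back as a single unit.
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
print("Snapshot created:", snap.getName())

# If the patch or configuration change goes badly, rollback is one call:
# dom.revertToSnapshot(snap)

conn.close()
```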

The ability to snapshot from the abstraction layer also leads to the ability to take “image-based backups”, that is backups taken via the snapshot mechanism at a block device layer rather than from within the operating system’s file system layer.  This allows for operating system agnostic backup mechanisms and backups that include the entire system storage pool all at once.  Image backups allow for what were traditionally known as “bare metal restores” – the entire system can be restored to a fully running state without additional interaction – easily and very quickly.  Not all hypervisor makers include this capability, or include it to equal levels, so while it is conceptually a major feature, the extent to which it exists or is licensed must be considered on a case by case basis (notably Hyper-V includes this fully, XenServer includes it partially and VMware vSphere only includes it with non-free license levels.)  When available, image-based backups allow for extremely rapid recovery at speeds unthinkable with other backup methodologies.  Restoring systems in minutes is possible, from disaster to recovery!
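As a rough illustration of the image backup concept itself (not of any particular vendor’s tooling), the sketch below copies a guest’s disk to a standalone backup file with qemu-img; it assumes a KVM host with qcow2 storage and a guest that is shut down or otherwise quiesced, and all paths shown are hypothetical.

```python
# Rough sketch of a block-level ("image-based") backup of a KVM guest.
# Assumes the guest is shut down, or its disk has been made consistent
# (for example via a disk-only snapshot), before the copy is taken.
# All paths and names are hypothetical.
import datetime
import subprocess

disk_image = "/var/lib/libvirt/images/app-server-01.qcow2"
stamp = datetime.date.today().isoformat()
backup_file = f"/backups/app-server-01-{stamp}.qcow2"

# qemu-img convert produces a standalone, compacted copy of the entire
# block device - effectively a "bare metal restore" image of the VM.
subprocess.run(
    ["qemu-img", "convert", "-O", "qcow2", disk_image, backup_file],
    check=True,
)
print("Image backup written to", backup_file)
```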

The ability to treat virtual machines as files (at least when not actively running) provides additional benefits that are related to the backup benefits listed above, namely the ability to rapidly and easily migrate between physical hosts and even to move between disparate hardware.  Traditionally hardware upgrades or replacements meant a complicated migration process fraught with peril.  With modern virtualization, moving from existing hardware to new hardware can be a reliable, non-destructive process with safe fallback options and little or possibly even zero downtime!  Tasks that are uncommon but were very risky previously can often become trivial today.
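As a hedged sketch of how simple such a move can be, the following uses libvirt live migration to shift a running KVM guest from an old host to a new one; the hostnames and guest name are hypothetical, and shared (or pre-copied) storage is assumed to already be in place.

```python
# Sketch of moving a running guest between two KVM hosts with libvirt
# live migration. Hostnames and the guest name are hypothetical, and
# shared (or pre-copied) storage is assumed to already be in place.
import libvirt

src = libvirt.open("qemu+ssh://old-host.example.com/system")
dom = src.lookupByName("app-server-01")

# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied to
# the new physical host; VIR_MIGRATE_PERSIST_DEST defines it there so it
# survives a reboot of the destination.
flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PERSIST_DEST
dom.migrateToURI("qemu+ssh://new-host.example.com/system", flags, None, 0)

src.close()
```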

Often this is the true benefit of virtualization and abstraction mechanisms.  It is not, necessarily, to improve the day to day operations of a system but to reduce risk and provide flexibility and options in the future, preparing for unknowns that are either unpredictable or are simply ignored in most common situations.  Rarely is such planning done at all, much to the chagrin of IT departments left with difficult and dangerous upgrades that could have been easily mitigated.

There are many features of virtualization that are applicable to only special scenarios.  Many virtualization products include live migration tools for moving running workloads between hosts, or possibly even between storage devices, without downtime.  High availability and fault tolerant options are often available allowing some workloads to rapidly or even transparently recover from system hardware failure, moving from failed hardware to redundant hardware without user intervention.  While more of a niche benefit and certainly not to be included in a list of why “nearly all workloads”  should be virtual, it is worth noting as a primary example of features that are often available and could be added later if a need for them arises as long as virtualization is used from the beginning.  Otherwise a migration to virtualization would be needed prior to being able to leverage such features.

Virtualization products typically come with extensive additional features that only matter in certain cases.  A great many of them fall into a large pool of “in case of future need.”  Possibly the biggest of all of these is the concept of consolidation, as I had mentioned at the beginning of this article.  Like other advanced features like high availability, consolidation is not a core value of virtualization but is often confused for it.  Workloads not intending to leverage high availability or consolidation should still be virtualized – without a doubt.  But these features are so potentially valuable as future options, even for scenarios where they will not be used today, that they are worth mentioning regardless.

Consolidation can be extremely valuable and it can easily be understood why so many people simply assume that it will be used as it is so often so valuable.  The availability of this once an infrastructure is in place is a key point of flexibility for handling the unknowns of future workloads.  Even when consolidation is completely unneeded today, there is a very good chance, even in the smallest of companies, that it will be useful at some unknown time in the future.  Virtualization provides us with a hedge against the unknown by preparing our systems for the maximum in flexibility.  One of the most important aspects of any IT decision is managing and reducing risk.  Virtualization does this.

Virtualization is about stability, flexibility, standardization, manageability and following best practices.  Every major enterprise virtualization product is available, at least in some form, for free today.  Any purchase would, of course, require a careful analysis of value versus expenditure.  However, with excellent enterprise options available for free from all four key product lines in this space currently (Xen, KVM, Hyper-V and VMware vSphere) we need make no such analysis.  We need only show that the implementation is a non-negative.

What makes the decision making easy is that when we consider the nominal case – the bare minimum that all enterprise virtualization provides, which is the zero cost, abstraction, encapsulation and storage based benefits – we find that we have a small benefit in effectively all cases, no measurable downsides and a very large potential benefit in the areas of flexibility and hedging against future needs.  This leaves us with a clear win and a simple decision: virtualization, being free and with essentially no downsides on its own, should be used in any case where it can be (which, at this point, is essentially all workloads.)  Additional, non-core, features like consolidation and high availability should be evaluated separately and only after the decision to virtualize has already been solidified.  No lack of need for those extended features, in any way, suggests that virtualization should not be chosen based on its own merits.

This is simply an explanation of existing industry best practices which have been to virtualize all potential workloads for many years.  This is not new nor a change of direction.  Just the fact that across the board virtualization has been an industry best practice for nearly a decade shows what a proven and accepted methodology this is.  There will always be workloads that, for one reason or another, simply cannot be virtualized, but these should be very few and far between and should prompt a deep review to find out why this is the case.

When deciding whether or not to virtualize, the approach should always be to assume that virtualization is a foregone conclusion and only vary from this if a solid, defended technical reason makes this impossible.  Nearly all arguments against virtualization come from a position of misunderstanding with a belief that consolidation, high availability, external storage, licensing cost and other loosely related or unrelated concepts are somehow intrinsic to virtualization.  They are not and should not be included in a virtualization versus physical deployment decision.  They are separate and should be evaluated as separate options.

It is worth noting that because consolidation is not part of our decision matrix in creating the base value for virtualization, all of the reasons that we are using apply equally to one to one deployments (that is, a single virtual machine on a single physical device) as to consolidated workloads (that is, multiple virtual machines on a single physical device.)  There is no situation in which a workload is “too small” to be virtualized.  If anything, it is the opposite: only for the largest workloads, typically those with extreme latency sensitivity, does a niche case for non-virtualization still exist, and even these edge cases are rapidly disappearing as virtualization latency and total workload capacities improve.  These cases are so rare and vanishing so quickly that even taking the time to mention them is probably unwise, as it suggests that exceptions based on capacity needs are common enough to evaluate for, which they are not, especially not in the SMB market.  The smaller the workload, the more ideal it is for virtualization.  This is only to reinforce that small businesses, with singular workloads, are the most ideal case for virtualization across the board rather than an exception to best practices, not to suggest that larger businesses should be looking for exceptions themselves.

The Inverted Pyramid of Doom

The 3-2-1 model of system architecture is extremely common today and almost always exactly the opposite of what a business needs, or would even want if it took the time to write down its business goals rather than approaching the architecture from a technology first perspective.  Designing a solution requires starting with business requirements; otherwise we do not merely risk an architecture that is inappropriate for the business, we should expect one.

The name refers to three (this is a soft point, it is often two or more) redundant virtualization host servers connected to two (or potentially more) redundant switches connected to a single storage device, normally a SAN (but DAS or NAS are valid here as well.) It’s an inverted pyramid because the part that matters, the virtualization hosts, depends completely on the network which, in turn, depends completely on the single SAN or alternative storage device. So everything rests on a single point of failure device and all of the protection and redundancy is built more and more on top of that fragile foundation. Unlike a proper pyramid with a wide, stable base and a point on top, this is built with all of the weakness at the bottom. (Often the ‘unicorn farts’ marketing model of “SANs are magic and can’t fail because of dual controllers” comes out here as people try to explain how this isn’t a single point of failure, but it is a single point of failure in every sense.)
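A back-of-the-envelope calculation makes the problem visible. The availability figures below are purely illustrative assumptions, not measurements or vendor claims, but they show that no matter how much redundancy is stacked on top, the whole stack can never be more available than the single storage device at its base.

```python
# Back-of-the-envelope comparison of a lone server versus the 3-2-1 stack.
# The availability numbers are illustrative assumptions, not vendor data.

def parallel(*avail):
    """Availability of redundant components (any one of them is enough)."""
    unavailability = 1.0
    for a in avail:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

def series(*avail):
    """Availability of a dependency chain (every layer must be up)."""
    total = 1.0
    for a in avail:
        total *= a
    return total

single_server = 0.999                       # one quality host on its own

hosts    = parallel(0.999, 0.999, 0.999)    # three redundant virtualization hosts
switches = parallel(0.999, 0.999)           # two redundant switches
san      = 0.999                            # one storage device, dual controllers or not

inverted_pyramid = series(hosts, switches, san)

print(f"single server:    {single_server:.6f}")
print(f"inverted pyramid: {inverted_pyramid:.6f}")
```

With these sample numbers the whole stack works out to roughly 99.8999% available – fractionally worse than the single quality server it was supposed to improve upon, at many times the cost.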

So the solution, often called a 3-2-1 design, can also be called the “Inverted Pyramid of Doom” because it is an upside down pyramid that is too fragile to run and extremely expensive for what is delivered. So unlike many other fragile models, it is very costly, not very flexible and not as reliable as simply not doing anything beyond having a single quality server.

There are times that a 3-2-1 makes sense, but mostly these are extreme edge cases where a fragile environment is desired and high levels of shared storage with massive processing capabilities are needed – not things you would see in the SMB world and very rarely elsewhere.

The inverted pyramid looks great to people who are not aware of the entire architecture, such as managers and business people.  There are a lot of boxes, a lot of wires, there are software components typically which are labeled “HA” which, to the outside observer, makes it sound like the entire solution must be highly reliable.  Inverted Pyramids are popular because they offer “HA” from a marketing perspective making everything sound wonderful and they keep the overall cost within reason so it seems almost like a miracle – High Availability promises without the traditional costs.  The additional “redundancy” of some of the components is great for marketing.  As reliability is difficult to measure, business people and technical people alike often resort to speaking of redundancy instead of reliability as it is easy to see redundancy.  The inverted pyramid speaks well to these people as it provides redundancy without reliability.  The redundancy is not where it matters most.  It is absolutely critical to remember that redundancy is not a check box nor is redundancy a goal, it is a tool to use to obtain reliability improvements.  Improper redundancy has no value.  What good is a car with a redundant steering wheel in the trunk?  What good is a redundant aircraft if you die when the first one crashes?  What good is a redundant server if your business is down and its data lost when the single SAN goes up in smoke?

The inverted pyramid is one of the most obvious and ubiquitous examples of “The Emperor’s New Clothes” used in technology sales.  It meets the needs of resellers and vendors by promoting high margin sales and minimizing low margin ones, and nearly every vendor promotes it because of its financial advantages to the seller.  It has become widely accepted as a great solution because it is just complicated and technical enough that widespread repudiation does not occur, and under the incredible market pressure from the vast array of vendors benefiting from the architecture it has become the status quo; few people stop to question whether the architecture has any merit at all.  Add to that the fact that all systems today are highly reliable compared to systems of just a decade ago, so failures are uncommon enough that nobody notices they are more common than they should be, and that statistical failure rates are not shared between SMBs, and the architecture thrives and has become the de facto solution set for most SMBs.

The bottom line is that the Inverted Pyramid approach makes no sense – it is far more unreliable than simpler solutions, even just a single server standing on its own, while costing many times more.  If cost is a key driver, it should be ruled out completely.  If reliability is a key driver, it should be ruled out completely.  Only if cost and reliability take very far back seats to flexibility should it even be put on the table and even then it is rare that a lower cost, more reliable solution doesn’t match it in overall flexibility within the anticipated scope of flexibility.  It is best avoided altogether.

Originally published on Spiceworks in abridged form: http://community.spiceworks.com/topic/312493-the-inverted-pyramid-of-doom

When to Consider a Private Cloud?

The idea of running a private cloud, hosted or on premise, for a single company is rapidly becoming a commonplace one.  More and more businesses are learning of cloud computing and seeing that running their own cloud platform is both feasible and potentially valuable to the business.  But due to a general lack of cloud knowledge it is becoming more and more common that clouds are recommended when they do not suit the needs of the business at all, instead being mistaken for traditional virtualization management systems.

A cloud is a special type of virtualization platform and fills a unique niche.  Cloud computing takes traditional virtualization and layers onto it automated scaling and provisioning that allow for rapid, horizontal scaling of applications.  This is not a normal business need.  Cloud also lends itself to, and is often tied to, self-service resource provisioning; this alone does not make something a cloud nor justify the move to a cloud platform, but it can be an added incentive.  What makes cloud interesting is the ability to provide self-service portals to end users and the ability for applications to provision themselves automatically.  These are the critical aspects that set a cloud platform apart from traditional virtualization.

What a cloud does not imply are features such as simplified whole-domain system management from a single pane of glass, large scale consolidation, easy migration between hardware systems, rapid provisioning of new systems, virtualization, high availability, resource over-commitment, etc.  These features are all available in other ways, primarily through or on top of standard platform virtualization (VMware vSphere, Microsoft’s Hyper-V, Xen, et al.)  It is not that these features cannot be made available in a private cloud, but the features are not aspects of the cloud but rather of the underlying virtualization platform.  The cloud layer is above these and simply passes through the benefits of the underlying layers.

Often cloud is approached because of a misunderstanding that many of the features commonly associated with private clouds are not available in some other, simpler form.  This is rarely the case.  Normal virtualization platforms, most commonly VMware’s vSphere and Microsoft’s Hyper-V, offer all of these options.  They can be used to make robust clusters of physical servers, managed from a single interface, with incredibly high reliability and rapid provisioning of new systems that require minimal specialty knowledge from the IT department and maintain traditional business workflows.  Most times, when I am speaking with businesses that believe that they may be interested in pursuing the ownership of their own cloud, the features that they really want are not cloud features at all.

The term “cloud” has simply become so popular recently that people begin to assume that important features for nearly everyone must be attributed to it to explain the sudden surge in importance, but this is simply not the case.  Cloud remains, and will remain, a predominantly niche solution appropriate for only a very small number of companies to own themselves.  The use of public clouds or the use of hosted services delivered from cloud platforms will become, and indeed has already become, nearly ubiquitous.  But ownership of a private cloud for the use of a single company is a long way from being a business need for most businesses or business units and in many cases, I suspect, never will become so.

Private clouds shine in two key areas.  The first is a business that needs a large number of temporary or ad hoc systems “spun up” on a regular basis.  This often occurs with large development teams and application testing groups, especially if these groups target multiple operating systems.  The ability to rapidly provision temporary testing systems or lab systems can be very advantageous, and the nature of cloud computing makes it easy to expose provisioning tools that allow business customers to create, manage and destroy their own system instances with, we would expect, built-in charge back mechanisms; this can be very beneficial to corporate efficiency as the interaction between the IT department and the end users becomes nearly frictionless for this transaction.  Responsibility for maintaining the cloud as a whole can easily be segregated from the responsibility of maintaining individual systems.  Seldom used in this manner for production workloads, this allows the self-service approach that many business units desperately seek today.  It is impractical on a small scale, due to the overhead of creating and maintaining the cloud platform itself, but on a large scale it can be hugely productive.  In addition to technical advantages, this aspect of cloud computing can serve as a model for thinking of IT as an internal service provider and departments as customers.  We have long discussed IT and other business units in these terms but we rarely truly think of them in this way.
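As a purely illustrative sketch of this kind of ad hoc, self-service provisioning (the platform is not specified here, so OpenStack and its openstacksdk library are assumed for the example), a tester might spin up a short-lived lab VM with a few API calls; the cloud name, image, flavor and network are all hypothetical.

```python
# Illustrative only: the platform is not specified, so an OpenStack private
# cloud and the openstacksdk library are assumed. A tester provisions a
# short-lived lab VM through the self-service API; the cloud name, image,
# flavor and network below are hypothetical.
import openstack

conn = openstack.connect(cloud="internal-lab")   # credentials from clouds.yaml

image = conn.compute.find_image("windows-test-template")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("dev-lab-net")

server = conn.compute.create_server(
    name="qa-adhoc-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print("Lab VM ready:", server.name)

# When testing is finished the same user can tear it down just as easily:
# conn.compute.delete_server(server)
```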

The second area where cloud computing really comes into its own, and the one for which the concept was developed originally, is handling auto provisioning for horizontally scaling applications.  That is, application workloads that are able to increase their capacity handling ability by spawning new instances of themselves.  On a small scale, many web applications, due to their stateless nature, do this within a single system by spawning new worker threads to handle additional connections.  An Apache web server might start with eight listeners ready to service requests but, as those threads become exhausted, it automatically starts new threads to handle additional incoming connections so that it is able to scale within the confines of a single server.  To expand on this concept as applied to cloud computing, that same application, sensing thread exhaustion approaching on a system-wide level (or based on other metrics such as a lack of free memory or a loss of performance), would use an API exposed by the cloud computing platform to signal the cloud management system to provision a new copy of the system that was calling it – essentially cloning itself on the fly.  In a matter of seconds, a new virtual server, identical to the first, would be up and running and joining its parent in servicing incoming requests.  This child or clone system would likewise spawn new threads internally, as needed, and then if it too sensed exhaustion it would call the cloud platform to create yet another new system to handle even more threads.  In this way the application can grow itself almost infinitely (within the hardware limits of the entire cloud platform) as needed, on the fly, automatically.  Then, as workloads die down and individual systems become idle, each system can in turn signal to the cloud management system that it is no longer needed and be powered off and destroyed; since it was simply a stateless clone, this frees capacity for other applications and workloads that may need to take advantage of it.
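A toy sketch of that self-cloning behavior might look like the following; the load metric, thresholds and names are arbitrary, OpenStack and openstacksdk are again assumed purely for illustration, and a real implementation would add cooldowns, instance limits and the corresponding scale-down logic.

```python
# Toy sketch of the self-cloning idea described above: a worker node watches
# its own load and asks the cloud layer for a clone of itself as it nears
# exhaustion. OpenStack/openstacksdk is assumed purely for illustration;
# the metric, thresholds and names are arbitrary.
import os
import time

import openstack

BUSY_THRESHOLD = 0.8 * (os.cpu_count() or 1)   # crude stand-in for "thread exhaustion"
CHECK_INTERVAL = 60                            # seconds between checks

conn = openstack.connect(cloud="internal-cloud")

def spawn_clone():
    """Ask the cloud platform for another instance built from our own template."""
    image = conn.compute.find_image("web-tier-template")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("web-net")
    conn.compute.create_server(
        name="web-tier-auto",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )

while True:
    load_1min, _, _ = os.getloadavg()          # Unix one-minute load average
    if load_1min > BUSY_THRESHOLD:
        spawn_clone()                          # horizontal scale-out: clone this node
    time.sleep(CHECK_INTERVAL)
```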

As we can see, cloud computing is massively powerful, especially with the bulk of today’s public and enterprise applications being written in a stateless manner in order to take advantage of web protocols and end user interfaces. Web applications are especially adept at leveraging cloud computing’s scalability model and most large scale web applications leverage this elastic expanding and contracting of capacity today.  Many new NoSQL models are beginning to emerge that signal that databases, in addition to application front end processing nodes, may soon benefit from similar models on a large scale.  This can certainly be leveraged for internal applications as well as publicly facing ones, however internal applications rarely need to scale beyond a single system and so it is quite rare to find private clouds being leveraged in quite this way.

The dangers around cloud computing come in the form of additional complexity above and beyond normal virtualization.  There is the potential for complex storage needed to support the platform and more layers to learn and maintain.  Cloud computing’s ability to rapidly create and destroy systems can make it tempting for users to treat cloud resources as if they were persistent systems, which they can be made to be, and this can result in data loss when users receive behavior very different from what is traditional and expected.  Possibly the biggest cloud concern is a human one: the increased likelihood of experiencing uncontrolled system sprawl as end users wildly spin up more and more new systems which, as they are created by end users and not IT, are probably not tightly controlled and monitored, leaving systems in a rogue and oft forgotten state.  This can lead to a maintenance and security nightmare as systems go unpatched and uncared for, increasing risk and draining resources.  Most worrisome is the possibility that systems will be created and forgotten and potentially exist without proper licensing.  Tracking and reporting on auto provisioned systems carries process risk caused by the huge shift in how systems are created.  IT departments are accustomed to the heavy licensing processes necessary to maintain compliance, but with cloud computing there is a potential for this process to be exposed to the business units in a way that they are not at all equipped to handle.  There are accommodations for the licensing needs of cloud computing, but this is extra complexity and management that must be addressed.  Allowing systems to exist without direct IT department oversight clearly carries risk of a potentially unforeseen nature.

Private cloud ownership brings many exciting possibilities, but it is clear that these benefits and opportunities are not for everyone.  They cater to larger businesses, to those with good process control, to companies running especially adapted applications that are capable of taking advantage of the system-level elasticity of the resources and those needing large scale ad hoc system creation and destruction provided, as a service, for end users to self-provision.  Most large enterprises will find limited use for cloud computing in house.  Smaller organizations will rarely find cloud computing to be advantageous in the near future, if ever.