
When to Consider a Private Cloud?

The idea of running a private cloud, hosted or on premise, for a single company is rapidly becoming commonplace.  More and more businesses are learning of cloud computing and seeing that running their own cloud platform is both feasible and potentially valuable.  But due to a general lack of cloud knowledge, it is becoming more and more common for clouds to be recommended when they do not suit the needs of the business at all, having been mistaken for traditional virtualization management systems.

A cloud is a special type of virtualization platform and fills a unique niche.  Cloud computing takes traditional virtualization and layers automated scaling and provisioning on top of it, allowing for rapid, horizontal scaling of applications.  This is not a normal business need.  Cloud also lends itself to, and is often tied to, self-service resource provisioning, but this alone neither makes something a cloud nor justifies the move to a cloud platform; it can, however, be an added incentive.  What makes cloud interesting is the ability to provide self-service portals to end users and the ability for applications to provision new instances of themselves.  These are the critical aspects that set a cloud platform apart from traditional virtualization.

A cloud does not imply features such as simplified whole-domain system management from a single pane of glass, large scale consolidation, easy migration between hardware systems, rapid provisioning of new systems, virtualization, high availability, resource over-commitment and so on.  These features are all available in other ways, primarily through or on top of standard platform virtualization (VMware vSphere, Microsoft’s Hyper-V, Xen, et al.)  It is not that these features cannot be made available in a private cloud, but they are aspects of the underlying virtualization platform rather than of the cloud itself.  The cloud layer sits above these and simply passes through the benefits of the underlying layers.

Often cloud is approached because of a misunderstanding, a belief that many of the features commonly associated with private clouds are not available in some other, simpler form.  This is rarely the case.  Normal virtualization platforms, most commonly VMware’s vSphere and Microsoft’s Hyper-V, offer all of these options.  They can be used to build robust clusters of physical servers, managed from a single interface, with incredibly high reliability and rapid provisioning of new systems, requiring minimal specialty knowledge from the IT department and maintaining traditional business workflows.  Most times, when I am speaking with businesses that believe they may be interested in owning their own cloud, the features that they really want are not cloud features at all.

The term “cloud” has simply become so popular recently that people assume features important to nearly everyone must be attributed to it in order to explain its sudden surge in importance, but this is simply not the case.  Cloud remains, and will remain, a predominantly niche solution appropriate for only a very small number of companies to own themselves.  The use of public clouds, or of hosted services delivered from cloud platforms, will become, and indeed already has become, nearly ubiquitous.  But ownership of a private cloud for the use of a single company is a long way from being a business need for most businesses or business units and, in many cases, I suspect, never will become so.

Private clouds shine in two key areas.  The first is a business that needs a large number of temporary or ad hoc systems “spun up” on a regular basis.  This often occurs with large development teams and application testing groups, especially if these groups target multiple operating systems.  The ability to rapidly provision temporary testing or lab systems can be very advantageous.  Cloud computing’s ability to expose provisioning tools that let business customers create, manage and destroy their own system instances, ideally with built-in chargeback mechanisms, can greatly improve corporate efficiency because the interaction between the IT department and the end users becomes nearly frictionless for this transaction.  Responsibility for maintaining the cloud as a whole can easily be segregated from the responsibility of maintaining individual systems.  Rarely used in this manner for production workloads, this allows the self-service approach that many business units desperately seek today.  It is impractical on a small scale due to the overhead of creating and maintaining the cloud platform itself, but on a large scale it can be hugely productive.  In addition to the technical advantages, this aspect of cloud computing can serve as a model for thinking of IT as an internal service provider and of departments as customers.  We have long discussed IT and other business units in these terms but we rarely truly think of them in this way.
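As a rough illustration of how such a self-service and chargeback workflow might be wired together, the sketch below layers a simple chargeback ledger over a hypothetical provisioning call; the `cloud` object, its `create_instance` method, the template name and the hourly rate are all assumptions standing in for whatever a particular cloud platform actually exposes.

```python
from datetime import datetime, timezone

class ChargebackLedger:
    """Tracks which department owns each instance so usage can be billed back."""
    def __init__(self):
        self.entries = []

    def record(self, department, instance_id, hourly_rate):
        self.entries.append({
            "department": department,
            "instance": instance_id,
            "hourly_rate": hourly_rate,
            "created": datetime.now(timezone.utc),
        })

def provision_lab_system(cloud, ledger, department, template="lab-base-image"):
    """Self-service request: spin up a temporary lab system and tag it for chargeback.

    `cloud` stands in for whatever provisioning API the platform exposes; the
    method name, the template and the rate are illustrative only.
    """
    instance = cloud.create_instance(template=template, owner=department)
    ledger.record(department, instance.id, hourly_rate=0.10)
    return instance
```

The point of the sketch is that IT owns the platform and the ledger while the requesting department owns the instance it created, which is exactly the division of responsibility described above.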

The second area where cloud computing really comes into its own, and the one for which the concept was originally developed, is auto-provisioning for horizontally scaling applications: application workloads that can increase their capacity by spawning new instances of themselves.  On a small scale, many web applications, due to their stateless nature, already do this within a single system by spawning new worker threads to handle additional connections.  An Apache web server might start with eight listeners ready to service requests, but as those threads become exhausted it automatically starts new ones to handle additional incoming connections, scaling within the confines of a single server.  Applied to cloud computing, that same application, sensing that thread exhaustion is approaching at the system-wide level (or reacting to other metrics such as a lack of free memory or a loss of performance), would use an API exposed by the cloud platform to ask the cloud management system to provision a new copy of the system making the call – essentially cloning itself on the fly.  In a matter of seconds a new virtual server, identical to the first, would be up and running and joining its parent in servicing incoming requests.  This child, or clone, would likewise spawn new threads internally as needed, and if it too sensed exhaustion it would call the cloud platform to create yet another new system to handle even more threads.  In this way the application can grow itself almost infinitely (within the hardware limits of the entire cloud platform) as needed, on the fly, automatically.  Then, as workloads die down and individual systems become idle, each system can signal to the cloud management system that it is no longer needed and be powered off and destroyed – it was simply a stateless clone – freeing capacity for other applications and workloads that may need it.
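A minimal sketch of that self-provisioning loop, under the assumption of a platform API and a local metrics source, might look like the following; the thresholds, the `cloud` client with its `clone_instance`, `instance_count` and `deregister_and_destroy` calls, and the `busy_workers`/`total_workers` metrics are illustrative stand-ins, not any real platform’s interface.

```python
import time

# Illustrative thresholds; real values depend on the application and the platform.
SCALE_UP_UTILIZATION = 0.85    # ask for a clone when the worker pool is ~85% busy
SCALE_DOWN_UTILIZATION = 0.10  # retire this node when it is nearly idle
CHECK_INTERVAL_SECONDS = 30

def autoscale_loop(cloud, metrics, self_instance_id):
    """Runs on each web node: watch local worker exhaustion and ask the cloud
    platform to clone or destroy this instance as load changes.

    `cloud` and `metrics` are stand-ins for a platform provisioning API and a
    local metrics source (e.g. the web server's status page); the method names
    are assumptions made for the sake of the sketch.
    """
    while True:
        utilization = metrics.busy_workers() / metrics.total_workers()

        if utilization >= SCALE_UP_UTILIZATION:
            # Approaching exhaustion: provision an identical sibling of this node.
            cloud.clone_instance(self_instance_id)
        elif utilization <= SCALE_DOWN_UTILIZATION and cloud.instance_count() > 1:
            # Nearly idle and not the last node: remove this stateless clone and
            # hand its capacity back to the cloud for other workloads.
            cloud.deregister_and_destroy(self_instance_id)
            return

        time.sleep(CHECK_INTERVAL_SECONDS)
```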

As we can see, cloud computing is massively powerful, especially with the bulk of today’s public and enterprise applications being written in a stateless manner to take advantage of web protocols and end user interfaces.  Web applications are especially adept at leveraging cloud computing’s scalability model, and most large scale web applications use this elastic expansion and contraction of capacity today.  Many emerging NoSQL models signal that databases, in addition to application front end processing nodes, may soon benefit from similar approaches on a large scale.  This can certainly be leveraged for internal applications as well as publicly facing ones; however, internal applications rarely need to scale beyond a single system, so it is rare to find private clouds being used in this way.

The dangers around cloud computing come in the form of additional complexity above and beyond normal virtualization: potentially complex storage needed to support the platform and more layers to learn and maintain.  Cloud computing’s ability to rapidly create and destroy systems can tempt users to treat cloud resources as if they were persistent systems (which they can be made to be), and the resulting behavior, very different from what is traditional and expected, can lead to data loss.  Possibly the biggest cloud concern is a human one: the increased likelihood of uncontrolled system sprawl as end users wildly spin up more and more new systems which, having been created by end users rather than IT, are probably not tightly controlled or monitored, leaving systems in a rogue and oft forgotten state.  This can become a maintenance and security nightmare as systems go unpatched and uncared for, increasing risk and draining resources.  Most worrisome is the possibility that systems will be created, forgotten and left without proper licensing.  Tracking and reporting on auto-provisioned systems carries process risk caused by the huge shift in how systems are created.  IT departments are accustomed to the heavy licensing processes necessary to maintain compliance, but with cloud computing there is the potential for this process to be exposed to business units in a way that they are not at all equipped to handle.  There are accommodations for the licensing needs of cloud computing, but this is extra complexity and management that must be addressed.  Allowing systems to exist without direct IT department oversight clearly carries risk of a potentially unforeseen nature.

Private cloud ownership brings many exciting possibilities, but it is clear that these benefits and opportunities are not for everyone.  They cater to larger businesses, to those with good process control, to companies running specially adapted applications capable of taking advantage of system-level elasticity of resources and to those needing large scale ad hoc system creation and destruction provided, as a service, for end users to self-provision.  Most large enterprises will find limited use for cloud computing in house.  Smaller organizations will rarely find cloud computing to be advantageous in the near future, if ever.

Just Because You Can…

I see this concept appear in discussions surrounding virtualization all of the time.  It is a broader, more general concept, but virtualization is the “hot, new technology” facing many IT organizations and seems to be the space where the “just because you can, doesn’t mean you should” problem currently rears its ugly head most often.  As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do and do not blindly make our decisions based on popular deployment methodologies or, worse, myths.

Virtualization itself, I should point out, is something I feel should be a default decision today for those working in the x64 computing space, with systems deployed sans virtualization only when a clear and obvious necessity exists, such as specific hardware needs or latency sensitive applications.  Barring any specific need, virtualization is free to implement from many vendors and offers many benefits both today and in future-proofing the environment.

That being said, what I often see today is companies deploying virtualization not as a best practice but as a panacea for all perceived IT problems.  This it certainly is not.  Virtualization is a very important tool to have in the IT toolbox and one that we will reach for very often, but it does not solve every problem and should be treated like every other tool that we possess and used only when appropriate.

I see several things recur when virtualization comes up as a topic.  Many companies today are moving towards virtualization not because they have identified a business need but because it is the currently trending topic and people feel that if they do not implement virtualization they will somehow be left behind or miss out on some mythical functionality.  This is generally good in that it increases virtualization adoption, but it is bad because good IT and business decision making processes are being bypassed.  What often happens is that, in the wave of virtualization hype, IT departments feel that they must not only implement virtualization itself but do so in ways that may not be appropriate for their business.

There are four things that I often see tied to virtualization, often accepted as virtualization requirements, whether or not they make sense in a given business environment.  These are server consolidation, blade servers, SAN storage and high availability or live failover.

Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget that there are other important reasons for implementing it.  Clearly consolidation is a great benefit for nearly all deployments (mileage may vary, of course) and can nearly always be achieved simply through better utilization of existing resources.  It is a pretty rare company running more than a single physical server that cannot shave some amount of cost through limited consolidation, and it is not uncommon to see datacenter footprints decimated in larger organizations.
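As a back-of-the-envelope illustration of why those savings appear so readily, the sketch below estimates how many virtualization hosts a handful of lightly utilized physical servers would require; the utilization figures and the sixty percent host target are invented for the example.

```python
import math

# Hypothetical inventory: average CPU utilization of existing physical servers.
server_utilization = [0.08, 0.12, 0.05, 0.20, 0.10, 0.15, 0.07, 0.09]

HOST_TARGET_UTILIZATION = 0.60  # leave headroom on each virtualization host

total_demand = sum(server_utilization)  # ~0.86 servers' worth of real work
hosts_needed = math.ceil(total_demand / HOST_TARGET_UTILIZATION)

print(f"{len(server_utilization)} physical servers -> {hosts_needed} virtualization host(s)")
# With these example figures, eight lightly loaded physical boxes fit on two hosts.
```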

In extreme cases, though, it is not necessary to abandon virtualization projects just because consolidation proves to be out of the question.  These cases exist for companies with highly utilized systems and little budget for a preemptive consolidation investment.  Such shops can still virtualize systems “in place” on a one-to-one basis to gain the other benefits of virtualization today, then look to consolidate when hardware needs to be replaced or when larger, more powerful servers become more cost effective.  It is important not to rule out virtualization just because its most heralded benefit may not apply in your environment at the current time.

Blade servers are often seen as the natural choice for virtualization environments.  Blades may play better in a standard virtualization environment than they do with more traditional computational workloads, but this is both highly disputable and not necessarily applicable.  Being a good scenario for blades themselves does not make blades a good choice for the business.  Just because blades perform better than usual when used in this way does not imply that they perform better than traditional servers – only that they have potentially closed the gap.

Blades need to be evaluated using the same harsh criteria when virtualizing as when not and, very often, they will continue to fail to provide the long term business value needed to choose them over more flexible alternatives.  Blades remain far from a necessity for virtualization and are often, in my opinion, a very poor choice indeed.

One of the most common misconceptions is that by moving to virtualization one must also move to shared storage such as SAN.  This mindset is the natural reaction to the desire to also achieve other benefits of virtualization which, even if they do not strictly require SAN, benefit greatly from it.  The ability to load balance or fail over between systems is heavily facilitated by having a shared storage backend.  It is a myth that this is a hard requirement, but the alternative, replicated local storage, brings its own complexities and limitations.

But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own.  If virtualization makes sense for your environment but you need no features that require SAN, then virtualize without shared storage.  There are many cases where local storage backed virtualization is an ideal deployment scenario.  There is no need to dismiss this approach without first giving it serious consideration.

The last major assumed necessary feature of virtualization is system level high availability or instant failover for your operating system.  Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us.  However, few companies needed high availability at this level prior to implementing virtualization and the price tag of the necessary infrastructure and software to do it with virtualization is often so high as to make it too expensive to justify.

High availability systems are complex and often overkill.  It is a very rare business system that requires transparent failover, even among the most critical systems, and those companies with that requirement almost certainly already have failover processes in place.  I see companies moving towards high availability all of the time when looking at virtualization simply because a vendor saw an opportunity to dramatically oversell the original requirements.  The cost of high availability is seldom justified by the revenue loss it prevents through reduced downtime.  With non-highly-available virtualization, downtime from a failed hardware device might be measured in minutes if backups are handled well.  This means that high availability has to justify its cost by eliminating just a few minutes of unplanned downtime per year, minus any additional risk assumed through the added system complexity.  Even in the biggest organizations this is seldom justified on any large scale, and in a moderately sized company it is uncommon altogether.  Yet today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss, simply because the marketing literature promoted the concept.
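The arithmetic behind that judgment can be made explicit with a quick, purely illustrative calculation; every figure below is an assumption to be replaced with a business’s own downtime cost, failure rate and HA pricing.

```python
# Illustrative figures only; substitute your own downtime cost, failure rate and HA pricing.
revenue_loss_per_hour = 2_000        # cost to the business per hour of downtime
expected_failures_per_year = 0.5     # hardware failures on a single host are rare
recovery_hours_without_ha = 2.0      # restore from backup or swap hardware
recovery_hours_with_ha = 0.05        # near-instant automated failover

avoided_downtime_hours = expected_failures_per_year * (
    recovery_hours_without_ha - recovery_hours_with_ha
)
expected_annual_benefit = avoided_downtime_hours * revenue_loss_per_hour

ha_annual_cost = 15_000  # extra licensing, shared storage and added complexity

print(f"Expected annual benefit of HA: ${expected_annual_benefit:,.0f}")
print(f"Annual cost of HA:             ${ha_annual_cost:,.0f}")
# With these example figures the benefit (about $1,950 per year) is far below the cost.
```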

Like anything, virtualization and all of the associated possibilities that it brings to the table need to be evaluated individually in the context of the organization considering them.  If an individual feature does not make sense for your business, do not assume that you have to purchase or implement it.  Many organizations virtualize but use only a few, if any, of these “assumed” features.  Don’t look at virtualization as a black box; look at the parts and consider them as you would any other technology project.

What often happens is a snowball effect: one feature, likely high availability, is assumed to be necessary without a proper business assessment being performed.  Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost.  Even if the high availability features are never purchased, the decision to use SAN may already have been made and never revisited after the plan changes.  It is very common, in my experience, to find projects of this nature where sometimes more than fifty percent of the total expenditure is spent on products whose purpose the purchaser is unable even to describe.

This concept does not stop at virtualization.  Extend it to everything that you do.  Keep IT in the perspective of the business and don’t assume that adopting one technology means that you must also adopt the other technologies popularly associated with it.

Linux Virtualization Deployment Advantage

As more and more businesses begin to deploy virtualization broadly, we must step back and reconsider the opportunities presented by this shift in datacenter architecture.  Virtualization comes with new challenges and with potential not only for cost savings but for aggressive project implementation.  Small businesses especially, when using virtualization, find themselves prepared for projects that they could never have envisioned during the era of physical-only servers.

The big winners in this space of emerging virtualization opportunity are the open source operating systems such as Linux, OpenSolaris and FreeBSD.  The reason that these particular operating systems have unique opportunities that Windows and Mac OS X do not is the way that they are, or can be, licensed.  Each of these operating systems has an option by which it is available completely for free – something that cannot be done with Windows or Mac OS X.

Traditionally, when purchasing a new server a business would price out expensive hardware with relatively inexpensive software.  An enterprise operating system, such as Windows, would typically represent a relatively small percentage of the cost of a new server.  Even a small server would cost a few thousand dollars and Windows Server can easily be purchased for less than one thousand dollars.  In this scenario a business looking to purchase a new server would see only a very small cost savings in opting for a “free” operating system since introducing a new OS has its own risks and the bulk of the cost of the new server is in the hardware which would still need to be purchased.

Given that equation, only a rare small business would consider the purchase of a non-Windows-based server.  The opportunity for failure is too high given the risk of change, and the cost savings are too small.  Today, though, virtualization is commonplace and becoming more ubiquitous every day.  Businesses virtualizing their infrastructure typically have excess capacity on their servers that is going unused.  As these businesses and their IT departments begin to look to utilize this spare capacity, they will increasingly find that the cost of deploying a virtualized Windows Server remains high while the cost of deploying a virtualized Linux or OpenSolaris server is nominal – generally nothing more than the effort to do so, with no capital expenditure or its associated risk.
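A quick comparison makes the shift concrete; the dollar figures below are illustrative assumptions only, but they show how the operating system goes from a minor share of a new physical server’s cost to essentially the entire marginal cost of a new virtual one.

```python
# Illustrative figures only.
physical_server_hardware = 4_000
windows_server_license = 900

# Physical-server era: the operating system is a small slice of the total cost.
physical_total = physical_server_hardware + windows_server_license
os_share_of_physical = windows_server_license / physical_total   # roughly 18%

# Virtualized era with spare capacity: the hardware is already paid for.
marginal_cost_windows_guest = windows_server_license   # the license is the whole cost
marginal_cost_linux_guest = 0                          # a free license means a free server

print(f"OS share of a new physical server:    {os_share_of_physical:.0%}")
print(f"Marginal cost of a new Windows guest: ${marginal_cost_windows_guest}")
print(f"Marginal cost of a new Linux guest:   ${marginal_cost_linux_guest}")
```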

The ability to deploy new servers, at any time, without any cost is a significant advantage that companies have not begun to truly comprehend.  If a business wants a new web server, for instance, they can have one provisioned and built in thirty minutes without buying any licenses.  Having redundant virtualization hardware means that a redundant web server can be had as well – again without any capital cost.  Unlike Windows (or other commercial operating systems) there is no need to purchase a second license just to have a backup server.

This means that for the first time many businesses can begin to consider clusters as well.  Typically the cost of licensing software for clustering was prohibitive but if that licensing becomes free then suddenly clusters become very attractive options.

Of course, as open source proponents will point out, the low cost of Linux and other free and open source solutions has long been a reason to move to these platforms, but this discounts the incredible shift in pricing structure that occurs only when spare usable capacity meets the already free licenses.  It is only because so many businesses have already implemented virtualization strategies, or are in the process of doing so, that this new opportunity truly presents itself.

The first challenge will be in getting businesses to begin to think of operating systems and application platforms as being free.  The ways in which businesses may take advantage of this have yet to be seen.  Businesses are so used to being hamstrung by the need to buy new hardware and expensive server software licenses for every new system deployment that the widespread availability of spare capacity for new server instances is quite novel indeed.

Of course, as with many new technology changes, it is the small and medium business space where the greatest change will likely take place.  Large enterprises are already doing datacenter consolidation and do not necessarily have spare capacity available to them as their capacity plan already takes into account virtualization.  But in the smaller business space where capacity planning is a practically non-existent practice we see a different type of opportunity.

What we typically see in small businesses moving to virtualization is an over-purchasing of hardware.  This generally comes from a misunderstanding of how capacity planning and virtual guest interaction work in a virtualized environment, from a desire to err on the side of overpowered rather than underpowered, and from the fact that virtualization capacity planning is a bit of a “black art”.  Because of this, many small businesses have server resources sitting idle.  It is not uncommon to see a powerful server virtualizing just two server instances when there is capacity to virtualize a dozen or more.

It is this overprovisioning of hardware that offers unique opportunity.  Many small businesses, and even medium sized businesses, may manage to effectively virtualize their entire existing server infrastructure leaving no further opportunity for cost savings through consolidation.  At this point the spare capacity of the existing servers offers no further cost savings and can now be viewed as capacity for growth instead.

This raises the question: what new deployment opportunities does this spare capacity create?  The question is difficult to answer as it will be different for nearly every business, but we can look at some commonalities to build a rough picture of where we may see new value presenting itself.

The most obvious new opportunity is in new web applications.  Small businesses often would like to take advantage of free web-based applications but do not want to risk deploying new, low-priority applications to their existing Windows-based web server, or do not even have a server available to do so.  Creating one or more open source application servers is incredibly simple.  Deploying a wiki, a corporate web portal, a blogging engine or news site, a bug or incident tracking application, a microblogging platform (a la laconi.ca), CRM, ERP or any of thousands of similar applications can be done quickly and easily with minimal cost using only “spare” time from the existing IT resources.  Any number of internal applications such as these could bring value to the company while producing very little load on the virtualization platform, so many could be deployed using only a small amount of excess capacity.

Beyond obvious web apps there are more feature-rich systems that could be deployed for no cost.  A great example is the OpenFire instant messaging and presence server.  Companies can suddenly roll out complete enterprise class, secure, internal instant messaging applications at no cost whatsoever.  Another example is in monitoring systems such as Nagios, Zenoss or Zabbix – all of which are available for free and represent a real benefit for companies that currently have no such system.  Enterprise monitoring completely for free.

Beyond new applications there is also an “environmental” benefit to be had.  In an enterprise environment, changes going into production pass through a series of testing stages.  Typically a big business will maintain a development environment, a user acceptance testing environment and then the production environment.  For a small business to do this with Windows is extremely cost prohibitive, as the servers in each environment need to be licensed.  But with open source servers virtualized on spare capacity, deploying virtual servers for each of these environments is completely free and allows small businesses to test their own processes before making production changes, giving them a level of stability previously unaffordable to them.

After all of these growth benefits there is one additional benefit to consider – flexibility.  Because these new systems can be deployed and tested at no cost, small shops have a new opportunity to evaluate open source solutions that may replace expensive Windows solutions they are currently using.  This could include replacing Exchange with Zimbra, IIS with Apache or Active Directory with an LDAP server.  Doing a project like this would be risky and potentially costly if the hardware and software had to be purchased up front.  But if the project can be done using only free time from the existing IT department, and as a free “proof of concept” before moving to a pilot and then a full production replacement, then risk can be minimized and the entire project can be effectively free.

While a full architectural replacement may be very aggressive for an average small business it is also a very significant potential cost savings.  Moving completely to open source systems is not for everyone and should be evaluated carefully.  The ability to evaluate a project of this magnitude, for free, is very important and small businesses should consider doing so to be sure that they are using the systems that make the most sense for their business model and needs rather than simply using the solutions with which they are already familiar or are already in place.

There are many additional ways in which free and open source products, deployed using existing excess server capacity, can be used to expand the IT infrastructure of small businesses.  Learning to seek out opportunities from IT rather than only cost savings is a new process for most small businesses and requires some relearning, but those that take the time to pursue these opportunities will find many benefits to be gained.