IT infrastructure is a challenge for any company, and especially for companies that are not large enough to implement their own full scale datacenters. Like many things in IT, the major challenges come in the form of lacking specific, seldom-used expertise and lacking the scale to utilize singular resources effectively.
This lack of scale can come in many forms. The obvious one is manpower. Managing a physical computing infrastructure requires unique skills that are separate from IT itself and are often needed “around the clock.” These can range from security to electrical to cooling and facilities to “datacenter technician” style staff. Of course, smaller businesses simply do without these roles, but this raises the cost incurred on a “per server” basis to maintain the infrastructure. Large businesses and dedicated datacenters leverage economies of scale to make the cost of physically housing an IT infrastructure lower, either by directly lowering the cost or by raising the quality and reliability of the equipment.
The cost effectiveness of delivering power, cooling and datacenter services is only one aspect of the cost of IT infrastructure in a business. The way many businesses attack this problem, by reducing infrastructure investment and staff, may offset some of the up-front costs of the infrastructure, but it generally does so to the detriment of availability and equipment longevity. Whether it is a lack of ISP redundancy, the absence of diesel electric generators or the shaving of a year or two off of a server’s service life, these costs add up, often in ways that are difficult to identify and track.
We often see the effects of low quality infrastructure in the behavior and expectations of smaller businesses. For example, in the enterprise datacenter an average server lifespan may be ten years or more, but smaller businesses often assume that a server is worn out and unreliable after seven or eight years. This increased failure rate also leads to more concern about system failure. Smaller businesses often see a higher, rather than a lower, need for redundant systems even when lower revenue would normally suggest otherwise, and they are prone to investing heavily in high availability mechanisms, often at great expense, to mitigate a perceived risk of high system failure rates that larger businesses are less likely to see. These factors combine to create high costs through more rapid system replacement and a tendency toward overbuying hardware, sometimes even doubling the otherwise necessary investment, to protect against risks created by lower quality facilities management.
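As a rough back-of-the-envelope illustration of how these factors compound, consider the following sketch; every figure in it is hypothetical, chosen only to show the arithmetic, not data from any real deployment:

```python
# Hypothetical illustration: how shorter assumed lifespans and
# redundancy overbuying compound into a higher annualized hardware
# cost. All figures are invented for the sake of the arithmetic.

server_cost = 5000.0  # purchase price of one server (hypothetical)

# Enterprise datacenter: ten-year lifespan, a single server suffices.
enterprise_annual = server_cost / 10

# Small business: seven-year assumed lifespan, doubled hardware
# (a redundant second server) to hedge against perceived failure risk.
smb_annual = (server_cost * 2) / 7

print(f"Enterprise annualized cost: ${enterprise_annual:,.2f}")  # $500.00
print(f"SMB annualized cost:        ${smb_annual:,.2f}")         # $1,428.57
print(f"Cost multiple: {smb_annual / enterprise_annual:.1f}x")   # ~2.9x
```

Under these assumed numbers the small business pays nearly three times as much per year for the same workload, purely through shorter replacement cycles and duplicated hardware.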
This concept is not unique to information infrastructure. In the audiophile world, while huge investments in high quality audio equipment are common, it is a rule of thumb that fifty percent of audio quality comes from the equipment and fifty percent comes from the environment into which it is placed. The same lesson applies to information infrastructure: lower cost gear may run longer and more reliably in a high quality physical environment than more expensive, better engineered equipment will in a lower quality one.
Of course, the most obvious contributors to lower reliability come from being unable to maintain redundant generators, independent power rails, adequate fuel supplies, uninterruptible power supply units, steady temperature and humidity, air filtration and, of course, highly redundant multi-path WAN access. These are the aspects we think of all the time, and they are almost completely out of reach for all but the largest companies. Even simple things, like restricting server room access to essential staff only, can be an insurmountable challenge in a small environment.
These challenges create an opportunity for the SME, SMB and SOHO markets to look for ways to leverage combined scale. While many companies today turn to ideas such as hosted cloud computing, the costs associated with elastically expanding capacity often make this impractical, as this same market struggles the most to actually utilize that type of functionality. Cloud computing can be an answer in some cases, but normally only for the very smallest of companies, for whom a single server is too much scale, or for those companies so large that they have a DevOps-style automation infrastructure capable of scaling elastically with demand and workloads that suit this process. These companies are the exception, not the norm. More often, hosted cloud computing makes sense only for a specific subset of public-facing workloads, and only in some cases.
For the majority of companies too small to create the scale necessary to build out their own full scale IT infrastructure, the answer is likely to be found in colocation. It must be noted that there are potentially overarching locational or environmental factors that can make off-premises infrastructure impossible or at least impractical. Most businesses, however, will not be subject to these limitations.
Colocation tackles the cost challenges of the smaller business market by generating the scale necessary to make high quality, dedicated information infrastructure facilities possible. This includes staff, WAN connectivity, environmental controls, power, and expertise. Cost savings can come from surprising places, including a lower power cost per kilowatt hour, lower cooling and power conditioning costs, and higher real estate density.
It is often believed that colocation is a premium service for businesses with needs above and beyond the average, but in reality colocation often is, and should be, chosen because it represents an opportunity to lower costs while also improving reliability. In most cases colocation will actually bring a month-by-month cost savings, providing impressive return on investment potential over time: the initial cost can be similar to other investments, but the ongoing monthly cost can be lower and, perhaps more importantly, costs become far more predictable, with fewer risks and unexpected expenditures.
Because the costs of services are potentially very granular, it is actually far easier for colocation to lower overall expenditure than is generally believed. For example, a small business with just one or two servers would still need certain basics such as air conditioning and UPS support, plus footprint space and security, all dedicated to only a very small amount of equipment. In a colocation facility those same servers may represent less than one percent of the load on a large, high efficiency cooling system, may use just a small fraction of a large UPS, and so forth.
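To make the granularity point concrete, here is a minimal sketch comparing the two models for a hypothetical two-server business; all dollar figures are invented assumptions for illustration, not quotes from any real facility:

```python
# Hypothetical comparison of dedicated vs. shared (colocated)
# infrastructure cost for a two-server small business.
# All dollar figures are illustrative assumptions.

servers = 2

# On premises: the business carries whole units of infrastructure
# (a small AC unit, a UPS, secured room space) no matter how little
# of each its two servers actually consume.
on_prem_monthly = {
    "cooling": 300.0,         # small dedicated AC unit, run continuously
    "ups": 100.0,             # amortized small UPS plus battery refresh
    "space_security": 250.0,  # secured room footprint
}

# Colocation: the same services are billed as a tiny fraction of
# large, high-efficiency shared systems.
colo_monthly = {
    "cooling_share": 40.0,  # under 1% of a facility-scale cooling plant
    "ups_share": 20.0,      # small fraction of a facility UPS
    "rack_space": 150.0,    # a few U of rack space, security included
}

on_prem_total = sum(on_prem_monthly.values())
colo_total = sum(colo_monthly.values())

print(f"On-premises per server: ${on_prem_total / servers:,.2f}/month")  # $325.00
print(f"Colocation per server:  ${colo_total / servers:,.2f}/month")     # $105.00
```

The specific numbers matter far less than the structure: on premises, each line item is a whole unit purchased outright; colocated, each is a metered sliver of something much larger and more efficient.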
Colocation also frees IT staff from performing datacenter functions, for which they are generally untrained and poorly qualified, to focus on the tasks at which they are more valuable and which they actually intended to do. The datacenter tasks can then be performed by experienced, dedicated datacenter staff.
Calculating exact ROI can be challenging because individual cases are unique and depend heavily on the workloads, use cases, independent needs and environmental factors of an individual business and the colocation options considered. But it should be approached with the mindset that colocation does not present only an opportunity to improve the quality and reliability of IT infrastructure services, nor only a potential return on investment, but that it may, in fact, do both of these things on top of fundamentally lowering costs overall.
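One way such an evaluation might be framed is sketched below; the inputs are placeholders that a business would replace with its own figures, and real cases will vary with workloads and the facility chosen:

```python
# Hypothetical ROI framing for a colocation move. All inputs are
# placeholder assumptions, not figures from any real engagement.

migration_cost = 8000.0  # one-time: transport, re-cabling, setup
on_prem_monthly = 650.0  # current facility-related monthly spend
colo_monthly = 450.0     # colocation monthly fee

monthly_savings = on_prem_monthly - colo_monthly
payback_months = migration_cost / monthly_savings

# Simple five-year ROI on the one-time migration investment.
five_year_savings = monthly_savings * 60
roi = (five_year_savings - migration_cost) / migration_cost

print(f"Payback period: {payback_months:.0f} months")  # 40 months
print(f"Five-year ROI:  {roi:.0%}")                    # 50%
```

This deliberately ignores the harder-to-quantify factors discussed above, such as improved reliability, longer equipment life and more predictable spending, all of which would push the real return higher than the raw arithmetic suggests.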
Two issues here: we already know that it’s possible to compromise data in VMs running on the same host, so colocation isn’t always a good idea from a security standpoint.
Second, no one will ever care as much about a customer’s data or services as the customer itself. We have seen outages with Google, Amazon, Microsoft, and others. If they, with their nearly limitless engineering talent and financial resources, are unable to provide reliable service, it seems pretty risky to outsource IT to a third party whose system is beyond the customer’s control.
I’m unclear what the first comment has to do with the colocation element. If the concern is VMs being compromised (this is not a real world threat; there are lab cases where it has been demonstrated in theory, but those are extremely limited), then the threat is equal whether colocated or on premises. Colocation has nothing to do with virtualization, so that threat is unaffected by the location decision.
The second point misses that the big cloud providers often care dramatically more about the safety of customer data than the customers themselves do. Not only do they care more (because they have more on the line), but they have more resources than any normal business, even those in the Fortune 500, to put towards the problem.
The idea that big cloud providers do not provide 100% uptime “in a single datacenter” is not something that they even propose to offer; that is a misunderstanding of the goals of those vendors and of cloud computing. It only affects customers who either intentionally chose to accept risk and forgo high uptime, or who lacked the IT understanding to design their systems and were in no position to be designing those kinds of services. Expecting these providers to deliver a kind of uptime that they never attempted to provide is a fault of the customers, not the providers.
Thirdly, just because Amazon can’t produce 100% uptime doesn’t suggest that its security and uptime are not far better than the security and uptime of the customer. Small businesses cannot compete, at all, with the uptime of these providers. No service, none, provides 100% uptime. So ruling out the “better” service because it is not perfect is not logical.
Lastly, none of these points relate to colocation. The providers and services being used as examples are very different things and are not colocation providers.