All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves.) Began transitioning to IT in 1994 with my first mixed role in system administration.

Avoiding Local Service Providers

Inflammatory article titles aside, choosing a technology service provider based, even partially, on the fact that they are located geographically near to you is almost always a very bad idea.  Knowledge based services are difficult enough to find at all; finding the best potential skills, experience and price is harder still when artificial and unnecessary constraints limit the field of potential candidates.

With the rare exception of major global market cities like New York City and London, it is nearly impossible to find a full range of Information Technology skills in a single locality, at least not in conjunction with a great degree of experience and breadth.  This is true of nearly all highly technical industries – expertise tends to cluster around a handful of localities around the world while the remaining skills are scattered rather unpredictably, often because the people in highest demand can command the salary and location that they desire and live where they want to, not where they have to.

IT, more than nearly any other field, gains little from being geographically near to the business that it supports.  Enterprise IT departments, even when located locally to their associated businesses and working in offices on premises, are often kept isolated in separate buildings away from both the businesses that they support and the physical systems on which they work.  It is actually very rare for enterprise server admins to ever physically see their servers or for network admins to see their switches and routers.  This becomes even less likely when we start talking about roles like database administrators, software developers and others who have even less association with devices that have any physical component.

Adding a locality requirement when looking for consulting talent (and in many cases even internal IT staff) introduces an artificial constraint that eliminates nearly the entire possible field of talented people while encouraging people to work on site even for work where it makes no sense.  Working on site often causes a large increase in cost and a loss of productivity due to interruptions, lack of resources, poor work environment, travel and the like.  Working with exclusively or predominantly remote resources encourages a healthy investment in efficient working conditions that generally pays off very well.  But it is important to keep in mind that a service company being remote does not imply that the work that they do will be remote.  In many cases this will make sense, but in others it will not.

Location agnostic workers have many advantages.  By not being tied to a specific location you get far more flexibility as to skill level (allowing you to pursue the absolute best people), cost (allowing you to hire people living in low cost areas), broader skill sets, larger staffs, or simply the ability to offer flexibility itself as an incentive.  Choosing purely local services limits you in many ways.

Companies that are not based locally are not necessarily unable to provide local resources.  Many companies work with local resources, either local companies or individuals, to allow them to have a local presence.  In many cases this is simply what we call local “hands” and is analogous to how most enterprises work internally with centrally or remotely based IT staff and physical “hands” existing only at locations with physical equipment to be serviced.  In cases where specific expertise needs to be located with physical equipment or people it is common for companies to either staff locally in cases where the resource is needed on a very regular basis or to have specific resources travel to the location when needed.  These techniques are generally far more effective than attempting to hire firms with the needed staff already coincidentally located in the best location.  This can easily be more cost effective than working with a full staff that is already local.

As time marches forward, needs change as well.  Companies that work locally only can find themselves facing new challenges when they expand to include other regions or locations.  Do they choose vendors and partners only where they were originally located?  Or where they are moving to or expanding to?  Do they choose local for each location separately?  The idea of working with local resources only is nearly exclusive to the smallest of businesses.  Typically, as businesses grow, the concept of local begins to change in interesting ways.

Locality and jurisdiction may represent different things.  In many cases it may be necessary to work with businesses located in the same state or country as your business for legal or financial logistical reasons, and this can often make sense.  Small companies especially may not be prepared to tackle the complexities of working with a foreign firm.  Larger companies may find even these boundaries worth ignoring.  But the idea that location should be ignored should not be taken to mean that jurisdiction, by extension, should also be ignored.  Jurisdiction still plays a significant role – one that some IT service providers or other vendors may be able to navigate on your behalf, allowing you to focus on working with a vendor within your jurisdiction while getting the benefits of support from another jurisdiction.

As with many artificial constraint situations, not only do we generally eliminate the most ideal vendor candidates, but we also risk “informing” the existing vendor candidate pool that we care more about locality than quality of service or other important factors.  This can lead to a situation where the vendor, especially in a smaller market, feels that they have a lock on you as the customer and does not need to perform up to a market standard level or price competitively (as there is no true competition given the constraints), or worse.  A vendor who feels that they have a trapped customer is unlikely to perform as a good vendor long term.

Of course we don’t want to avoid companies simply because they are local to our own businesses, but we should not be giving undue preference to companies for this reason either.  Some work has advantages to being done in person; there is no denying this.  But we must be careful not to extend this to roles and needs that do not have this advantage, nor should we confuse the location of a vendor with the location(s) where they do, or are willing to do, business.

In extreme cases, all IT work can, in theory, be done completely remotely and only the bench work (the physical “remote hands”) aspects of IT need an on premises presence.  This is an extreme, and of course there are reasons to have IT on site.  Working with a vendor to determine how service can best be provided – locally, remotely or a combination of the two – can be very beneficial.

In a broader context, the most important concept here is to avoid adding artificial or unnecessary constraints to the vendor selection process.  Assuming that a local vendor will be able or willing to deliver value that a non-local vendor cannot or will not is just one way that we might bring assumption or prejudice to a process such as this.  There is every possibility that the local company will do the best possible job and be the best, most viable vendor long term – but the chances are far higher that you will find the right partner for your business elsewhere.  It’s a big world and in IT, more than nearly any other field, it is becoming a large, flat playing field.

The Jurassic Park Effect

“If I may… Um, I’ll tell you the problem with the scientific power that you’re using here, it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now …” – Dr. Ian Malcolm, Jurassic Park

When looking at building a storage server or NAS, there is a common feeling that what is needed is a “NAS operating system.”  This is an odd reaction, I find, since the term NAS means nothing more than a “fileserver with a dedicated storage interface” or, in other words, just a file server with limited exposed functionality.  The reason that we choose physical NAS appliances is for the integrated support and sometimes for special, proprietary functionality (NetApp being a key example of this, offering extensive SMB and NFS integration and some really unique RAID and filesystem options, or Exablox offering fully managed scale out file storage and RAIN style protection.)  Using a NAS to replace a traditional file server is, for the most part, a fairly recent phenomenon and one that I have found is often driven by misconception or by the impression that managing a file server, one of the most basic IT workloads, is special or hard.  File servers are generally considered the most basic form of server – traditionally what people meant when using the term “server” unless additional description was added – and the only form commonly integrated into the desktop (every Mac, Windows and Linux desktop can function as a file server and it is very common to do so.)

There is, of course, nothing wrong with turning to a NAS instead of a traditional file server to meet your storage needs, especially as some modern NAS options, like Exablox, offer scale out and storage options that are not available in most operating systems.  But it appears that the trend to use a NAS instead of a file server has led to some odd behaviour when IT professionals turn back to considering file servers again.  It is a cascading effect, I suspect: the reasons why a NAS is sometimes preferred, and the goal level thinking behind them, are lost while the resulting idea of “I should have a NAS” remains, so that when returning to look at file server options there is a drive to “have a NAS” regardless of whether there is a logical reason for feeling that this is necessary.

First we must consider that the general concept of a NAS is a simple one: take a traditional file server, simplify it by removing options and package it with all of the necessary hardware to make a simplified appliance with all of the support included, from the interface down to the spinning drives and everything in between.  Storage can be tricky when users need to determine RAID levels, choose drive types, monitor effectively, etc.  A NAS addresses this by integrating the hardware into the platform.  This makes things simple but can add risk, as you have fewer support options and less ability to fix or replace things yourself.  A move from a file server to a NAS appliance is almost exclusively about support and is generally a very strong commitment to a singular vendor.  You choose the NAS approach because you want to rely on one vendor for everything.

When we move to a file server we go in the opposite direction.  A file server is a traditional enterprise server like any other.  You buy your server hardware from one vendor (HP, Dell, IBM, etc.) and your operating system from another (Microsoft, Red Hat, SUSE, etc.)  You specify the parts and the configuration that you need and you have the most common computing model in all of IT.  With this model you are generally using standard, commodity parts, allowing you to easily migrate between hardware vendors and between software vendors.  You have “vendor redundancy” options and generally everything is done using open, standard protocols.  You get great flexibility and can manage and monitor your file server just like any other member of your server fleet, including keeping it completely virtualized.  You give up the vertical integration of the NAS in exchange for horizontal flexibility and standardization.

What is odd, therefore, is returning to the commodity model but seeking what is colloquially known as a NAS OS.  Common examples of these include NAS4Free, FreeNAS and OpenFiler.  This category of products is generally nothing more than a standard operating system (often FreeBSD because it has ideal licensing, or Linux because it is well known) with a “storage interface” put onto it and no special or additional functionality that would not exist with the normal operating system.  In theory they are a “single function” operating system that does only one thing.  But this is not reality.  They are general purpose operating systems with an extra GUI management layer added on top.  One could say the same thing about most physical NAS products themselves, but those typically include custom engineering even at the storage level, special features and, most importantly, an integrated support stack and true isolation of the “generalness” of the underlying OS.  A “NAS OS” is not a simpler version of a general purpose OS; it is a more complex, yet less functional, version of it.

What is additionally odd is that general OSes, with rare exception, already come with very simple, extremely well known and fully supported storage interfaces.  Nearly every variety of Windows or Linux server, for example, has included simple graphical interfaces for these functions for a very long time.  These included GUIs are often shunned by system administrators as being too “heavy and unnecessary” for a simple file server.  So it is even more unusual that adding a third party GUI – one that is not patched and tested by the OS team and is not widely known and supported – would then be desired, as this goes against the common ideals and practices of running a server.

And this is where the Jurassic Park effect comes in: the OS vendors (Red Hat, Microsoft, Oracle, FreeBSD, SUSE, Canonical, et al.) are giants with amazing engineering teams, code review, testing, oversight and enterprise support ecosystems, while the “NAS OS” vendors are generally very small companies, some with just one part time person, who stand on the shoulders of these giants and build something that they knew that they could but never stopped to ask if they should.  The resulting products are wholly negative compared to their pure OS counterparts: they do not make systems management easier, nor do they fill a gap in the market’s service offerings.  Solid, reliable, easy to use storage is already available; more vendors are not needed to fill this place in the market.

The logic often applied to looking at a NAS OS is that they are “easy to set up.”  This may or may not be true, as “easy” here must be a relative term.  For there to be any value, a NAS OS has to be easy in comparison to the standard version of the same operating system.  So in the case of FreeNAS, this would mean FreeBSD: FreeNAS would need to be appreciably easier to set up than FreeBSD for the same, dedicated functions.  And this is easily true; setting up a NAS OS is generally pretty easy.  But this ease is a false comfort, and one of which IT professionals need to be quite aware.  Making something easy to set up is not the priority in IT; making something that is easy to operate and repair when there are problems is what is important.  Easy to set up is nice, but if it comes at the cost of not understanding how the system is configured and makes operational repairs more difficult, it is a very, very bad thing.  NAS OS products routinely make it dangerously easy to put into production, for a storage role – almost always the most critical or nearly the most critical role of any server in an environment – a product that IT has no experience or likely skill to maintain, operate or, most importantly, fix when something goes wrong.  We need exactly the opposite: a system that is easy to operate and fix.  That is what matters.  So we have a second case of “standing on the shoulders of giants” and building a system that we knew we could, but did not know if we should.

What exacerbates this problem is that the very people who feel the need to turn to a NAS OS to “make storage easy” are, by the very nature of the NAS OS, the exact people for whom operational support and repair of the system is most difficult.  System administrators who are comfortable with the underlying OS would naturally not see a NAS OS as a benefit and, for the most part, avoid it.  It is uniquely the people for whom it is most dangerous to run a not fully understood storage platform who are likely to attempt it.  And, of course, most NAS OS vendors earn their money, as we could predict, on post-installation support calls from customers who deployed, got stuck once they were in production and are now at the mercy of the vendor for exorbitant support pricing.  It is in the interest of the vendors to make these products easy to install and hard to fix.  Everything is working against the IT pro here.

If we take a common example and look at FreeNAS we can see how this is a poor alignment of “difficulties.”  FreeNAS is FreeBSD with an additional interface on top.  Anything that FreeNAS can do, FreeBSD can do.  There is no loss of functionality by going to FreeBSD.  When something fails, in either case, the system administrator must have a good working knowledge of FreeBSD in order to make repairs.  There is no escaping this.  FreeBSD knowledge is common in the industry and getting outside help is relatively easy.  Using FreeNAS adds several complications, the biggest being that any and all customizations made by the FreeNAS GUI are special knowledge needed for troubleshooting on top of the knowledge already needed to operate FreeBSD.  So there is a larger knowledge set required as well as more things that can fail.  It is also a relatively uncommon knowledge set, as FreeNAS is a niche storage product from a small vendor while FreeBSD is a major enterprise IT platform (plus, all use of FreeNAS is FreeBSD use but only a tiny percentage of FreeBSD use is FreeNAS.)  So we can see that using a NAS OS just adds risk over and over again.
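
To make the comparison concrete, consider how little a plain FreeBSD install needs in order to become a basic NFS file server.  The sketch below, written as a small Python provisioning script purely to match the other examples in this post, wraps the handful of FreeBSD commands involved; the pool and dataset names are placeholders and the exact commands should be verified against the FreeBSD Handbook before use.

# Rough sketch only: publish a ZFS dataset over NFS on a stock FreeBSD system.
# "tank/share" is a placeholder dataset; verify every command against the
# FreeBSD Handbook before relying on this.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zfs", "create", "tank/share"])              # create the dataset to share
run(["zfs", "set", "sharenfs=on", "tank/share"])  # mark it as an NFS export
run(["sysrc", "nfs_server_enable=YES"])           # enable the NFS server at boot
run(["service", "nfsd", "start"])                 # start the NFS services now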

This same issue carries over into the communities that grow up around these products.  If you look to the communities around FreeBSD, Linux or Windows for guidance and assistance you deal with large numbers of IT professionals, skilled system admins and people with business and enterprise experience.  Of course hobbyists, the uninformed and others participate too, but these are the enterprise IT platforms and all the knowledge of the industry is available to you when implementing these products.  Compare this to the community of a NAS OS.  By its very nature, only people struggling with the administration of a standard operating system and/or storage basics would look at a NAS OS package, so this naturally filters the membership of those communities toward exactly the people from whom we should avoid taking advice.  This creates an isolated culture of misinformation and misunderstandings around storage and storage products.  Myths abound, guidance often becomes reckless and dangerous, and industry best practices are ignored as if decades of accumulated experience had never happened.

A NAS OS also commonly introduces lags in patching and updates.  A NAS OS will almost always, and almost necessarily, trail its parent OS on security and stability updates and will very often follow months or years behind on major features.  In one very well known scenario, OpenFiler, the product was built on an upstream non-enterprise base (rPath Linux) which lacked community and vendor support, failed and was abandoned, leaving downstream users – including everyone on OpenFiler – without the ecosystem needed to support them.  Using a NAS OS means trusting not just the large, well known enterprise vendor that makes the base OS but the NAS OS vendor as well.  And the NAS OS vendor is orders of magnitude more likely to fail than the base OS vendor, even when basing their products on enterprise class base OSes.

Storage is a critical function and should not be treated carelessly, as if its criticality did not exist.  NAS OSes tempt us to install quickly and forget, hoping that nothing ever goes wrong or that we can move on to other roles or companies entirely before bad things happen.  They set us up for failure where failure is most impactful.  When a typical application server fails we can always copy the files off of its storage and start fresh.  When storage fails, data is lost and systems go down.

“John Hammond: All major theme parks have delays. When they opened Disneyland in 1956, nothing worked!

Dr. Ian Malcolm: Yeah, but, John, if The Pirates of the Caribbean breaks down, the pirates don’t eat the tourists.”

When storage fails, businesses fail.  Taking the easy route to setting up storage, ignoring the long term support needs and seeking advice from communities that have filtered out the experienced storage and systems engineers increases risk dramatically.  Sadly, the nature of a NAS OS is that the very reason that people turn to it (a lack of deep technical knowledge to build the systems) is the very reason they must avoid it (an even greater need for support.)  The people for whom NAS OSes are effectively safe to use, those with very deep and broad storage and systems knowledge, would rarely consider these products because for them they offer no benefits.

At the end of the day, while the concept of a NAS OS sounds wonderful, it is not a panacea: the value of a NAS does not carry over from the physical appliance world to the installed OS world, and the value of standard OSes is far too great for NAS OSes to add real value on top of them.

“Dr. Alan Grant: Hammond, after some consideration, I’ve decided, not to endorse your park.

John Hammond: So have I.”

Make Your Business Jealous

I have, in the past, discussed how home technology infrastructures and lab environments are one of the keys to personal career success and how IT practitioners should set a high bar in their own lives and hold their businesses accountable to an even higher bar.  I often speak to IT practitioners looking for ways to improve their own education, skill set and résumé.  I propose an approach to building your home network and lab – “Make Your Business Jealous.”

I mean it quite literally.  Why not approach your home network as an opportunity to raise the bar, both on your own experience and on what you expect from businesses?  This approach is an excellent one for changing how you think about your home network and its goals.  Don’t just use your home to learn new skills in a “checkmark” fashion such as “learned Active Directory management.”  That is an excellent and obvious use of your home network.  But we should consider taking this even further.

In many ways this applies far more to those IT practitioners who work in small and medium businesses, where it is common to cut corners, eschew best practices, leave work half done and not approve realistic budgets, but it is valuable for anyone.  You can apply it to the smallest of features in your own home – the physical aspects like cabling, labeling and organization – and take it very large: servers, services, security, backup and so on.

For a real world example, we should begin with the simplest component: cabling.  Few people, even IT practitioners, take the time to cable their homes for data, let alone cable them well.  This is a big missed opportunity.  It is something that is not only utilitarian but ends up adding value to the house as well.  And many businesses do a very poor job of this themselves.  Even though cabling is not strictly an IT discipline, it is a fringe area of electrical work that is related to IT and well worth using as a physically visible showcase of your home environment.

Doing good cabling at home, since it is your home and presumably there is nearly unlimited time to do things well, can really be taken to an extreme.  I recommend using forward looking cable, CAT 6 or better, so that you can handle flawless GigE today and faster speeds in the future.  You do not want your home infrastructure to become dated unnecessarily.  Once you are putting in the effort, it is all about doing it “right.”  This is a chance not just to run a few cables but to implement a complete cabling plant with over-provisioned cable runs to all parts of the house.  Of course you can do this in stages, doing just a room or a few at a time.

In my own home projects I ran four cable runs to nearly every room, with some, like the master bedroom and living room, getting more like six or eight.  This may sound like a lot but, in reality, for a well wired home it is not at all, as we will see.  You want to run more cabling than you are ever likely to use now, while you are putting in the effort, both because it is good practice and because it is simply impressive.  Having extra cabling means an easier time keeping things organized and more ability to flexibly change things in the future.

Running the cables in the walls, up through an attic or down through a basement – whatever is appropriate – with well managed cable runs is best.  In my own house this was the attic, with large “J” hooks keeping cables in place.  Be sure to label all cabling very clearly.  This is another chance to go above and beyond.  Install nice wall jacks, and label every jack and every cable in every jack.  Make the physical cabling plant as organized, impressive and manageable as possible.  All it really takes is a little time and effort and you can have something that you are really proud of and that you want to show off.

Where the cables run to, of course, is a matter for some discussion.  Maybe you will want to have a networking closet or even a server room.  Having a good place to store your home networking gear can be a wonderful addition to a home lab.  A patch panel for your cable runs is ideal.  Do everything “right”, just like you would find in a dream office environment.

Good cabling will allow us to move as much as possible from wireless to wired in the home environment.  Wired connections are faster, more stable, require less management, are potentially more secure and improve the performance of the remaining wireless devices by lowering the wireless saturation levels in the home.  If you are like me, your home network consists of many devices that have regular Internet access and are often wireless but could be wired, such as desktops, video game consoles, Internet television devices, smart televisions and more.  Many or most of these devices can easily be removed from wireless, which can only be beneficial.  Save the wireless for mobile devices.

A cabling plant, of course, needs something to attach to.  A good switch will be the true heart of any home network, or any network at all.  If you have a large home you could easily need a forty-eight port switch or even need to look at stacked switches – ten rooms at four drops each is already forty ports before counting access points, cameras or uplinks.  A small apartment might use sixteen ports.  Size will vary by need.

Acquiring a switch is also a good time to consider not just size but features.  Adding PoE (Power over Ethernet) now is ideal, allowing for even cleaner cable management and a yet more impressive network.  Wait till you see what else we consider adding to our home network that might leverage PoE.

This is also the time to consider more advanced features on the switch.  Rather than just using a traditional unmanaged switch we can look at nice, rackmount switches that are either smart (web managed) or fully managed, which is excellent for getting broader switch experience.  We might want to add a guest VLAN, for example, for visitors who need Internet access.  If you have a guest bedroom in the house, maybe those Ethernet ports should be VLAN’d as guest access all of the time.

You are going to want to monitor that closet too, no doubt.  Once you start to put nice equipment in there you will want to keep it safe.  Perhaps temperature and moisture sensors that communicate onto the network?
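
As a sketch of what that monitoring could look like, here is a short Python example that polls a hypothetical network temperature sensor over HTTP and sends an email alert when a threshold is crossed.  The sensor URL, the JSON field name and the addresses are all invented for illustration; a real sensor will have its own API.

# Minimal closet-monitoring sketch.  The sensor endpoint, JSON field name
# and email addresses below are hypothetical placeholders.
import json
import smtplib
import time
from email.message import EmailMessage
from urllib.request import urlopen

SENSOR_URL = "http://192.168.1.50/api/status"  # hypothetical PoE sensor
MAX_TEMP_C = 32.0

def read_temperature():
    with urlopen(SENSOR_URL, timeout=5) as response:
        return float(json.load(response)["temperature_c"])

def send_alert(temp):
    msg = EmailMessage()
    msg["Subject"] = f"Network closet temperature alert: {temp:.1f} C"
    msg["From"] = "closet-monitor@home.example"
    msg["To"] = "you@home.example"
    msg.set_content(f"The closet sensor reports {temp:.1f} C (limit {MAX_TEMP_C} C).")
    with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
        smtp.send_message(msg)

while True:
    temperature = read_temperature()
    if temperature > MAX_TEMP_C:
        send_alert(temperature)
    time.sleep(300)  # check every five minutes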

Most home networks live and die on their wireless capabilities.  This is a place to really shine.  Instead of cheap “all in one” networking gear with anemic wireless stuck in the most useless corner of the house, you can use your shiny, new PoE cabling plant to put high quality, commercial wireless access points in the most useful point(s) of your home (and consider placing them on the grounds too, if you own multiple buildings or open space.)  Centrally managed units are very affordable, can make for a very fast, robust wireless network and can make offering guest wireless access very easy as well.

Next to consider, since we have such a robust cabling system in place already, is security.  Why not add a camera or two to watch the outside of your house or the front door?  You can use PoE cameras that are centrally managed.  Look for business class solutions, of course, not consumer cameras and software available at the local store.

One of the bigger and more interesting home projects to consider is a full scale VoIP PBX.  This can be a great and interesting project, and one of the few cases where a server run at home will truly be used as a “production” service.  A home VoIP PBX can make it easy to have separate extensions for family members, rooms or purposes.  You can have features like individual voicemail mailboxes, a house intercom, a front door intercom, room to room calling, wake up calls, video conferencing, free calling for other family members (outside of the home), guest lines, multiple lines for simultaneous calling and the ability to make and take calls while traveling!

Once we have a PBX in the home, installing physical phones throughout the home, on PoE of course, is the next step.  Phones can be very useful around a home, especially a larger one.  Having real phones to manage can be very educational and certainly lets you take your home to another level of IT.

No server closet would be complete without features like a domain controller, a home web server (why not have a guest website for visitors and a wiki for tracking your home projects and information?) and the biggest of home systems – storage.  Traditional storage like a NAS or file server can be very useful for storing private photo and video, music and document collections, movies and other media.  DLNA streaming can make an audio and video library available to the entire house.  Traditional storage such as SMB and NFS can provide high speed, protected mapped drives to the computers in the home.  And more modern storage techniques like “cloud storage” can be hosted as well.

Of course all of those workloads can be virtualized and run on a server (or two) in the server closet.  If you are incredibly ambitious this could include features like high availability or fault tolerance, although these will generally push costs into a range impractical for home use by nearly any standard.

And the pièce de résistance is, of course, backups.  Use real backup software; several enterprise options are even free for home-scale use.  Taking good backups, testing restores, and using different media, backup strategies and backup types (such as image and file-based backups) can really showcase the reliability of your home network.
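
Real backup software is the right answer here, but even a simple script helps illustrate the habits that matter: dated backups kept on separate media and, above all, verified restores.  The sketch below is illustrative only and the paths are placeholders, not a substitute for a proper backup product.

# Sketch: create a dated archive of a folder and verify it can be read back.
# Paths are placeholders; real backups belong in real backup software.
import tarfile
from datetime import date
from pathlib import Path

SOURCE = Path("/home/data/documents")   # what to protect (placeholder)
DEST = Path("/mnt/backup")              # a different disk or NAS (placeholder)

archive = DEST / f"documents-{date.today():%Y-%m-%d}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(str(SOURCE), arcname=SOURCE.name)

# A backup is only as good as its restore: at minimum, confirm the archive
# opens and the file count matches what was backed up.
with tarfile.open(archive, "r:gz") as tar:
    backed_up = sum(1 for member in tar.getmembers() if member.isfile())
original = sum(1 for path in SOURCE.rglob("*") if path.is_file())
print(f"{archive}: {backed_up} of {original} files archived")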

Don’t forget to go beyond running systems into monitoring.  Log collection and analysis, bandwidth monitoring, system monitoring, load monitoring and more can be added for completeness.  Think of all the things that exist, or that you wish would exist, in an ideal office setting.  There is rarely any reason not to bring these same technologies into your home.
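
As one small example of the kind of monitoring that is easy to add at home, the sketch below checks that a few hosts answer on their expected TCP ports and appends the results to a log file.  The host list and log path are, of course, hypothetical examples of a home network, not anything prescribed.

# Tiny availability monitor: check TCP ports and log the results.
# The host/port list and log path are hypothetical placeholders.
import socket
from datetime import datetime

CHECKS = [
    ("192.168.1.1", 443),    # router web interface
    ("192.168.1.10", 445),   # file server (SMB)
    ("192.168.1.20", 5060),  # PBX (SIP)
]

def is_open(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

with open("/var/log/home-monitor.log", "a") as log:
    for host, port in CHECKS:
        status = "up" if is_open(host, port) else "DOWN"
        log.write(f"{datetime.now().isoformat()} {host}:{port} {status}\n")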

Beyond these steps there are many places that one could take a home network – whatever features might be interesting for you.  Go crazy.

Our goal here is to raise the bar.  Do at home what few businesses do.  Building an amazing home network, one that is really used, beyond building a great home IT lab, is valuable for many reasons.  A great home network is more than just an amazing learning experience, it makes for a perfect interview conversation starter, it is a “portfolio” piece demonstrating skills in cradle to grave LAN design and management, it shows dedication and initiative to the field and it sets a bar to be used when speaking to businesses.

Go ahead, don’t be afraid to make your business jealous of your home network.

Virtualizing Even a Single Server

I find it very common, in conversations involving virtualization, for consolidation – which in the context of server virtualization refers to putting multiple formerly physical workloads onto a single physical box with separation handled by the virtual machine barriers – to be treated as the core tenet and fundamental feature of virtualization.  Without a doubt, workload consolidation represents an amazing opportunity with virtualization, but it is extremely important that the value of virtualization and the value of consolidation not be confused.  Too often I have found that consolidation is viewed as the key value in virtualization and the primary justification for it, but this is not the case.  Consolidation is a bonus feature and should never be needed to justify virtualization.  Virtualization should be a nearly foregone conclusion, while consolidation must be evaluated and many times would not be used.  That workloads should not be consolidated should never lead to the belief that those workloads should not be virtual.  I would like to explore the virtualization decision space to see how we should be looking at this.

Virtualization should be thought of as hardware abstraction, because that is truly what it is in a practical sense.  Virtualization encapsulates the hardware and presents a predictable, pristine hardware set to guest operating systems.  This may sound like it adds complication but, in reality, it simplifies a lot of things both for the makers of operating systems and drivers and for IT practitioners designing systems.  It is because computers, computer peripherals and operating systems are such complex beasts that this additional layer actually ends up removing complexity from the system by creating standard interfaces.  From standardization comes simplicity.

This exact same concept of presenting a standard, virtual machine to a software layer exists in other areas of computing as well, such as in how many programming languages are implemented (the Java Virtual Machine being a well known example.)  This is a very mature and reliable computing model.

Hardware abstraction and the stability that it brings alone are reason enough to standardize on virtualization across the board but the practical nature of hardware abstraction as implemented by all enterprise virtualization products available to us today brings us even more important features.  To be sure, most benefits of virtualization can be found in some other way but rarely as completely, reliably, simply or freely as from virtualization.

The biggest set of additional features typically comes from the abstraction of storage and memory, allowing for the ability to snapshot storage or even the entire running state of a virtual machine – that is, to take an image of the running system and store it in a file.  This leads to many very important capabilities, such as the ability to take a system snapshot before installing new software, changing configurations or patching, allowing for extremely rapid rollbacks should anything go wrong.  This seemingly minor feature can lead to big peace of mind and overall system reliability.  It also makes testing of features, rolling back and repeated testing very easy in non-production environments.
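
As a concrete illustration of the snapshot-before-change workflow, here is a minimal sketch using the libvirt Python bindings against a KVM host.  The guest name, the snapshot name and the apply_patches placeholder are all hypothetical, and snapshot behaviour varies by hypervisor and storage format, so treat this as the shape of the idea rather than a recipe for any particular platform.

# Sketch: snapshot a KVM guest before making changes, revert if they fail.
# Requires the libvirt Python bindings; the guest name is a placeholder.
import libvirt

SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-patch</name>
  <description>Safety snapshot taken before applying changes</description>
</domainsnapshot>
"""

def apply_patches(domain):
    """Placeholder for the actual change (package updates, config edits, etc.)."""
    pass

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("fileserver01")        # hypothetical guest name
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)  # capture the pre-change state

try:
    apply_patches(dom)
except Exception:
    dom.revertToSnapshot(snap, 0)              # roll back to the snapshot
    raise
finally:
    conn.close()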

The ability to snapshot from the abstraction layer also leads to the ability to take “image-based backups”, that is, backups taken via the snapshot mechanism at the block device layer rather than from within the operating system’s file system layer.  This allows for operating system agnostic backup mechanisms and backups that include the entire system storage pool all at once.  Image backups allow for what were traditionally known as “bare metal restores” – the entire system can be restored to a fully running state without additional interaction – easily and very quickly.  Not all hypervisor makers include this capability, or include it to equal levels, so while conceptually a major feature, the extent to which it exists or is licensed must be considered on a case by case basis (notably Hyper-V includes this fully, XenServer includes it partially and VMware vSphere only includes it with non-free license levels.)  When available, image-based backups allow for extremely rapid recovery at speeds unthinkable with other backup methodologies.  Restoring systems in minutes, from disaster to recovery, is possible!

The ability to treat virtual machines as files (at least when not actively running) provides additional benefits that are related to the backup benefits listed above.  Namely the ability to rapidly and easily migrate between physical hosts and even to move between disparate hardware.  Traditionally hardware upgrades or replacements meant a complicated migration process fraught with peril.  With modern virtualization, moving from existing hardware to new hardware can be a reliable, non-destructive process with safe fallback options and little or possibly even zero downtime!  Tasks that are uncommon but were very risky previously can often become trivial today.

Often this is the true benefit of virtualization and abstraction mechanisms.  It is not, necessarily, to improve the day to day operations of a system but to reduce risk and provide flexibility and options in the future.  Preparing for unknowns that are either unpredictable or are simply ignored in most common situations.  Rarely is such planning done at all, much to the chagrin of IT departments left with difficult and dangerous upgrades that could have been easily mitigated.

There are many features of virtualization that are applicable only to special scenarios.  Many virtualization products include live migration tools for moving running workloads between hosts, or possibly even between storage devices, without downtime.  High availability and fault tolerance options are often available, allowing some workloads to rapidly or even transparently recover from system hardware failure, moving from failed hardware to redundant hardware without user intervention.  While more of a niche benefit and certainly not to be included in a list of why “nearly all workloads” should be virtual, it is worth noting as a primary example of features that are often available and could be added later if a need for them arises, as long as virtualization is used from the beginning.  Otherwise a migration to virtualization would be needed before such features could be leveraged.

Virtualization products typically come with extensive additional features that only matter in certain cases.  A great many of them fall into a large pool of “in case of future need.”  Possibly the biggest of all of these is the concept of consolidation, as I mentioned at the beginning of this article.  Like other advanced features, such as high availability, consolidation is not a core value of virtualization but is often confused for one.  Workloads not intending to leverage high availability or consolidation should still be virtualized – without a doubt.  But these features are so potentially valuable as future options, even for scenarios where they will not be used today, that they are worth mentioning regardless.

Consolidation can be extremely valuable, and it is easy to understand why so many people simply assume that it will be used.  Its availability once an infrastructure is in place is a key point of flexibility for handling the unknowns of future workloads.  Even when consolidation is completely unneeded today, there is a very good chance, even in the smallest of companies, that it will be useful at some unknown time in the future.  Virtualization provides us with a hedge against the unknown by preparing our systems for the maximum in flexibility.  One of the most important aspects of any IT decision is managing and reducing risk.  Virtualization does this.

Virtualization is about stability, flexibility, standardization, manageability and following best practices.  Every major enterprise virtualization product is available, at least in some form, for free today.  A purchase would, of course, require a careful analysis of value versus expenditure.  However, with excellent enterprise options available for free from all four key product lines in this space (Xen, KVM, Hyper-V and VMware vSphere), we need make no such analysis.  We need only show that the implementation is a non-negative.

What makes the decision making easy is that when we consider the nominal case – the bare minimum that all enterprise virtualization provides, which is the zero cost, abstraction, encapsulation and storage based benefits – we find that we have a small benefit in effectively all cases, no measurable downsides and a very large potential benefit in the areas of flexibility and hedging against future needs.  This leaves us with a clear win and a simple decision: virtualization, being free and with essentially no downsides on its own, should be used in any case where it can be (which, at this point, is essentially all workloads.)  Additional, non-core features like consolidation and high availability should be evaluated separately and only after the decision to virtualize has already been solidified.  A lack of need for those extended features in no way suggests that virtualization should not be chosen on its own merits.

This is simply an explanation of existing industry best practices which have been to virtualize all potential workloads for many years.  This is not new nor a change of direction.  Just the fact that across the board virtualization has been an industry best practice for nearly a decade shows what a proven and accepted methodology this is.  There will always be workloads that, for one reason or another, simply cannot be virtualized, but these should be very few and far between and should prompt a deep review to find out why this is the case.

When deciding whether or not to virtualize, the approach should always be to assume that virtualization is a foregone conclusion and only vary from this if a solid, defended technical reason makes this impossible.  Nearly all arguments against virtualization come from a position of misunderstanding with a belief that consolidation, high availability, external storage, licensing cost and other loosely related or unrelated concepts are somehow intrinsic to virtualization.  They are not and should not be included in a virtualization versus physical deployment decision.  They are separate and should be evaluated as separate options.

It is worth noting that because consolidation is not part of our decision matrix in establishing the base value of virtualization, all of the reasons we are using apply equally to one to one deployments (a single virtual machine on a single physical device) and to consolidated workloads (multiple virtual machines on a single physical device.)  There is no situation in which a workload is “too small” to be virtualized.  If anything, it is the opposite: only for the largest workloads, typically those with extreme latency sensitivity, does a niche non-virtualization scenario still exist as an edge case, and even these cases are rapidly disappearing as virtualization latency and total workload capacities improve.  These cases are so rare and vanishing so quickly that even taking the time to mention them is probably unwise, as it suggests that exceptions based on capacity needs are common enough to evaluate for, which they are not, especially not in the SMB market.  The smaller the workload, the more ideal it is for virtualization – but this is only to reinforce that small businesses, with singular workloads, are the most ideal case for virtualization across the board rather than an exception to best practices, not to suggest that larger businesses should be looking for exceptions themselves.