One Big Flat Network

Networks have a natural tendency to become unnecessarily complicated.  But there is great value in keeping networks clean and simple.  Simple networks are easier to manage, more performant and more reliable while generally being less expensive.  Every network needs a different level of complexity, and large networks will certainly need an extensive amount of it, but small businesses can often keep their networks extremely simple.  This is part of what makes smaller businesses more agile and less expensive, giving them an edge over their larger counterparts – an edge that they must leverage because they lack the enterprise advantage of scale.

There are two ways to look at network complexity.  The first is the physical network – the actual setup of the switches and routers that make up the network.  The second is the logical network – how IP address ranges are segmented, where routing barriers exist, etc.  Both are important to consider when looking at the complexity of your network.

It should be the goal of any network to be as simple as possible while still meeting all of its requirements.

The first aspect we will address is the physically flat network.  Reducing a physical network to be flat can have a truly astounding effect on the performance and reliability of that network.  In a very small network this could mean working from a single switch for all connections.  Typically this is only possible for the very smallest networks, as switches are rarely available above forty-eight or possibly fifty-two ports.  But for many small businesses this is completely achievable.  It may require additional cabling for a building, in order to bring all connections back to a central location, but can often be attained – at least on a site-by-site basis.  Many businesses today have multiple locations or staff working from home, which makes the network challenges much greater, although each location can strive for its own simplicity in those cases.

As a network grows, the single-switch approach can grow with it through switch stacking.  Stacked switches share a single switching fabric or backplane.  When stacked, they behave as a single switch but with more ports.  (Some switches do true backplane sharing and some mimic this with very high speed uplink ports and shared management via that port.)  A switch stack is managed as a single switch, making network management no more difficult, complex or time consuming for a stack than for a single switch.  It is common for a switch stack to grow to at least three hundred ports, if not more.  This allows for much larger physical site growth before needing to leave the single-switch approach.

In some cases, large modular single-switch chassis will grow even larger than this, allowing for four hundred or more ports in a single switch in a “blade-like” enterprise switching chassis.

By being creative and looking at simple, elegant solutions, it is entirely possible to keep even a moderately large network contained to a single switching fabric, allowing all network connections to share a single backplane.

The second area that we have to investigate is the logical complexity of the network.  Even in physically simple networks it is common to find small businesses investing a significant amount of time and energy into implementing unnecessary subnets or VLANs and all of the overhead that comes with those.

Subnetting is rarely necessary in a small or even a smaller medium-sized business.  Traditionally, going back to the 1990s, it was very common to keep subnets to a maximum of 256 addresses (a /24 subnet) because of packet collisions, broadcasts and other practical issues.  This made a lot of sense in an era when hubs were used instead of switches, broadcasts were common and a network was lucky to have 10Mb/s of bandwidth on a shared bus.  Today’s broadcast-light, collision-free, 1Gb/s dedicated channel networks experience network load in a completely different manner.  Where 256 devices on a subnet made for an extremely large network then, having more than 1,000 devices on a single subnet is a non-issue today.
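
To make the arithmetic concrete, here is a minimal sketch using Python’s standard ipaddress module; the address ranges shown are illustrative examples, not recommendations:

    import ipaddress

    # A classic /24 subnet offers 256 addresses (254 usable hosts).
    legacy = ipaddress.ip_network('192.168.1.0/24')
    print(legacy.num_addresses)      # 256

    # Widening to a /22 yields 1024 addresses (1022 usable hosts),
    # comfortably room for the 1,000+ device flat networks described above.
    modern = ipaddress.ip_network('10.0.0.0/22')
    print(modern.num_addresses)      # 1024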

These changes in how networks behave mean that small and medium businesses almost never need to subnet for reasons of scale and can comfortably use a single subnet for their entire business, reducing complexity and easing network management.  More than a single subnet may be necessary to support specific network segmentation, like separating production and guest networks, but scale – the reason traditionally given for subnetting networks – becomes an issue solely for larger businesses.

It is tempting to implement VLANs in every small business environment as well.  Subnetting and VLANs are related and often confused, but while subnets often exist without VLANs, VLANs do not exist without subnets.

In large environments VLANs are a foregone conclusion and it is simply assumed that they will exist.  This mentality often filters down to smaller organizations, which are tempted to apply it to businesses that lack the scale at which VLAN management makes sense.  VLANs should be relatively uncommon in a small business network.

The most common place where I see VLANs used when they are not needed is in Voice over IP or VoIP networks.  It is a common assumption that VoIP has special needs that require VLAN support.  This is not true.  VoIP and the QoS that it sometimes needs are available without VLANs and often will work better without them.

VLANs really only become important when management is needed at a scale larger than a single subnet can provision and physical segregation is not possible, or when specific network-layer security is needed, which is relatively rare in the SMB market.  VLANs are very useful and do have their place.  They are often used when a dedicated guest network is needed, but generally in a small business guest access is provided via a direct guest connection to the Internet rather than a quarantined network for guests.

The most common practical use of a VLAN in an SMB is likely to be a walled-garden DMZ designed for quarantined BYOD remote access, where BYOD devices connect much like guests but have the ability to reach remote access resources such as the RDP, ICA or PCoIP protocols.  VLANs would also be popular for building traditional DMZs for externally facing public services such as web and email servers – except that these services are not commonly hosted on the local network in today’s SMBs, so this classic use of VLANs in the SMB is rapidly fading.

Another use case where VLANs are often used inappropriately is for a Storage Area Network or SAN.  It is best practice for a SAN to be a completely independent (air-gapped), physically separate network unrelated to the regular switching infrastructure.  It is generally not advised that a SAN be created using VLANs or subnets; it should instead be on dedicated switches.

It is tempting to add complex switching setups, additional subnets and VLANs because we hear about these things from larger environments, they are fun and exciting, and they appear to add job security by making the network more difficult to maintain.  Complex networks require higher-end skills and can seem like a great way to use that networking certificate.  But in the long run, this is a bad career and IT strategy.  Network complexity should be added in a lab for learning purposes, not in production networks.  Production networks should be run as simply, elegantly and cost effectively as possible.

With relatively little effort, a small business network can likely be designed to be both physically and logically very simple.  The goal, of course, is to come as close as possible to creating a single, flat network structure where all devices are physical and logical peers with no unnecessary bottlenecks or protocol escalations.  This improves performance and reliability, reduces costs and frees IT resources to focus on more important tasks.

Originally posted on the StorageCraft Blog.

Virtualization for Small Business

In the last year or two we have seen virtualization go from a poorly understood concept to a much-hyped industry buzzword bandied about constantly in every conversation involving technology.  There is no doubt that virtualization is playing an important role in today’s IT landscape, but the question we are asking is whether virtualization applies to the small and medium business markets at this time.

The quick answer to this question is: absolutely.  Unlike many technologies that are of questionable value or that bring a degree of technological complication, risk and expense inappropriate for a small business, virtualization is a mature technology (IBM CP/CMS, circa 1968) that is well understood and provides a layer of hardware abstraction that can benefit an IT organization of any size – and may apply even more to the small business IT department than to the enterprise space.

Before looking at how virtualization can benefit the SMB market I would like to provide some definitions to be sure that we are discussing the same set of technologies.  In today’s IT landscape it has become popular to relabel common technologies as “virtualization” for marketing reasons and this has unnecessarily complicated the issue.

True virtualization refers to the virtualizing of entire operating systems.  Wikipedia uses the term platform virtualization and I will as well.  Technically we could refer to this as “System Virtualization” or “Operating System Virtualization” to distinguish it from loosely-related technologies that may arguably have the right to also use the same general term.

The basic concept of platform virtualization involves running an abstraction layer on a computer that emulates the hardware itself. Through the combination of abstraction and emulation we get what is known as a virtual machine.  This virtual machine is a completely working “computer” onto which we can install an operating system just as if we were installing onto the bare metal of a dedicated machine.  Instead of being limited to only installing one operating system image per computer we can now, with platform virtualization, install many copies of the same or disparate operating systems onto the same piece of hardware.  A powerful concept indeed.

The utility of this technology raises an obvious question: “If platform virtualization has been available since 1968, why is it only becoming popular and important recently?”  This is an excellent question, and the answer is actually quite simple.

Traditional platform virtualization technologies require a lot of support within the computer hardware itself.  IBM has been building this type of support into its mainframe systems for decades, and large UNIX vendors like Sun have been providing it in their high-end UNIX servers for years as well.  These systems are highly specialized and typically run their own custom operating systems.  Generally only large IT shops could afford servers of this magnitude, and small shops did not have ready access to these technologies.  For those IT professionals who have worked with this type of equipment, the idea of virtualization was often so ingrained into the platform that it was discussed very little, being seen simply as an aspect of these high-end server systems and not as a concept in its own right.

What has changed recently is the move to bring platform virtualization to the commodity hardware space occupied by the AMD and Intel (x86_64) processors used by the majority of small and medium businesses as well as larger enterprises.  The first move was to use software alone to make this possible on the x86 processor family.  The early players in this space were VMWare and Microsoft with products like VMWare Workstation, Virtual PC, VMWare GSX and MS Virtual Server.  These products showed that no special hardware was needed to effectively virtualize whole operating systems and began to allow companies of all sizes to experiment with the concept of virtualizing their existing commodity platforms.  This form of virtualization is known as “host-based virtualization” as it requires a host operating system on which the virtualization environment will run.

Following on the heels of these software-only solutions, the big processor vendors in the commodity space, AMD and Intel, began building virtualization capabilities into the processor, allowing for more flexibility, security and performance and bringing the commodity x64 hardware market much more in line with the traditional offerings from the processor families common in big iron servers.  As a result the virtualization market has really exploded, both from the vendor side, as more and more vendors offer virtualization-related products, and from the customer side, as virtualization becomes better understood and its use more commonplace.  With the latest rounds of purchasing, most small IT shops have acquired servers, and often desktops, that support hardware-level virtualization without even intending to prepare for a move to virtualization, which often tips the equation in that direction naturally.  This hardware-supported virtualization model is called “hypervisor-based virtualization” as all operating systems run on top of a tiny kernel called the hypervisor and no traditional operating system runs directly on the hardware.

Now that we have a good idea of what platform virtualization is and why it is now available to us as an option, we will look at why it may be beneficial to us in the small and medium business space.

There are two things that we can readily virtualize (without getting esoteric or starting to virtualize our routing and switching infrastructure) – servers and desktops.  By far the easier and more obvious choice is the virtualization of servers.

Virtualizing the server infrastructure, or part of it, is the first place that most IT shops look today as a potential for virtualization.  Most companies find that the majority of their servers are extremely underutilized with excess CPU, memory and drive capacity sitting idle while additional workloads fail to find a home due to budget constraints, space or implementation time.  Virtualization to the rescue.

Through virtualization we have the opportunity to run several virtual servers on a single piece of server hardware.  We could virtualize just a single server system, although this would not gain us any utilization advantages, or we could, in theory, virtualize hundreds of servers if our hardware could handle it.  Typically, small businesses can virtualize several typical server roles onto a single physical server.  Virtual machine density is, of course, determined by load characteristics as well as by the available hardware.  Virtualization uses a lot of memory and storage, obviously, so careful planning is required.  Memory and storage are relatively inexpensive today and certainly vastly less expensive than purchasing additional server hardware and paying to support it.  It is not uncommon for a small business to easily virtualize half a dozen servers on a single piece of hardware at a minimum, and a score or more is not an unreasonable number to hope to achieve.
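
As a rough illustration of how that density planning works, a back-of-the-envelope memory calculation might look like the sketch below; every figure here is hypothetical, chosen only to show the method:

    host_ram_gb = 64            # hypothetical: total RAM in the physical server
    hypervisor_overhead_gb = 4  # hypothetical: RAM reserved for the hypervisor
    ram_per_vm_gb = 4           # hypothetical: average allocation per virtual server

    max_vms = (host_ram_gb - hypervisor_overhead_gb) // ram_per_vm_gb
    print(f"Rough memory-bound density: {max_vms} virtual servers")  # 15

The same exercise should be repeated for CPU, storage capacity and storage throughput; the lowest result is the practical ceiling.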

Many small shops instantly jump to the conclusion that virtualization requires expensive SAN storage.  This is not the case at all.  Virtualization provides a range of benefits, even without a SAN storage infrastructure, of which shops can take advantage immediately.  There are, of course, some significant advantages available by using a SAN in conjunction with virtualization and high availability or load balancing technologies.  Often, though, these high availability and load balancing capabilities are additional features that did not exist prior to virtualization; they are not necessary for a shop to see significant benefits from virtualization but do present an opportunity for future improvement when and if budgets allow.

Small businesses will see many advantages from virtualization immediately, even when adopting it on a small scale.  Some of these benefits are obvious and some are less so.

Our first advantage is that of hardware cost, as I mentioned above.  By eliminating the need to purchase and support expensive server hardware on a per operating system basis we can now deploy more systems at lower cost per system.  In many cases this is not only a cost savings but also frees the funds necessary to move from more spartan servers to fewer but more enterprise-class offerings with important performance, stability and support features such as integrated power management and KVM over IP from an out-of-band management console.

Our second advantage is the cost savings from reducing power consumption.  It is very trendy, and for good reason, for companies to be concerned with how “green” they are today and IT virtualization plays a key role in the greenification of the department.  The addition of virtual machines onto a single physical server typically represents a trivial, if even measurable, increase in power draw.  Adding additional physical servers, of course, adds a significant amount of power consumption even for systems that are lightly used or used only occasionally.

Our third advantage is in reducing backup complexity.  Virtualized servers can be backed up using completely traditional methods, such as file system level backups from the operating system itself, as made popular by traditional backup systems like NetBackup, BackupExec, Amanda, Bacula and others.  So if we desire to stick with current backup strategies we can, without any additional complexity, but if we want to move to image-based backups we can do so quite easily.  Using system images as backups is not necessarily new or unique to virtualization, but virtualization makes this far more obvious and accessible for many users.  In fact, with virtualization, system images (a copy of the entire system, not just of its individual files) can be taken using nothing but the regular filesystem – no special software needed.  A complete system backup can be taken by simply shutting down the virtual server, making a copy of its virtual filesystem – often a single, large file – and starting the system up again.  Restoring a system can be as simple as copying an image file from a backup storage device to the virtual server and starting it back up.  Restore done.  System back online.  This is as simple as it gets.
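
As a concrete sketch of that shutdown-copy-restart cycle, the snippet below assumes a libvirt/KVM environment where the virsh command is available; the domain name and file paths are hypothetical:

    import shutil
    import subprocess
    import time

    DOMAIN = 'smb-server01'                                # hypothetical VM name
    DISK = '/var/lib/libvirt/images/smb-server01.qcow2'    # hypothetical disk image
    BACKUP = '/mnt/backup/smb-server01.qcow2'              # hypothetical backup target

    # Cleanly shut down the guest so the disk image is in a consistent state.
    subprocess.run(['virsh', 'shutdown', DOMAIN], check=True)
    time.sleep(60)  # crude wait; a real script should poll 'virsh domstate'

    # The whole-system backup is nothing more than a file copy of the virtual disk.
    shutil.copy2(DISK, BACKUP)

    # Bring the guest back online.
    subprocess.run(['virsh', 'start', DOMAIN], check=True)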

Our fourth advantage is in the ease of provisioning.  Building a new server operating system directly on hardware is a time consuming venture for most shops.  This is especially true if there are any surprises with a new hardware type that has not been used previously.  There may be missing drivers or special operating system settings and parameters needed to support the hardware.  With virtualization the target platform is always identical, removing many surprises from this process and making it both faster and more reliable.  In many cases deployment is also faster simply because the process of preparing the base machine is so much faster.  To kick off a manual install of Linux on a traditional physical server I must purchase said server, install it into a rack, connect power and networking, provision networking, turn on the server, update firmware, configure the out-of-band management system, burn in the hardware, insert installation media and begin installing.  From some virtualization environments I can simply kick off the entire process with a single command at the command line.  Deploying a new server could go from hours or days to minutes.  This does not even begin to address the simplicity of cloning existing systems within a virtual environment.
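
For example, in a libvirt-based environment a tool such as virt-install can create and start a new guest in one command.  Wrapped in Python for consistency with the other sketches here, with all names and sizes hypothetical:

    import subprocess

    # One command creates the guest, allocates a new 20 GB disk and starts the install.
    subprocess.run([
        'virt-install',
        '--name', 'web02',          # hypothetical guest name
        '--memory', '2048',         # RAM in MiB
        '--vcpus', '2',
        '--disk', 'size=20',        # disk size in GB, created automatically
        '--location', 'http://mirror.example.com/centos/',  # hypothetical install tree
        '--graphics', 'none',
        '--extra-args', 'console=ttyS0',
    ], check=True)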

A fifth “soft” advantage of virtualization is that there is quite often a significant software cost savings when virtualizing.  Some vendors, like Novell with SUSE Linux, allow you to virtualize as many servers as you want on a single physical machine while paying for only a single machine license.  Red Hat gives you multiple installs, though not unlimited like Novell.  Microsoft has a range of virtualization pricing options depending on your needs, including an unlimited per-processor deployment license.  In a worst-case scenario you will need to pay for additional operating system and other software licenses exactly as if you were running the same machines physically, but in almost all cases there is more pricing flexibility and often dramatic cost reductions for multiple virtualized hosts.

A sixth benefit is the ability to “roll back” an entire operating system.  Most virtualization platforms allow you to take a system snapshot, make changes to the active system and then restore the system to its original state when done.  This is great for software testing and especially for the testing of operating system patches or any critical update process where something going wrong could leave your system unresponsive and potentially unrepairable.  The ability to go “back in time” to the latest snapshot, taken seconds before the patch application or risky configuration change, can be a lifesaver.  Of course an image backup could be used in the same way, but snapshots allow for even more rapid recovery due to their “proximity” to the original filesystem.
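
With libvirt, for instance, that snapshot-and-revert cycle is only a couple of commands; the domain and snapshot names below are hypothetical:

    import subprocess

    DOMAIN = 'smb-server01'  # hypothetical VM name

    # Take a snapshot seconds before the risky change...
    subprocess.run(['virsh', 'snapshot-create-as', DOMAIN, 'pre-patch'], check=True)

    # ...then, if the patch goes badly, roll the entire OS back to that point.
    subprocess.run(['virsh', 'snapshot-revert', DOMAIN, 'pre-patch'], check=True)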

All of these aforementioned benefits come with a move to virtualization and do not require additional cost for software or hardware.  If our budget allows and the need exists there is also the option of adding one or more additional virtualization servers and having these servers share a SAN for storage of virtual machine images.  At a minimum this will roughly triple the hardware cost but provides double the processing power and some really amazing features.  The main feature that really makes this solution impressive is the concept of live migration.  Live migration is when a virtual operating system can be moved, while running, from one physical virtualization server to another.  This can be done for purposes of load balancing, disaster testing or surviving a disaster itself.  With some live migration solutions, generally sold as high availability, this migration can happen so quickly that it provides effectively “zero downtime,” and even heavily used web servers could survive the loss of a physical server without customers ever knowing that a physical server had gone down.  The transition between virtual machine host nodes is completely transparent to the end users.
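
As one example of what this looks like in practice, a libvirt-based environment can live-migrate a running guest between hosts with a single command; the hostnames and domain name below are hypothetical, and shared storage such as the SAN described above is assumed to hold the disk image:

    import subprocess

    # Move the running guest to a second host with no shutdown; the disk
    # image lives on shared storage visible to both hosts.
    subprocess.run([
        'virsh', 'migrate', '--live',
        'smb-server01',                          # hypothetical domain name
        'qemu+ssh://host2.example.com/system',   # hypothetical destination host URI
    ], check=True)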

There is one major caveat.  Relying upon a SAN in a disaster recovery scenario, of course, creates another point of failure – the SAN system.  So when planning to use a SAN to increase the reliability of your virtual machines, be sure that the SAN is at least as redundant as the servers themselves, or you may increase cost while accidentally lowering reliability and performance.

For the average small business it will likely make sense to virtualize not only some of the server infrastructure but all or nearly all of it.  Virtualization’s advantages are so many, and its downsides so few and minor, that it is a rare workload in the small business space that would justify a dedicated hardware server.

Now that we have examined why server virtualization makes sense we can begin looking towards desktop virtualization.  Unlike physical desktops and virtualized servers, virtualized desktops often add a bit of complexity due to licensing requirements, especially with Microsoft Windows desktops.

Virtualizing desktops is also somewhat complicated because there are many modes for physically providing desktops.  Once we begin talking about virtualizing the desktop infrastructure we are actually talking about a range of solutions, because some device must always exist “on the desktop” providing a keyboard, mouse and monitor – these cannot be virtualized – while the desktop operating system itself runs elsewhere.  Even without virtualization this is done (and sometimes marketed as virtualization when, in fact, it is simply remote access) very commonly through desktop blades, rackmount desktops or terminal servers.  All of these solutions move the desktop into the datacenter and provide access to it either from thin client front ends or simply via software on remote users’ existing machines, such as users at home logging in to the office.

We will start with the concept of the terminal server, as this is the most easily virtualized and the most straightforward.  Whether we are talking about virtualizing the server on which we run Microsoft Terminal Server (now known as Remote Desktop Services), Citrix XenApp or simply a standard Linux remote desktop terminal server, we need do nothing more than install that server into a virtual environment rather than into a physical one.  It is really a question of server virtualization, not of desktop virtualization – it is only perceived by the end user as being related to their desktops.

The other method of desktop virtualization, “true desktop virtualization” as I will refer to it, is to actually run desktop operating system images on a virtual server just as if they were normal desktops dedicated to a user.  This means virtualizing operating systems like Windows XP, Windows Vista or Windows 7, with each image dedicated to a single user just as if it were a physical desktop.  We could, theoretically, do the same thing with Linux or some other flavor of Unix, but as those systems do not have per-user licensing or desktop-specific versions, and since they always run their desktops in a server mode, we would only be able to differentiate between a true virtualized desktop and a Unix-based terminal server by its usage and not by any strict technological means – they are one and the same.  Only Windows truly offers a dedicated desktop model that allows this to happen in this particular manner, without the concept of shared access to a single image simultaneously.

Due to licensing restrictions from Microsoft, Windows desktops must be installed one image per user even though technology exists to make this unnecessary, but there are still benefits to this model.  The big benefits of virtualized desktops definitely go to companies whose employees roam, either internally or even externally.

Using virtualized desktops provides far more control to the company than does providing laptops.  Laptops can be stolen, lost or damaged.  Laptops wear out and need to be replaced regularly.  A virtual desktop that is made accessible from outside the company can be secured and protected in ways that a laptop cannot be.  Upgrades are much simpler, and there is no concern about the virtual desktop becoming cut off from the corporate network and unable to be supported by the IT staff.

Almost any worker who uses a computer in the office already has one at home for personal use, often has a laptop as well, and typically has high speed Internet access.  Providing remote access to a virtual desktop at the office therefore potentially incurs no additional hardware expense for the company or staff while easing administrative burdens, lowering power consumption and increasing security.  Some workers will always require laptops but many will not.

For workers at a traditional desk inside the company’s offices, there is still a need for something physically sitting on the desk that connects the keyboard, mouse and monitor to the newly virtualized desktop.  This could be an old PC that was planned for retirement, a dedicated hardware thin client or even a laptop.  Internal staff can then move around the office or between offices, sit at any available desk with a thin client and log in to their own dedicated virtual desktop, working exactly as if they were at their own desk.  They can then go home and work from there as well if this is allowed.

Like virtualized servers, desktops can, if the need warrants, be easily backed up either by traditional means or by simply taking complete system images.  The flexibility is there to do whatever makes the most sense in your environment.

With the complexity and surprise cost of licensing, as well as the inability to completely do away with hardware on the desktop except for purely remote users, desktop virtualization is hardly the no-brainer that server virtualization is.  Desktop virtualization will require careful analysis on a case by case basis to determine if it will meet the cost and usability needs of the individual organization.  Most organizations that choose to go this route will likely opt to only partially virtualize – using it only where it makes the most sense, such as for roaming users and remote workers, while keeping traditional desktops for those users who would seldom be in a position to take advantage of this technology.  Terminal server options will often be far more common than “true desktop virtualization,” which often makes sense only for power users, developers or certain applications that work poorly in a terminal server mode.

There is a final usage of virtualization that warrants discussion, if only because it is important to understand its use in the business environment.  This final type of virtualization is not used to put operating systems into the datacenter on server hardware but instead to run additional operating system images on traditional desktops and laptops.  This is a common scenario for people who need to test multiple operating systems for support or development.  It is a highly useful application of the technology, but it is a niche scenario, primarily useful for compatibility testing, and as it is not intended for production systems it is generally outside the scope of this discussion.

In all of this discussion there has been, somewhat conspicuously, no mention of Apple’s Mac OS X products.  There is a reason for this.  Apple does not license Mac OS X so that it may be virtualized on non-Apple hardware, and Apple does not have an enterprise-ready virtualization product for its own platform.  The only way to virtualize Mac OS X is to purchase full, additional licenses for each operating system instance – thereby eliminating most of the cost benefits of this approach – and to run it on a host-based virtualization product such as VMWare Fusion or Parallels, which are designed for use on top of a desktop and not as server-class products.  This is a major gap in the Mac OS X portfolio and one of the ways in which Apple continues to lag behind the rest of the market in capability and in its understanding of its business customers’ needs.  If Apple were to change its licensing strategy around virtualization, Mac OS X would prove to be an extremely popular and useful operating system to virtualize from both the server and desktop perspectives.

Virtualization is a great opportunity to lower cost and raise productivity while reducing risk for businesses of any size and with budgets as low as zero.  Many technologies promise important improvements for businesses but most create questionable value while incurring real cost.  Virtualization brings real, measurable value while often costing nothing and often reducing spending immediately.  For many businesses virtualization is the technology that they have always dreamed of and is, in fact, available today.