State of Thin Clients

The IT world loves to swing back and forth between moving processing out to the user via fat clients and moving it back to the server, leaving users with thin clients.  The battle is a long-running one that started with the first multiuser computer systems several decades ago and will likely continue for a very long time to come.

When I began working in IT, thin clients were simple text terminals attached to a single, central server via serial connections.  Limited to very basic text input, these terminals served their purpose at the time, providing relatively low-cost computing to a large number of users.  The system wasn’t pretty or glamorous, but it was quite functional.

These ancient terminals gave way to the personal computer, and computing power shifted from the datacenter to the desktop, allowing users to run powerful apps like Lotus 1-2-3 and WordPerfect.  Responsive graphical applications were a powerful draw for decentralized processing.  Users were enthralled with the new usability, and the text terminal went into very rapid decline.

Eventually centralized power was available in such quantity and at such a low price point that graphical applications could be run from the server with almost as much responsiveness, while clients could be “thin,” needing just a shim of an operating system – enough to provide remote access back to the server.  Thin computing became the darling of the industry again, the term itself arose, and centralized processing came back into vogue.

Administrators love the central computing model because data and configuration remain in one place.  Backups and management are a breeze.  The idea, at least in theory, is that desktop support becomes a non-issue, with all desktop clients being nothing more than commodity components that can be replaced at any time with completely interchangeable parts.  Since nothing is stored or configured on the desktop, there is nothing to support there.

In the initial swings of the “thin computing pendulum” the market movement was dramatic.  When text terminal computing first became available it was practically the only model used in the real world; the value was so dramatic that no one could really justify doing anything else.  When the PC was introduced, the move to the fat client was so sweeping that many younger IT professionals today have never actually seen text terminals in use – even though the move to fat “PC” clients was not as all-encompassing as the move to text terminals had been one pendulum swing earlier.

The PC model was generally better for end users because it mimicked how they used computers at home – those who had computers at home.  It also gave them more options for customization and, for better or for worse, the opportunity to begin installing software of their own rather than only the software preconfigured for them on the central server.

Over time there have been many developments in both camps, giving each more and more of the other’s advantages.  Central domain services such as Microsoft’s Active Directory have come along, allowing central management to extend out to fat clients and bringing control more in line with traditional thin computing models.  Likewise, companies like Citrix have worked very hard developing new technologies that allow thin clients to perform much more like robust fat clients, making their use as seamless as possible for end users and even making offline use possible for laptop users.

Most shops today have adopted hybrid models: fat clients where they make sense, and thin clients for certain categories of users, for remote workers and for business continuity scenarios.

Over the past decade we have seen a shift in the way that business applications are created and deployed.  Today almost all new business applications are web-based and have no client platform dependency.  This affords today’s IT departments a potential new opportunity – to shift from a traditional thin client platform, which requires remote graphical access, to the browser as the new thin client platform.

The move to web apps has happened slowly.  Most businesses have a rather large legacy codebase on which they are quite dependent and which cannot easily be transferred to the new web app architecture, and some apps simply are not good candidates for it.  But by and large the majority of new business applications are web based, written most often in Java or .NET, and these apps are prime candidates for a new thin computing model.

If our custom business apps are available via the browser, then the only commonly used apps still holding us back are the traditional productivity apps, such as the office suites that are widely used by nearly all staff today (those who have computers at all).  Very few other desktop apps are actually pervasive.  Increasingly we are seeing browser-based alternatives to the traditional office suites.  Everyone is aware of Google Apps as a pioneer in this area, and Microsoft now offers MS Office online as well.  But the popular offerings making consumer news headlines require businesses to totally rethink long-term strategies built around keeping critical business data within their own walls, and they are not likely to be highly disruptive to the enterprise for quite some time.

What does pose a threat to the status quo is another class of products, such as ThinkFree Office, which is installed within the organization and used and secured internally just like any other business application.  This category of “traditionally installed internal web applications” will allow enterprise IT departments to begin reconsidering their end users’ platforms without having to reevaluate their entire concept of IT in general.  The biggest barriers today are lingering legacy business applications and power users who depend on specific desktop apps that cannot be encapsulated within a browser.

One of the great advantages, however, of the browser as the new thin client is how simple it is to mix browser-based apps with traditional apps.  The move is transparent and most large businesses are moving in this direction today even if there is no overarching strategy to do so.  The market momentum to develop all new apps for the web is causing this to happen naturally.

Another key advantage of a completely “web based” architectural model is the great ease with which it can be exposed to users outside of the corporate network.  Instead of using cumbersome VPN clients and company laptops, employees can find any web browser, sign in to the company network and have secure business applications delivered to any browser, anywhere.

Bringing this almost unnoticed shift into sharp relief today are a handful of, of all things, consumer devices: Apple’s iPhone and iPad and Google’s Android and ChromeOS platforms.  What all of these devices have in common is a focus on being primarily thin web appliances – thin clients for consumers.  With the majority of consumer computing focused on web connectivity, the need for anything else from a platform is nearly non-existent in the consumer market.  This means that users who once brought their home PC experience to the office as their expectation of a computing environment will soon begin bringing web-based thin computing as their new expectation.

When this shift happens IT departments will need to rethink their internal application delivery strategy.  The change doesn’t have to be dramatic if current development trends are followed and legacy systems are routinely updated.  In fact, one of the great benefits of this new model is that traditional fat clients function very well as browser platforms and will most likely do so for a very long time to come.  Companies adopting this model will likely be able to slow desktop purchasing cycles and prepare either to purchase some form of traditional thin client with an embedded browser or to move to a business version of the new nettop trend that we are beginning to see emerge in the consumer space.  Some businesses may even attempt the rather dangerous path of using consumer devices, but the lack of management and security features will likely keep this from becoming popular in all but rare instances.

I believe, though, that this swing of the pendulum will not be as dramatic as the last one, just as that one was not as dramatic as the swing before it.  It will be an important trend, but IT departments understand more and more that no new technological shift is a silver bullet and that with each new opportunity comes new challenges.  Most IT departments will need to implement some degree of browser-based thin computing over the next few years, but most will retain a majority user base of fat clients.  Hybrid environments, as we have seen for many years with more traditional models, will continue as before, with each technology used in the target areas where it makes the most sense.

The one area where thin clients continue to be challenged the most is mobile computing, where disconnected users end up digitally marooned away from their company networks, unable to continue working until network connectivity is reestablished.  This is a significant issue for power users who must travel extensively and need to be able to continue working regardless of their current connectivity.  Today this is being solved in the traditional thin client arena thanks to companies like Citrix, who continue to advance the state of the art in thin application delivery.

In the browser-based arena we have had to turn to technologies like Google Gears and Adobe AIR to make this possible, but these had poor market penetration.  Coming down the pike, however, is the new HTML 5 offline API, which is set to redefine how the web works for users who need to go “off the grid” from time to time.  With HTML 5 incorporating offline capabilities and a richer feature set into the specification of the web itself, we expect to see broad and rapid adoption from all of the leading vendors – most likely even before the draft standard is finalized.  While still quite some ways away, this new standard will certainly lay the groundwork for a significant shift towards the browser as a ubiquitous, standard and robust platform.
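
To make the pattern concrete, here is a minimal sketch of the offline model this work enables, using real browser facilities (navigator.onLine, the online event and localStorage); sendToServer is a hypothetical stand-in for an actual call back to the company network.

```typescript
// Minimal sketch of browser-side offline handling, assuming the page itself
// is kept available offline (e.g. via an HTML 5 cache manifest).
function sendToServer(data: string): void {
  // Hypothetical placeholder for a real request back to the company network.
  console.log("Syncing to server:", data);
}

function saveDraft(draft: string): void {
  if (navigator.onLine) {
    sendToServer(draft);                         // connected: deliver immediately
  } else {
    localStorage.setItem("pendingDraft", draft); // marooned: persist locally
  }
}

// When connectivity is reestablished, flush the locally queued work.
window.addEventListener("online", () => {
  const pending = localStorage.getItem("pendingDraft");
  if (pending !== null) {
    sendToServer(pending);
    localStorage.removeItem("pendingDraft");
  }
});
```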

The future of thin computing looks incredibly promising, both in the enterprise and, for the first time, in the consumer arena.  Adoption of thin computing models will be spurred on by the current movement towards Software as a Service, and SaaS adoption will in turn continue to be encouraged by the widespread presence of thin computing devices.  In many ways browser-based thin computing represents the technology aspect of the SaaS arena that is now maturing, while SaaS itself matures in social acceptance rather than technical feasibility.

Choosing an Email Architecture: Internal or Hosted

If you talk to email specialists what you seem to find, in my small, anecdotal survey of the market, is that half of these professionals will tell you simply to install email locally, normally Microsoft Exchange, and the other half will tell you to go with a hosted service (a.k.a. Software-as-a-Service, SaaS or “in the cloud”), most often Google Apps.  But email is not such a simple architectural component that it should be distilled to trite answers.  Email is one of the most important components of your business’ communications infrastructure, often surpassing telephony, and choosing the right delivery methodology for your company is critical to your long-term success.

We will start by considering some basic factors in email hosting.  Email systems require a good deal of bandwidth, a significant amount of storage, high reliability, careful management and serious security consideration.

Bandwidth is the first area to consider.  Every email sent and received must travel between the end user and the email server, as well as between the email server and the outside world in the case of externally destined email.  In small businesses nearly all email is destined to leave the company network, going to clients, customers, vendors and so on.  In larger enterprises email use changes: as we approach the Fortune 100, email shifts from being almost exclusively a tool for communicating with people outside the organization to being a platform used primarily for internal communications.

This shift in how email itself is used is a very important factor in deciding how to deploy email services.  If email is used almost exclusively for intra-staff communications, then it lends itself very well to in-house hosting, increasing security and improving WAN bandwidth utilization.  The caveat, of course, is that a highly distributed company of any size would not keep this traffic on a LAN, and so its usage should be treated as external regardless of whether it is intra-staff.  Small companies whose communications happen primarily with external users will find better utilization in a hosted service.
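
As a rough illustration of this reasoning, the WAN impact of the two architectures can be sketched with some simple arithmetic; every figure below is an assumed placeholder for demonstration, not measured data.

```typescript
// Back-of-the-envelope daily WAN load for email; all inputs are assumptions.
function wanLoadMBPerDay(
  users: number,
  messagesPerUserPerDay: number,
  avgMessageKB: number,
  externalShare: number, // fraction of mail destined outside the company
  hosted: boolean
): number {
  const totalKB = users * messagesPerUserPerDay * avgMessageKB;
  // Hosted: every message crosses the WAN between the users and the provider.
  // In-house: only externally destined mail leaves the LAN.
  return (hosted ? totalKB : totalKB * externalShare) / 1024;
}

// Small shop, 90% external mail: hosting adds almost no extra WAN burden.
console.log(wanLoadMBPerDay(50, 100, 75, 0.9, true));   // ~366 MB/day hosted
console.log(wanLoadMBPerDay(50, 100, 75, 0.9, false));  // ~330 MB/day in-house

// Large enterprise, 20% external mail: in-house keeps most traffic on the LAN.
console.log(wanLoadMBPerDay(5000, 100, 75, 0.2, true));  // ~36,600 MB/day hosted
console.log(wanLoadMBPerDay(5000, 100, 75, 0.2, false)); // ~7,300 MB/day in-house
```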

Storage is often a smaller factor in email architecture decisions than it may at first appear.  Traditionally email’s storage requirements made a compelling argument for hosting internally due to the cost benefit of keeping large storage, especially that used for archival needs, local.  Recently, large hosted email vendors such as Rackspace and Google have brought the price of online, archival email storage so low that, in many cases, it may actually be more cost effective to use hosted storage rather than local storage or, at least, the cost is at parity.  Even long-term archival storage can be had very cost effectively in a hosted solution today.

Reliability is a rather complex subject.  Email is critical to any organization.  If an email system goes down many companies simply grind to a halt.  In some cases, the company effectively shuts down when email stops flowing.  Not only do employees stop communicating with each other but customers, vendors, suppliers and others see the company as being offline at best and out of business at worst.  Interrupting communications with the outside world can represent immediate and serious financial impact to almost any business.

Hosted email has the obvious advantage of being hosted in a large, commercial datacenter with redundancy at every level (assuming a top tier vendor) from hardware to storage to networking to power to support.  Hosting email in-house requires a business to determine the level of redundancy that is most cost effective given its ability to withstand email downtime, and this is generally an exercise in compromise: how much reliability can the company do without, given the cost necessary to provide it?

Some companies will opt to host email servers at a colocation facility, which provides many redundant components, but to match the features of a Rackspace- or Google-level offering, multiple datacenters would likely be needed.  Colocation is a halfway option, providing some of the technical features of hosted offerings with the management and flexibility of in-house email systems.

A more common scenario, though, is for a company to host a single email server completely within its own walls, relying on internal power, hardware and network connectivity.  In a scenario like this a company must either take extreme measures to ensure uptime – such as hosting a completely redundant site at immense cost – or front-end its entire email infrastructure with a reliable online spooling service such as Postini, MessageLabs or MXLogic.  The cost of such services, while critical for the reliability most companies need, is often equal to or even greater than that of complete email hosting; this ongoing, scaling cost can easily make fully hosted email the less expensive option compared to in-house hosting.

Management cost is very difficult to determine but requires attention.  A fully hosted solution requires relatively little technical knowledge; the time needed to manage it is low and the skill level necessary is likewise low.  With an in-house solution your company must supply infrastructure, networking, security, system and email skills.  Depending on your needs and your available staff this may be a part-time duty for a single professional or it may require multiple FTEs or even outside consultants.  The total time necessary to manage an in-house email system varies dramatically and is often very hard to calculate due to the complex nature of the situation but, at a minimum, it is orders of magnitude greater than that of a hosted solution.

Security is the final significant consideration.  Beyond traditional system-level security, email requires spam filtering.  Handling spam can be done in many ways: in software on the email server, on an appliance located on the local network, farmed out to a spam filtering service or left to the hosted email provider.  Spam filtering, if handled internally, is seldom a set-and-forget service but one that requires regular attention and generally extra cost in licensing and management.

After looking at these main considerations, every company should sit down, crunch the numbers and determine which solution makes the most sense for it individually.  Often it is necessary to use a spreadsheet and play with several scenarios to see what each solution will cost both up front and over time.  This, combined with a valuation of features and their applicability to the company, will be critical in determining the appropriateness of each option.
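
A minimal sketch of such a scenario comparison follows; the structure and every price in it are assumed placeholders meant to show the shape of the calculation, not real quotes.

```typescript
// Sketch of a hosted-versus-in-house cost scenario; all figures are assumptions.
interface EmailScenario {
  name: string;
  upFront: number;        // hardware, licenses, setup
  monthlyPerUser: number; // subscription or spooling fees
  monthlyFixed: number;   // admin time, power, support contracts
}

function totalCost(s: EmailScenario, users: number, months: number): number {
  return s.upFront + months * (s.monthlyFixed + s.monthlyPerUser * users);
}

const scenarios: EmailScenario[] = [
  { name: "Hosted (SaaS)", upFront: 0, monthlyPerUser: 8, monthlyFixed: 0 },
  { name: "In-house", upFront: 12000, monthlyPerUser: 0, monthlyFixed: 900 },
];

for (const s of scenarios) {
  // 50 users over a three year horizon; rerun with your own numbers.
  console.log(`${s.name}: $${totalCost(s, 50, 36)}`); // $14400 vs $44400
}
```

Run across several user counts and time horizons, a model like this quickly reveals where the crossover points lie for a particular business.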

The secret weapons of the in-house solution are features, integration and flexibility.  In-house email options can be extended or modified to offer exactly the feature set that the organization requires – sometimes at additional cost.  A perfect example of this is Zimbra’s instant messaging integration, which can be a significant value-add for an email platform.  This has to be considered in addition to raw cost.  Integration with existing internal authentication mechanisms can be an important factor as well.

In my own experience and cost calculations, hosted solutions represent the vast majority of appropriate solutions in the SMB space due to raw economics, while large and enterprise-class customers will find insurmountable benefits in the flexibility and internal communications advantages of in-house solutions.  Small businesses struggle mostly with cost, while large businesses struggle primarily with the communications complexity of their scale.  Large businesses also get the best value from in-house solutions due to “professional density” – having enough IT professionals already on staff that the management burden is absorbed rather than lost to corporate scale inefficiencies.

Today, whether a business chooses to host its own email or to receive email as a service, there are many options from which to choose even once a basic architecture is selected.  Traditionally only a few in-house options such as MS Exchange and Lotus Notes would be considered, but new alternatives such as Zimbra (recently acquired by VMware), Scalix and Kerio are expanding the landscape with lower costs, new deployment options and aggressive feature sets.  Hosting’s relative newcomer, and overnight industry heavyweight, Rackspace is drawing a lot of attention with new email offerings that more closely mimic traditional in-house systems, while Google continues to get attention with its unique GMail services.  I expect the hosted email space to continue to become more competitive, with new integration features being a key focus.

Every business is unique and the whole of the factors must be considered.  A combination of business and IT skills is necessary to evaluate the available options and opportunities, and no one discipline should make these decisions in isolation.  This is a perfect example of where IT managers must understand the economics of the business in addition to the technological aspects of the solution.

Linux Virtualization Deployment Advantage

As more and more businesses begin to deploy virtualization broadly, we must step back and reconsider the opportunities presented by this shift in datacenter architecture.  Virtualization comes with new challenges and with potential not only for cost savings but for aggressive project implementation.  Small businesses especially, when using virtualization, find themselves prepared for projects that they could never have envisioned during the era of physical-only servers.

The big winners in this space of emerging virtualization opportunity are the open source operating systems such as Linux, OpenSolaris and FreeBSD.  The reason these particular operating systems have unique opportunities that Windows and Mac OS X do not is the way that they are, or can be, licensed.  Each of these operating systems has an option by which it is available completely for free – something that cannot be done with Windows or Mac OS X.

Traditionally, when purchasing a new server, a business would price out expensive hardware with relatively inexpensive software.  An enterprise operating system, such as Windows, would typically represent a relatively small percentage of the cost of a new server: even a small server would cost a few thousand dollars, while Windows Server could easily be purchased for less than one thousand.  In this scenario a business looking to purchase a new server would see only a very small cost savings in opting for a “free” operating system, since introducing a new OS has its own risks and the bulk of the cost of the new server is in the hardware, which would still need to be purchased.

Given that equation, only a rare small business would consider the purchase of a non-Windows-based server.  The chance of failure is too high given the risk of change, and the cost savings are too small.  Today, though, virtualization is commonplace and becoming more ubiquitous every day.  Businesses virtualizing their infrastructure typically have excess capacity on their servers that is going unused.  As these businesses and their IT departments begin to look to utilize this spare capacity, they will increasingly find that the cost of deploying a virtualized Windows Server remains high while the cost of deploying a virtualized Linux or OpenSolaris server is nominal – generally nothing more than the effort to do so, without any capital expenditure or its associated risk.

The ability to deploy new servers, at any time, without any cost is a significant advantage that companies have not yet begun to truly comprehend.  If a business wants a new web server, for instance, it can have one provisioned and built in thirty minutes without buying any licenses.  Having redundant virtualization hardware means that a redundant web server can be had as well – again without any capital cost.  Unlike with Windows (or other commercial operating systems), there is no need to purchase a second license just to have a backup server.

This means that for the first time many businesses can begin to consider clusters as well.  Typically the cost of licensing software for clustering was prohibitive, but when that licensing becomes free, clusters suddenly become very attractive options.

Of course, as open source proponents will point out, the low cost of Linux and other free and open source solutions has long been a reason to move to these platforms, but this discounts the incredible shift in pricing structure that occurs only when spare usable capacity meets the previously existing free licenses.  It is only because so many businesses have already implemented virtualization strategies, or are in the process of doing so, that this new opportunity truly presents itself.

The first challenge will be in getting businesses to begin to think of operating systems and application platforms as being free.  The ways in which businesses may take advantage of this have yet to be seen.  Businesses are so used to being hamstrung by the need to buy new hardware and expensive server software licenses for every new system deployment that the widespread availability of spare capacity for server images is quite novel indeed.

Of course, as with many new technology changes, it is the small and medium business space where the greatest change will likely take place.  Large enterprises are already doing datacenter consolidation and do not necessarily have spare capacity available to them, as their capacity plans already take virtualization into account.  But in the smaller business space, where capacity planning is a practically non-existent practice, we see a different type of opportunity.

What we typically see in small businesses moving to virtualization is an over-purchasing of hardware.  This generally comes from a misunderstanding of how capacity planning and virtual guest interaction work in a virtualized environment, but also from a desire to err on the side of overpowered rather than underpowered – virtualization capacity planning remains a bit of a “black art.”  Because of this, many small businesses have server resources sitting idle.  It is not uncommon to see a powerful server virtualizing just two server instances when there is capacity to virtualize a dozen or more.
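
The idle headroom is easy to quantify.  The sketch below uses illustrative host and guest sizings (assumptions, not recommendations); real capacity planning must also weigh CPU load, storage I/O and peak demand.

```typescript
// Simplified spare-capacity arithmetic; sizings are illustrative assumptions.
interface Sizing { name: string; ramGB: number; vcpus: number; }

const host: Sizing = { name: "virtualization host", ramGB: 32, vcpus: 8 };
const guests: Sizing[] = [
  { name: "file server", ramGB: 8, vcpus: 2 },
  { name: "email server", ramGB: 4, vcpus: 2 },
];

const usedRam = guests.reduce((sum, g) => sum + g.ramGB, 0);
const usedCpu = guests.reduce((sum, g) => sum + g.vcpus, 0);

// The headroom below is already paid for; filling it with free operating
// systems costs nothing but the effort of deployment.
console.log(`Spare RAM: ${host.ramGB - usedRam} GB`); // 20 GB
console.log(`Spare vCPUs: ${host.vcpus - usedCpu}`);  // 4, more with overcommit
```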

It is this overprovisioning of hardware that offers a unique opportunity.  Many small businesses, and even medium-sized businesses, may manage to effectively virtualize their entire existing server infrastructure, leaving no further opportunity for cost savings through consolidation.  At that point the spare capacity of the existing servers offers no further cost savings and can instead be viewed as capacity for growth.

This raises the question: what new deployment opportunities exist given this spare capacity?  The question is difficult to answer, as it will be different for nearly every business, but we can look at some commonalities to build a rough picture of where we may see new value presenting itself.

The most obvious new opportunity is in new web applications.  Small businesses often would like to take advantage of free web-based applications but do not want to risk deploying new, low-priority applications to their existing Windows-based web server, or do not even have a server available to do so.  Creating one or more open source application servers is incredibly simple.  Deploying a wiki, a corporate web portal, a blogging engine or news site, a bug or incident tracking application, a microblogging platform (à la laconi.ca), a CRM, an ERP or any of thousands of similar applications can be done quickly and easily with minimal cost, using only “spare” time from the existing IT resources.  Any number of internal applications such as these could bring value to the company while producing very little impact on a virtualization platform, so many could be deployed using only a small amount of excess capacity.

Beyond the obvious web apps there are more feature-rich systems that could be deployed for no cost.  A great example is the OpenFire instant messaging and presence server: companies can suddenly roll out a complete enterprise-class, secure, internal instant messaging system at no cost whatsoever.  Another example is monitoring systems such as Nagios, Zenoss or Zabbix – all of which are available for free and represent a real benefit for companies that currently have no such system.  Enterprise monitoring, completely for free.

Beyond new applications there is also an “environmental” benefit to be had.  In an enterprise environment, changes going into production pass through a series of testing stages: typically big businesses maintain a development environment, a user acceptance testing environment and then the production environment.  For a small business to do this with Windows is extremely cost prohibitive, as the servers in each environment need to be licensed.  But with open source servers virtualized on spare capacity, deploying virtual servers for each of these environments is completely free, allowing small businesses to test their own changes before making them in production – added stability that was previously unaffordable to them.

After all of these growth benefits there is one additional benefit to consider – flexibility.  Because these new systems can be deployed and tested at no cost, small shops have a new opportunity to deploy open source solutions that may replace the expensive Windows solutions they currently use.  This could include replacing Exchange with Zimbra, IIS with Apache or Active Directory with an LDAP server.  A project like this would be risky and potentially costly if the hardware and software had to be purchased up front.  But if the project can be done using only free time from the existing IT department – first as a free proof of concept, then as a pilot and finally as a full production replacement – risk can be minimized and the entire project can be effectively free.

While a full architectural replacement may be very aggressive for the average small business, it is also a very significant potential cost savings.  Moving completely to open source systems is not for everyone and should be evaluated carefully.  The ability to evaluate a project of this magnitude for free is very important, and small businesses should consider doing so to be sure that they are using the systems that make the most sense for their business model and needs rather than simply the solutions with which they are already familiar or that are already in place.

There are many additional ways in which free and open source products, deployed on existing excess server capacity, can be used to expand the IT infrastructure of small businesses.  Learning to seek out opportunities rather than merely seeking cost savings from IT is a new process for most small businesses and requires some relearning, but those that take the time to pursue these opportunities have many benefits to gain.

In House Email for Small Businesses

In small businesses the primary concern with email is cost.  Email is a commodity, and especially in smaller shops the biggest differentiating factor between email products and vendors is cost.  In larger companies factors beyond cost begin to come into play more significantly: directory services, system integration, push email, extended client support, collaboration tools, presence and more.

Surprisingly, when small businesses decide to bring their email in-house they seem to turn immediately to Microsoft Exchange.  Now I don’t want to belittle Exchange’s place in the market.  Exchange is an extremely robust and feature-rich product that has earned its reputation as the go-to enterprise collaboration and messaging server.  In the last decade Exchange came seemingly from nowhere to completely dominate the large business email market.  People simply assume that you run Exchange in the Fortune 500 and, for the most part, they are correct.

The features for which Exchange is best known, however, are not features often critical, or even useful, to small businesses.  In actuality, the weight of Exchange – necessary to support so many great big-business features – can make it unwieldy for small businesses, even those with the financial and technological resources to support it.  Exchange focuses on collaboration and internal team communications.

Exchange brings with it many burdens, the first being cost: up-front purchasing, licensing and ongoing support.  Up-front costs of Exchange include the Exchange email server purchase plus the licenses necessary for the Windows Servers – yes, that is multiple servers – on which it runs.  (You can mitigate some of this cost by purchasing Microsoft’s Small Business Server, which integrates these components together, but extra costs remain and flexibility is lost.)  Licensing costs for Exchange include the needed Windows Server CALs and Exchange CALs for every user – and in some cases for fictional user accounts – who will need to access the system.  Ongoing support cost comes from the extra complexity arising from Exchange’s complex feature set and deployment architecture.

The second set of burdens with Exchange comes from the user interface, namely Outlook.  Technically Exchange requires no additional user interface, as Outlook Web Access (OWA) is included for free and is a very functional web interface for email.  This would be fine if all of Exchange’s functionality were exposed through OWA, but it is not, so OWA is often nothing more than a decent fall-back for remote users who are away from their corporate laptops.  To really achieve the benefits of Exchange a company needs to invest in Microsoft Outlook, a very robust and powerful email and collaboration client but also an expensive one.  The per-user cost of Outlook can be quite significant when added to the per-user costs already imposed by the Exchange licensing.
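
To see how quickly these per-user costs stack up, consider a small illustrative tally; every price below is an assumed round number for demonstration, not a Microsoft list price.

```typescript
// Illustrative up-front Exchange licensing tally; all prices are assumptions.
const users = 25;

const serverCosts =
  2 * 800 + // at least two Windows Server licenses (assumed price each)
  700;      // Exchange server license (assumed)

const perUserCosts =
  30 +  // Windows Server CAL (assumed)
  35 +  // Exchange CAL (assumed)
  110;  // Outlook client license (assumed)

const upFront = serverCosts + perUserCosts * users;
console.log(`Up-front licensing for ${users} users: $${upFront}`); // $6675
```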

The third set of burdens comes from the overhead of managing such a complex and powerful beast as Exchange.  Exchange is no simple system and, when secured according to best practices, spans multiple physical servers operating in multiple roles.  Exchange system administration is considered its own discipline within IT or, at least, a Windows Server specialty.  Qualified Exchange admins are costly and in demand from big business.  Small businesses looking to acquire good Exchange talent will either pay top dollar, hire consultants or attempt to make do with less experienced staff – a potential disaster on such a critical and publicly exposed system.  In addition to managing the Exchange system itself, the staff will also need to contend with deploying and maintaining the Outlook clients which, while not complicated, does increase the burden on the IT department compared to other solutions.

More potential cost comes from the need to supply anti-virus and anti-spam technologies to support the Exchange installation.  It would be unfair to mention AV and AS technologies in relation to Exchange without pointing out that any in-house email system will absolutely need them as well – these costs are certainly not unique to Exchange.  However, the ecosystem surrounding Exchange has a very strong tendency to encourage the use of expensive, commercial third party tools to meet these needs, while outside of Exchange, AV and AS technologies are often included with the email package and no further purchases are needed.

Vying for attention in the Exchange-alternative space are the open source entries Zimbra and Scalix as well as several commercial products such as IBM’s Lotus Notes, Novell’s GroupWise, Open-Xchange and Kerio’s MailServer.  Of these, Lotus Notes and GroupWise target primarily the large business space, bringing their own set of complex collaboration functionality and cost.  The other four – Zimbra, Scalix, Open-Xchange and Kerio MailServer – focus primarily on the small business space and bring leaner, more targeted solutions that will more likely fit the profile desired by the majority of small businesses looking to bring their email solution in-house.

Over the last few years Zimbra especially has been in the news with its advanced web interface, its early sale to Yahoo! and its very recent acquisition by VMware.  Zimbra has led, at least in the media, the charge of alternative vendors looking to open up the in-house email market.  What makes these products stand out is that they deliver the bulk of Exchange’s enterprise-level features, including calendaring and other important corporate applications, but do so either for free or at very competitive prices, and through robust web interfaces that remove the need for a local fat client like Outlook (while maintaining the option).

Zimbra and Scalix truly stand out as ideal candidates for the majority of small businesses looking to keep their email in-house.  Both offer a wide range of functionality, robust AJAX-based web interfaces, large commercial installation bases and broad industry support, along with the paid option of full vendor support.  But the biggest benefit for many small businesses is that these packages are available in completely free editions, allowing an SMB on a budget to rely completely upon its internal IT department or its IT vendor for support rather than buying expensive, per-user email system licenses.

In addition to being free themselves, Zimbra and Scalix offer a range of deployment scenarios including Red Hat Linux, its free alternative CentOS Linux, and Novell’s SUSE Linux.  By being available on these platforms these vendors again lower the cost of deploying their solutions, as no Windows Server license is required to support them.  This is a large potential cost savings over Exchange, which requires not one but at least two Windows Server licenses on which to run.  Linux also brings cost and performance advantages in the virtualization space, with more, and more varied, virtualization options than most other platforms.

Caveats exist, of course.  Many shops are wary when looking at non-Microsoft solutions, and a lack of skilled Linux technicians in the SMB space is a real concern.  Windows admins are abundant: it is a rare shop that would need to even seek one out, let alone fail to find one capable of supporting its systems.  Linux admins are less ubiquitous, but they are widely available and tend, on average and in my opinion, to be more skilled – if only because there is a smaller, more senior pool of people from whom to draw talent.  This helps to balance the equation, making Linux support not nearly as scary as it may seem for small businesses, but it does mean that most SMBs will have to look to more experienced IT consulting firms to assist them – which can bring long-term cost benefits as well.

Many users are attached to the functionality and interfaces of Exchange, and this can be a significant factor in deciding whether to try an alternative product.  Once workers have become accustomed to their existing workflows and processes, changing them by replacing the email server architecture can be rather disruptive.  Exchange offers quite an extensive array of functionality, and users who rely on functions not handled by competing products will not likely be pleased to lose those features, even if alternatives are available.  So knowing your userbase and which features are actually used is important.  Many companies never touch these features and can migrate easily.

Zimbra and Scalix bring their own features, of course.  One of the best is Zimbra’s built-in instant messaging and presence system, built using the standard XMPP protocol.  Putting secure instant messaging directly into the email interface is a huge win for Zimbra and a significant value-add over the status quo.

Obviously the ideal time to consider an alternative email product is at the very beginning, when email is first being deployed, or when a migration from another system is already underway.  But even companies with existing email systems can seek cost benefits by moving to a less costly system, with the savings recouped over a longer period of time and more work necessary to retrain users.

Small businesses should look first to products like Zimbra and Scalix as the de facto choice for their environments, while heavier, more expensive products like Microsoft Exchange should be considered a “special case” choice that requires careful cost analysis and justification.  Far too many SMB IT departments are picking the expensive route without being required to justify their actions.  If more small businesses were diligent about monitoring their IT spending they would likely find many places where their money is being spent not only somewhat liberally but sometimes on features that they cannot use at all, and sometimes on systems that carry many long-term support costs as well.
