State of Thin Clients

The IT world loves to swing back and forth between pushing processing out to the user via fat clients and pulling it back to the server, leaving users with thin clients.  It is a long-running battle, one that began with the first multiuser computer systems several decades ago and will likely continue for a very long time to come.

When I began working in IT, thin clients were simple text terminals attached to a single, central server via serial connections.  Limited to very basic text input, these terminals served their purpose at the time: providing relatively low cost computing to a large number of users.  The system wasn’t pretty or glamorous, but it was quite functional.

These ancient terminals gave way to the personal computer, and computing power shifted from the datacenter to the desktop, allowing users to run powerful apps like Lotus 1-2-3 and WordPerfect.  Responsive graphical applications were a powerful draw for decentralized processing, and users were enthralled with the new usability.  The text terminal went into very rapid decline.

Eventually centralized power became available in such quantities and at such a low price point that graphical applications could be run from the server with almost the same responsiveness, while clients could be “thin,” needing just a shim of an operating system: enough to provide remote access back to the server.  Thin computing became the darling of the industry again, the term itself was coined, and centralized processing came back into vogue.

Administrators love the central computing model because data and configuration remains in one place.  Backups and management are a breeze.  The idea, at least in theory, is that in doing so desktop support becomes a non-issue with all desktop clients being nothing more than commodity components that can be replaced anytime with completely interchangeable parts.  Since nothing is stored or configured on the desktop there is nothing to support there.

In the initial swings of the “thin computing pendulum” the market movement was dramatic.  When text terminal computing first became available it was practically the only model used in the real world; the value was so compelling that no one could really justify doing anything else.  When the PC was introduced, the move to the fat client was so sweeping that many younger IT professionals today have never actually seen a text terminal in use, even though the move to fat “PC” clients was less all-encompassing than the move to text terminals had been one pendulum swing earlier.

The PC model was generally better for end users because it mimicked how they used computers at home – those that had computers at home.  It also gave them more options for customization and, for better or for worse, the opportunity to install software of their own rather than only what was preconfigured for them on the central server.

Over time there have been many developments in both camps, giving each more and more of the other’s advantages.  Central domain services such as Microsoft’s Active Directory have come along, allowing central management to extend out to fat clients and bringing control and management more in line with traditional thin computing models.  Likewise, companies like Citrix have worked very hard developing new technologies that allow thin clients to perform much more like robust fat clients, making their use as seamless as possible for end users and even making offline use possible for laptop users.

Most shops today have adopted hybrid models: fat clients where they make sense, and thin clients for certain categories of users, for remote workers and for business continuity scenarios.

Over the past decade we have seen a shift in the way that business applications are created and deployed.  Today almost all business applications are web-based and have no client platform dependency.  This affords today’s IT departments a potential new opportunity: shifting from a traditional thin client platform, which requires remote graphical access, to the browser as the new thin client platform.

The move to web apps has happened slowly.  Most businesses depend on a rather large legacy codebase that cannot be easily migrated to the new web app architecture, and some apps simply are not good candidates for it.  But by and large the majority of new business applications are web based, written most often in Java or .NET, and these apps are prime candidates for a new thin computing model.

If our custom business apps are available via the browser then the only commonly used apps still holding us back are the traditional productivity apps, such as the office suites used by nearly all staff today (if they have a computer at all).  Very few desktop apps are actually pervasive except for these.  Increasingly we are seeing browser-based alternatives to the traditional office suites.  Everyone is very aware of Google Apps as a pioneer in this area, with Microsoft now offering online MS Office as well.  But the popular offerings making consumer news headlines require businesses to totally rethink long term strategies for keeping critical business data within their walls, and they are not likely to be highly disruptive to the enterprise for quite some time.

What does pose a threat to the status quo is another class of products, such as ThinkFree Office, which is installed within the organization and used and secured internally just like any other normal business application.  This category of “traditionally installed internal web applications” will allow enterprise IT departments to begin reconsidering their end users’ platforms without having to reevaluate their entire concept of IT.  The biggest barriers today are lingering legacy business applications and power users whose specific desktop apps cannot be encapsulated within a browser.

One of the great advantages, however, of the browser as the new thin client is how simple it is to mix browser-based apps with traditional apps.  The move is transparent and most large businesses are moving in this direction today even if there is no overarching strategy to do so.  The market momentum to develop all new apps for the web is causing this to happen naturally.

Another key advantage of a completely “web based” architectural model is the great ease with which it can be exposed to users outside of the corporate network.  Instead of using cumbersome VPN clients and company laptops, employees can find any web browser, sign in to the company network and have secure business applications delivered to any browser, anywhere.

Bringing this almost unnoticed shift into sharp relief today are, of all things, a handful of consumer devices: Apple’s iPhone and iPad and Google’s Android and ChromeOS platforms.  What all of these devices have in common is a focus on being primarily thin web appliances, thin clients for consumers.  With the majority of consumer computing focused on web connectivity, the need for anything else from a platform is nearly non-existent in the consumer market.  This means that users who once brought their home PC experience to the office as their expectation of a computing environment will soon bring web-based thin computing as their new expectation.

When this shift happens IT departments will need to rethink their internal application delivery strategy.  The change doesn’t have to be dramatic if current development trends are followed and legacy systems are routinely updated.  In fact, one of the great benefits of this new model is that traditional fat clients function very well as browser platforms and will most likely do so for a very long time to come.  Companies adopting this model will likely be able to slow desktop purchasing cycles and prepare either to purchase some form of traditional thin client with an embedded browser or to move to a business version of the new nettop trend we are beginning to see emerge in the consumer space.  Some businesses may even attempt the rather dangerous path of using consumer devices, but the lack of management and security features will likely keep this from being popular in all but rare instances.

I believe, though, that this swing of the pendulum will not be as dramatic as the last one, just as that one was not as dramatic as the swing before it.  It will be an important trend, but IT departments understand more and more that no new technological shift is a silver bullet and that with each new opportunity comes new challenges.  Most IT departments will need to implement some degree of browser-based thin computing over the next few years but most will retain a majority user base of fat clients.  Hybrid environments, as we’ve seen for many years with more traditional models, will continue as before, with each technology used in the target areas where it makes the most sense.

The one area where thin clients continue to be challenged the most is in mobile computing where disconnected users end up being digitally marooned away from their company networks unable to continue working until network connectivity is reestablished.  This is a significant issue for power users who must travel extensively and need to be able to continue working regardless of their current connectivity.  Today this is being solved in the traditional thin client arena thanks to companies like Citrix who continue to advance the state of the art in thin application delivery.

In the browser-based arena we have had to turn to technologies like Google Gears and Adobe AIR in the past to make this possible, but these had poor market penetration.  Coming down the pike, however, is the new HTML 5 Offline API, which is set to redefine how the web works for users who need to go “off the grid” from time to time.  With HTML 5 incorporating offline capabilities and a richer feature set into the specification of the web itself, we expect to see broad and rapid adoption from all of the leading vendors, most likely even before the draft standard is finalized.  While still quite some ways away, this new standard will certainly lay the groundwork for a significant shift towards the browser as a ubiquitous, standard and robust platform.
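As a rough illustration of what the draft offline mechanism looks like today, a page opts in to offline use by pointing at a cache manifest: a plain text file listing the resources the browser should keep available locally.  The file below is a hypothetical sketch (the file names are invented), and the details may change before the standard is finalized.

```text
CACHE MANIFEST
# v1: changing this comment prompts browsers to refetch the cached files

CACHE:
index.html
app.js
styles.css

NETWORK:
*
```

The page references the manifest from its root element (e.g. <html manifest="app.manifest">), after which the files listed under CACHE: remain usable with no network connection, while the NETWORK: wildcard requires connectivity for everything else.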

The future of thin computing looks incredibly promising, both in the enterprise and, for the first time, in the consumer arena.  Adoption of thin computing models will be spurred on by the current movement towards Software as a Service, and SaaS adoption will continue to be encouraged by the widespread presence of thin computing devices.  In many ways browser-based thin computing is the technology side of SaaS that is now maturing, while SaaS itself matures in social acceptance rather than technical feasibility.

Choosing an Email Architecture: Internal or Hosted

If you talk to email specialists, what you find – in my small, anecdotal survey of the market – is that half of these professionals will tell you to simply install email locally, normally Microsoft Exchange, while the other half will simply tell you to go with a hosted service (a.k.a. Software-as-a-Service / SaaS or “in the cloud”), most often Google Apps.  But email is not such a simple architectural component that it should be distilled to trite answers.  Email is one of the most important components of your business’ communications infrastructure, often surpassing telephony, and choosing the right delivery methodology for your company is critical to your long term success.

We will start by considering some basic factors in email hosting.  Email systems require a good deal of bandwidth, quite a significant amount of storage, high reliability, careful management and significant security consideration.

Bandwidth is the first area to consider.  Every email sent and received must travel between the end user and the email server, as well as between the email server and the outside world in the case of externally destined email.  In small businesses nearly all email is destined to leave the company network to go to clients, customers, vendors, etc.  In larger enterprises email use changes: as we approach the Fortune 100, email shifts from being almost exclusively a tool for communicating with people outside the organization to being a platform primarily used for internal communications.

This shift in how email is used is a very important factor in deciding how to deploy email services.  If email is used almost exclusively for intra-staff communications then it lends itself very well to in-house hosting, which increases security and improves WAN bandwidth utilization.  The caveat, of course, is that a highly distributed company of any size would not keep this traffic on a LAN, so its email usage should be treated as external regardless of whether or not it is intra-staff.  Small companies whose communications happen primarily with external parties will find better value in a hosted service.
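The LAN-versus-WAN distinction above can be made concrete with a little arithmetic.  The sketch below uses entirely hypothetical figures (the user count, message volume and message size are invented for illustration) to show why a high internal-mail fraction favors in-house hosting: with a hosted service every message crosses the WAN, while an in-house server keeps intra-staff mail on the LAN.

```python
# Rough WAN-bandwidth estimate for hosted vs. in-house email.
# All figures are illustrative assumptions, not measurements.

def daily_wan_megabytes(users, emails_per_user, avg_kb, internal_fraction, hosted):
    """Approximate WAN traffic per day in megabytes.

    With an in-house server, internal mail stays on the LAN and only
    external mail crosses the WAN.  With a hosted service, every message,
    internal or not, travels between the office and the provider.
    """
    total_kb = users * emails_per_user * avg_kb
    wan_kb = total_kb if hosted else total_kb * (1 - internal_fraction)
    return wan_kb / 1024

# A hypothetical 200-person office where 60% of mail is internal:
in_house = daily_wan_megabytes(200, 50, 75, 0.6, hosted=False)
hosted = daily_wan_megabytes(200, 50, 75, 0.6, hosted=True)
print(f"in-house WAN load: {in_house:.0f} MB/day")
print(f"hosted WAN load:   {hosted:.0f} MB/day")
```

Plugging in real numbers from your own mail logs turns this into a quick sanity check on whether your internet connection can comfortably carry a hosted service.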

Storage is often a smaller factor in email architecture decisions than it first appears.  Traditionally email’s storage requirements made a compelling argument for hosting internally due to the cost benefit of keeping large storage, especially that used for archival needs, local.  Recently, large hosted email vendors such as Rackspace and Google Apps have brought the price of online, archival email storage so low that, in many cases, it may actually be more cost effective to use hosted storage rather than local storage or, at least, the cost is at parity.  Even long term archival storage can be had very cost effectively in a hosted solution today.

Reliability is a rather complex subject.  Email is critical to any organization.  If an email system goes down many companies simply grind to a halt.  In some cases, the company effectively shuts down when email stops flowing.  Not only do employees stop communicating with each other but customers, vendors, suppliers and others see the company as being offline at best and out of business at worst.  Interrupting communications with the outside world can represent immediate and serious financial impact to almost any business.

Hosted email has the obvious advantage of being hosted in a large, commercial datacenter with redundancy at every level (assuming a top tier vendor) from hardware to storage to networking to power to support.  Hosting email in house requires a business to determine the level of redundancy that is most cost effective given the business’ ability to withstand email downtime and is generally an exercise in compromises – how much reliability can a company do without given the cost necessary to provide it.

Some companies will opt to host email servers at a colocation facility, which provides many redundant components, but to match the features of a Rackspace or Google level offering multiple datacenters would likely be needed.  Colocation is a halfway option, providing the technical features of hosted options with the management responsibility and flexibility of in-house email systems.

A more common scenario, though, is for companies to host a single email server completely within their walls, relying on their internal power, hardware and network connection.  In a scenario like this a company must either take extreme measures to ensure uptime – such as hosting a completely redundant site at immense cost – or front-end their entire email infrastructure with a reliable online spooling service such as Postini, MessageLabs or MXLogic.  The cost of such services, while critical for the reliability most companies need, is often equal to or even greater than complete email hosting options.  This ongoing, scaling spooling cost will often make fully hosted email services a less expensive option than in-house hosting.

Management cost is very difficult to determine but requires attention.  A fully hosted solution requires relatively little technical knowledge; the time needed to manage it is low and so is the skill level necessary to do so.  With an in-house solution your company must supply infrastructure, networking, security, system and email skills.  Depending on your needs and your available staff this may be part time work for a single professional or it may require multiple FTEs or even outside consultants.  The total time necessary to manage an in-house email system varies dramatically and is often very hard to calculate due to the complex nature of the situation but, at a minimum, it is orders of magnitude greater than for a hosted solution.

Security is the final significant consideration.  Beyond traditional system-level security, email requires spam filtering.  Handling spam can be done in many ways: in software on the email server, on an appliance on the local network, farmed out to a spam filtering service or left to the hosted email provider.  Spam filtering, if handled internally, is seldom a set-and-forget service but one that requires regular attention and generally extra cost in licensing and management.

After looking at these main considerations, every company should sit down, crunch the numbers and determine which solution makes the most sense for them individually.  Often it is necessary to use a spreadsheet and model several scenarios to see what each solution will cost both up front and over time.  This, combined with a valuation of features and their applicability to the company, will be critical in determining the appropriateness of each option.
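As a sketch of that spreadsheet exercise, the short program below compares cumulative cost over five years.  Every figure is a hypothetical placeholder: substitute your own vendor quotes, hardware costs and staffing estimates before drawing any conclusions.

```python
# Toy total-cost-of-ownership comparison: hosted vs. in-house email.
# All dollar amounts are invented placeholders for illustration only.

def cumulative_cost(upfront, annual, years):
    """Cumulative cost at the end of each year: one-time spend plus recurring spend."""
    return [upfront + annual * y for y in range(1, years + 1)]

users = 100
# Hosted: no capital outlay, a flat per-user monthly fee (assumed $10/user/month).
hosted = cumulative_cost(upfront=0, annual=users * 10 * 12, years=5)
# In-house: assumed server, licensing and setup up front, then admin time each year.
in_house = cumulative_cost(upfront=25_000, annual=18_000, years=5)

for year, (h, i) in enumerate(zip(hosted, in_house), start=1):
    cheaper = "hosted" if h < i else "in-house"
    print(f"year {year}: hosted ${h:,} vs in-house ${i:,} -> {cheaper}")
```

The crossover behavior depends entirely on the inputs; with different staffing assumptions or user counts the in-house line can win instead, which is exactly why the exercise is worth running with your own numbers.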

The secret weapons of the in-house solution are features, integration and flexibility.  In-house email options can be extended or modified to offer exactly the feature set that the organization requires – sometimes at additional cost.  A perfect example of this is Zimbra’s instant messaging integration which can be a significant value-add for an email platform.  This has to be considered in addition to raw cost.  Integration with existing internal authentication mechanisms can be an important factor as well.

In my own experience and cost calculations, hosted solutions are the appropriate choice for the vast majority of the SMB space due to raw economics, while large and enterprise class customers will find compelling benefits in the flexibility and internal communications advantages of in-house solutions.  Small businesses struggle mostly with cost while large businesses struggle primarily with the communications complexity of their scale.  Large businesses also get the best value from in-house solutions due to “professional density”: with a large IT staff, the management overhead of an in-house system is absorbed by dedicated specialists rather than wasting generalists’ time.

Today, whether a business chooses to host their own email or to receive email as a service, there are many options from which to choose even once a basic architecture is chosen.  Traditionally only a few in-house options such as MS Exchange and Lotus Notes would be considered, but new alternatives such as Zimbra (recently acquired by VMware), Scalix and Kerio are expanding the landscape with lower costs, new deployment options and aggressive feature sets.  Hosting’s relative newcomer, and overnight industry heavyweight, Rackspace is drawing a lot of attention with new email offerings that more closely mimic traditional in-house products, while Google continues to get attention with their unique Gmail services.  I expect the hosted email space to continue to become more competitive, with new integration features being a key focus.

Every business is unique, and all of these factors must be weighed together.  A combination of business and IT skills is necessary to evaluate the available options and opportunities, and no one discipline should be making these decisions in isolation.  This is a perfect example of where IT managers must understand the economics of the business in addition to the technological aspects of the solution.