The Physical Interaction Considerations of VDI

VDI (Virtual Desktop Infrastructure) differs from traditional server virtualization because, unlike servers, which provide services exclusively over a network, desktops are a point of physical interaction with end users. There is no escaping the need for physical equipment that end users will actually touch. Keyboards, mice, touchscreens, monitors, speakers… these things cannot be virtualized.

Because of this, VDI involves far more complicated decision making and planning than server virtualization does, and its physical requirements can be met by a wide variety of solutions.

Traditionally we have approached the physical interaction needs of VDI and terminal servers through the use of thin clients. Thin clients sit on the network and use the same remote graphical access protocols that we would use anywhere else, such as NX, ICA, RDP and VNC. A thin client runs a full operating system, but a very lean one with a singular purpose: managing connections to other machines. The idea of the thin client is to keep all processing remote and to have only the components needed for networking and local interaction on the local hardware. Thin clients are relatively low cost, low in power consumption, easy to maintain, reliable and very long lived. But they do not cost so little as to be a non-issue: prices typically run half to three quarters the cost of a traditional desktop, and while thin clients tend to last up to twice as long in the field, this remains neither a trivial initial acquisition cost nor a trivial long term investment.
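To make that "singular purpose" concrete, the entire job of a thin client's lean operating system can be sketched as a single loop: start a full screen remote session, and restart it whenever it ends. The sketch below is purely illustrative; the server name, the login and the choice of FreeRDP's xfreerdp client are assumptions for the example, not any vendor's actual firmware.

```typescript
// kiosk.ts - an illustrative sketch of a thin client's single job: connect,
// display, reconnect. All names and choices here are hypothetical.
import { spawnSync } from "child_process";

const BROKER = "vdi.example.internal"; // hypothetical VDI host or connection broker
const USER = "kiosk";                  // hypothetical kiosk login

// Loop forever: whenever the session ends (logoff, crash, network drop),
// simply reconnect. All real processing stays on the remote server; the
// local device only handles networking, display and input.
while (true) {
  spawnSync("xfreerdp", [`/v:${BROKER}`, `/u:${USER}`, "/f"], {
    stdio: "inherit", // hand the local display and input over to the session
  });
}
```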

Because of the remaining high cost of traditional thin clients, a more modern replacement, the zero client, has arisen to address it. Zero client is not a strict term; a zero client is really just a class of thin client that has removed traditional CPU-based processing in favor of dedicated, very low cost remote graphical processing, making the device essentially nothing more than a display adapter attached to a network. Doing so reduces power needs, management needs and manufacturing costs, allowing for a much lower cost endpoint device. Zero clients offer fewer potential features than thin clients, which can often run their own apps, such as a web browser, locally; a zero client has no local processing at all, but this is often a good thing rather than a bad thing. Supporting zero clients is a new breed of remote graphical protocols often associated with them, such as PCoIP.

Of course, going in the other direction, we can use full fat clients (e.g. traditional desktops and laptops) as our clients. This generally only makes sense if the desktops are remnants of a previous infrastructure being repurposed as remote graphical access points, or if the infrastructure is a hybrid in which users use the desktops for some purposes and the VDI or terminal services for others. In some cases where thin clients are desired and fat clients are available at low cost, such as older off-lease units, fat clients can still make financial sense, but those use cases are limited. It is extremely common to use existing fat clients during a transition phase and then to migrate to thin or zero clients once a desktop refresh point has been reached, or on a machine-by-machine basis as the machines require maintenance.

Today other options exist, such as using phones, tablets and other mobile devices as remote access points, but these are generally special cases rather than the norm due to a lack of good input devices. Still, use cases do exist and you will see this from time to time. As devices such as Android-based desktops become more common on the market, this may become more standard, and we may even see some rather unexpected situations in which devices like advanced Android-based desktop phones are used as a phone and a thin client at once. The more likely scenario is that convertible cell phones that can double as lightweight desktop devices when docked will become popular thin client choices.

The last hardware consideration is that of BYOD, or “Bring Your Own Device.” When moving to VDI and/or terminal services infrastructures, the ability to leverage employee devices becomes very attractive. There are legal and logistical complications with employees supplying their own access devices, but there are huge benefits as well, such as happier employees, lower costs and more flexibility. Using remote graphical displays rather than exposing data directly vastly reduces security risk and changes how we can approach accessing and exposing internal systems.

When looking at VDI it is easy to become caught up in the move of processing resources from local to server and to overlook that hardware costs remain, and generally remain quite significant, at the per user, “on the desktop” level. Pricing out VDI is not as simple as weighing the cost of a VDI server against the cost of the desktops it replaces. The cost reduction per desktop must be determined in each case; it can easily be significant, but it can just as easily be trivial. The cost of desktops or desktop replacement hardware will continue to be a large part of the per user IT budget even with VDI solutions.
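As a back-of-the-envelope illustration (every price and lifespan below is a hypothetical assumption, not a quote), the per-user arithmetic might look like this:

```typescript
// vdi-cost.ts - hypothetical numbers only; the point is the shape of the
// math, not the figures themselves.
const desktop  = { price: 800, lifeYears: 4 }; // traditional fat client
const thin     = { price: 500, lifeYears: 8 }; // thin client endpoint
const vdiShare = { price: 250, lifeYears: 4 }; // per-user share of VDI servers and licensing

const fatPerYear = desktop.price / desktop.lifeYears;  // $200 per user per year
const vdiPerYear =
  thin.price / thin.lifeYears +                        // $62.50 per user per year
  vdiShare.price / vdiShare.lifeYears;                 // + $62.50 per user per year

console.log(`fat client: $${fatPerYear} per user per year`); // $200
console.log(`VDI + thin: $${vdiPerYear} per user per year`); // $125
```

Even in this favorable hypothetical, the endpoint hardware is still half of the total VDI figure; the “on the desktop” line item shrinks but never disappears.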

State of Thin Clients

The IT world loves to swing back and forth between moving processing out to the user via fat clients and moving it back to the server, leaving users with thin clients. The battle is a long running one that started with the first appearance of multiuser computer systems several decades ago; it continues to this day and will likely continue for a very long time to come.

When I began working in IT, thin clients were simple text terminals attached to a single, central server via serial connections. Limited to very basic text input, these served their purpose at the time: providing relatively low cost computing to a large number of users. The system wasn’t pretty or glamorous, but it was quite functional.

These ancient terminals gave way to the personal computer, and computing power shifted from the datacenter to the desktop, allowing users to run powerful apps like Lotus 1-2-3 and WordPerfect. Responsive graphical applications were a powerful draw for decentralized processing, and users were enthralled with the new usability. The text terminal went into very rapid decline.

Eventually centralized power was available in such quantities and at such a low price point that graphical applications could be run almost as responsively from the server, while clients could be “thin,” needing just a shim of an operating system – enough to provide remote access back to the server. Thin computing became the darling of the industry again, the term itself arose, and centralized processing came back into vogue.

Administrators love the central computing model because data and configuration remain in one place. Backups and management are a breeze. The idea, at least in theory, is that desktop support becomes a non-issue, with all desktop clients being nothing more than commodity components that can be replaced at any time with completely interchangeable parts. Since nothing is stored or configured on the desktop, there is nothing to support there.

In the initial swings of the “thin computing pendulum” the market movement was dramatic. When text terminal computing first became available it was practically the only model used in the real world; the value was so dramatic that no one could really justify doing anything else. When the PC was introduced, the movement to the fat client was so ubiquitous that many younger IT professionals today have never actually seen text terminals in use – even though the move to fat “PC” clients was not as all-encompassing as the move to text terminals had been one pendulum swing earlier.

The PC model was generally better for end users because it mimicked how they used computers at home – those that had computers at home. It also gave them more options for customization and, for better or for worse, the opportunity to begin installing software of their own rather than only the software preconfigured for them on the central server.

Over time there have been many developments in both camps, giving each more and more of the other’s advantages. Central domain services such as Microsoft’s Active Directory have come along, allowing central management to extend out to fat clients and bringing control and management more in line with traditional thin computing models. Likewise, companies like Citrix have worked very hard developing new technologies that allow thin clients to perform much more like robust fat clients, making their use as seamless as possible for end users and even making offline use possible for laptop users.

Most shops today have adopted hybrid models: fat clients where they make sense, and thin clients for certain categories of users, for remote workers and for continuity of business scenarios.

Over the past decade we have seen a shift in the way that business applications are created and deployed. Today almost all new business applications are web-based and have no client platform dependency. This affords today’s IT departments a potential new opportunity: to shift from a traditional thin client platform, which requires remote graphical access, to the browser as the new thin client platform.

The move to web apps has happened slowly. Most businesses have a rather large legacy codebase on which they are quite dependent and which cannot be easily transferred to the new web app architecture, and some apps simply are not good candidates for it. But by and large the majority of new business applications are web based, written most often in Java or .NET, and these apps are prime candidates for a new thin computing model.

If our custom business apps are available via the browser, then the only commonly used apps still holding us back are the traditional productivity apps, such as the office suites, that are widely used by nearly all staff today (if they have a computer at all). Very few desktop apps other than these are actually pervasive. Increasingly we are seeing browser-based alternatives to the traditional office suites. Everyone is very aware of Google Apps as a pioneer in this area, with Microsoft now offering online MS Office as well. But the popular offerings making consumer news headlines require businesses to totally rethink long term strategies around keeping critical business data within their own walls, and they are not likely to be highly disruptive to the enterprise for quite some time.

What does pose a threat to the status quo is alternative software such as ThinkFree Office, which is installed within the organization and used and secured internally just like any other normal business application. This category of “traditionally installed internal web applications” will allow enterprise IT departments to begin to reconsider their end users’ platforms without having to reevaluate their entire concept of IT in general. The biggest barriers today are lingering legacy business applications and power users who depend on specific desktop apps that cannot be encapsulated within a browser.

One of the great advantages, however, of the browser as the new thin client is how simple it is to mix browser-based apps with traditional apps. The move is transparent, and most large businesses are moving in this direction today even without an overarching strategy to do so. The market momentum to develop all new apps for the web is causing this to happen naturally.

Another key advantage of a completely “web based” architectural model is the great ease with which it can be exposed to users outside of the corporate network. Instead of using cumbersome VPN clients and company laptops, employees can open any web browser, anywhere, sign in to the company network and have secure business applications delivered to them.

Bringing this almost unnoticed shift into sharp relief today are a handful of, of all things, consumer devices: Apple’s iPhone and iPad and Google’s Android and ChromeOS platforms. What all of these devices have in common is a focus on being primarily thin web appliances – thin clients for consumers. With the majority of consumer computing focused on web connectivity, the need for anything else from a platform is nearly non-existent in the consumer market. This means that users who once brought their home PC experience to the office as their expectation of a computing environment will soon bring web-based thin computing as their new expectation.

When this shift happens, IT departments will need to rethink their internal application delivery strategy. The change doesn’t have to be dramatic if current development trends are followed and legacy systems are routinely updated. In fact, one of the great benefits of this new model is that traditional fat clients function very well as browser platforms and will most likely do so for a very long time to come. Companies adopting this model will likely be able to slow desktop purchasing cycles and prepare to purchase some form of traditional thin client with an embedded browser, or move to a business version of the new nettop trend that we are beginning to see emerge in the consumer space. Some businesses may even attempt the rather dangerous path of using consumer devices, but the lack of management and security features will likely keep this from becoming popular in all but rare instances.

I believe, though, that this swing of the pendulum will not be as dramatic as the last one, just as that one was not as dramatic as the swing before it. It will be an important trend, but IT departments understand more and more that no new technological shift is a silver bullet and that each new opportunity brings new challenges. Most IT departments will need to implement some degree of browser-based thin computing over the next few years, but most will retain a majority user base of fat clients. Hybrid environments, as we have seen for many years with more traditional models, will continue as before, with each technology used in the areas where it makes the most sense.

The one area where thin clients continue to be challenged the most is mobile computing, where disconnected users end up digitally marooned away from their company networks, unable to continue working until network connectivity is reestablished. This is a significant issue for power users who must travel extensively and need to be able to continue working regardless of their current connectivity. Today this is being solved in the traditional thin client arena thanks to companies like Citrix, who continue to advance the state of the art in thin application delivery.

In the browser-based arena we have had to turn to technologies like Google Gears and Adobe AIR in the past to make this possible, but these saw poor market penetration. Coming down the pike, however, is the new HTML 5 offline API, which is set to redefine how the web works for users who need to go “off the grid” from time to time. With HTML 5 incorporating offline capabilities and a richer feature set into the specification for the web itself, we expect to see broad and rapid adoption from all of the leading vendors – most likely even before the draft standard is finalized. While still quite some ways away, this new standard will certainly lay the groundwork for a significant shift toward the browser as a ubiquitous, standard and robust platform.
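As a minimal sketch of what that draft offline capability looked like from inside a web application: the browser fires connectivity events, and the application cache keeps the app shell available locally. (The applicationCache API shown here belonged to that era's specification and has since been superseded; this is illustration, not a recommendation.)

```typescript
// offline.ts - a sketch of the HTML 5-era offline hooks: connectivity events
// plus the (since deprecated) application cache for a locally stored app shell.
window.addEventListener("offline", () => {
  // Digitally marooned: keep working against locally cached data.
  console.log("Offline - queuing changes locally.");
});

window.addEventListener("online", () => {
  // Connectivity restored: sync queued work back to the company's servers.
  console.log("Online - syncing queued changes.");
});

window.applicationCache.addEventListener("updateready", () => {
  // A newer cached copy of the application shell has downloaded; swap it in.
  window.applicationCache.swapCache();
});
```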

The future of thin computing looks incredibly promising, both in the enterprise and, for the first time, in the consumer arena. Adoption of thin computing models will be spurred on by the current movement toward Software as a Service, and SaaS adoption will in turn continue to be encouraged by the widespread presence of thin computing devices. In many ways browser-based thin computing represents the technology side of the SaaS arena that is now maturing, while SaaS itself matures in social acceptance rather than technical feasibility.