
Decision Point: VDI and Terminal Services

Two basic concepts vie for prominence, if technologies care about prominence, when it comes to remote graphical desktop interfaces: VDI (virtual desktop infrastructure) and terminal services.  The idea of both is simple: put the resources and processing on a server and have end users access the graphical interface remotely over a network.  What separates VDI and TS fundamentally is that with terminal services the remote server is a one to many experience, with many users getting their desktops from a single operating system image, while with VDI each user gets a dedicated server all of their own (presumably virtualized) and there is no sharing of the individual operating system's resources.

There is a certain amount of assumption, driven partially by the naming conventions, that VDI implies a desktop operating system rather than a server one, but the technology implies no such thing.  In fact, outside of the Windows world there truly is no separation between desktop and server operating systems, so having such a distinction at the technology level would not make sense.  It is important to remember, however, that Microsoft defines VDI licensing by the use of different OS license options and most VDI is for Windows operating systems.  So while on the technical side there is no distinction, on the Microsoft licensing side the distinctions are heavy and, in a practical sense, important to keep in mind.

Of the two, VDI is the newer concept.  Terminal services have been around for decades, are well known and are anything but exciting or flashy today.  They predate Windows, are common to nearly every operating system family and are so common in the UNIX world that they are often used without note.  Terminal services are the GUI continuation of the old “green screen” terminals that have been used since the “olden days” of computing.  Back then the terminals were often serial connected VT100 terminals; today we use TCP/IP networking and protocols capable of carrying graphics, but the concept remains the same: many users on a single server.

With VDI we accomplish the same goals but do so by giving each user all of their own resources.  Their OS is completely their own, not shared with anyone.  This means that all of the overhead of memory management, CPU management, process tables, copies of libraries and the like is duplicated for every individual user.  That is a lot of overhead.  Consider all of the resources that an idle graphical desktop requires just to boot up and wait for the user – it can be quite a bit.  Newer Windows operating systems have been getting leaner and more efficient, probably to make them more viable on VDI infrastructures, but the overhead remains a significant factor.  VDI was not really practical until virtualization made it so, so in any practical sense it is a new use of technology and is often misunderstood.

What we face now is, when deciding on a remote computational infrastructure, choosing between these two architectural ideas.  Of course, it should be noted, these two can co-exist very easily and it would often be appropriate to do this.  In smaller shops it would be very easy for the two to co-exist on the same physical platform, in fact.  There are many factors here that we need to consider and this decision process can actually be rather complicated.

One of the biggest factors that we must consider is software compatibility.  This is the largest driver of the move to VDI rather than terminal services.  In the Windows world it is not uncommon for applications to require things such as a desktop operating system signature (refusing to run on server OS variants), single user environments, users with administrator level privileges, users running under specific accounts, or library requirements that conflict with other packages.  Because of these issues, many companies look to VDI to mimic the way individual desktops work, where these issues were easily overlooked because each user was running in a discrete environment.  VDI brings this same functionality to the remote access world, allowing problem child applications to be catered to as needed.  Isolation of the OS adds a layer of protection.

This driving factor essentially does not exist outside of the Windows world and is primarily why VDI has never taken hold in any other environment.  While easily achievable with Linux or FreeBSD, for example, VDI has little purpose or value in those cases.

A major concern with VDI is the extreme overhead necessary to manage many redundant operating systems, each with its own duplicated processes, storage and memory.  In the early days this made VDI incredibly inefficient.  More recently, however, advanced VDI systems, primarily centered around virtualization platforms and storage, have addressed many of these issues by deduplicating memory and storage, using common master images and other techniques.  In fact, contrary to most assumptions, it can even be the case that VDI outperforms traditional terminal services for Windows because the hypervisor platform can handle memory management and task switching more efficiently than Windows itself (a phenomenon first observed in the early 2000s, when in some cases Windows would run faster virtualized on top of Linux because memory management could be partially handed off to the more efficient Linux system underneath).  This is definitely not always the case, but the improvements in VDI handling have come so far that the two are often quite close.  Again, however, this is a factor making VDI more attractive in the Windows world but not as dramatically in the non-Windows world, where native OS task management is typically more efficient and VDI would remain unnecessary overhead.
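
To make the deduplication idea concrete, here is a minimal Python sketch of the content-hashing approach that such systems use conceptually; the page data and sizes are hypothetical and this is not how any particular hypervisor or storage product actually implements it.

```python
# Minimal sketch of content-based deduplication (hypothetical data; not any
# specific hypervisor's or storage product's implementation).
import hashlib

PAGE_SIZE = 4096  # assume 4 KiB pages or blocks for this illustration

def deduplicate(pages):
    """Keep one stored copy per unique page; map each logical page to a hash."""
    unique = {}   # content hash -> stored page
    refs = []     # each logical page becomes a reference to a stored hash
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        if digest not in unique:
            unique[digest] = page
        refs.append(digest)
    return unique, refs

if __name__ == "__main__":
    # Ten identical "OS" pages plus two distinct user pages: twelve logical
    # pages collapse to three unique pages actually stored in memory or on disk.
    os_page = b"\x00" * PAGE_SIZE
    pages = [os_page] * 10 + [b"\x01" * PAGE_SIZE, b"\x02" * PAGE_SIZE]
    unique, refs = deduplicate(pages)
    print(f"{len(refs)} logical pages stored as {len(unique)} unique pages")
```

The point is simply that across many nearly identical Windows instances most operating system pages are duplicates and collapse in exactly this way, which is where the bulk of the savings comes from.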

Another area where VDI has consistently been shown to be more capable than terminal services is in graphically rich rendered environments such as CAD and video editing.  The same areas that still lean heavily towards dedicated hardware tend to move to VDI rather than terminal services because of the heavy investment in GPU capabilities within VDI solutions.  This is not a universal scenario, but for situations where heavy graphical rendering needs to take place it is worth investigating the possibility that VDI will perform significantly better.

Because of how VDI is managed, it is often reserved for very large deployments where the scale, in number of end users included in the solution, can be used to overcome some of the cost of implementation.  Terminal services, however, due to their more scalable cost, can often be implemented for smaller environments or subsets of users more cost effectively.  Neither is common for a very small environment of only a few users, although a strange phenomenon of manually managed VDI would likely make VDI more effective than terminal services for an exceptionally tiny number of users, perhaps fewer than ten, where VDI is treated more like individual servers rather than as a unified VDI environment.

With only the rarest of exceptions, primarily due to the licensing overhead created by the Windows desktop ecosystem in a virtualized setting, the de facto starting position for remote access end user systems is terminal server technology, turning to the more complicated and more costly VDI solutions only when terminal services prove unable to meet the technical requirements of the scenario.  For all intents and purposes, VDI is a fallback, brute force method to make end user virtualization work where the preferred methods have come up short.

The Physical Interaction Considerations of VDI

VDI (Virtual Desktop Infrastructure) is different from traditional virtualization of servers because, unlike servers which provide services exclusively onto a network, desktops are a point of physical interaction with end users.  There is no escaping the need for there to be physical equipment that the end users will actually touch.  Keyboards, mice, touchscreens, monitors, speakers… these things cannot be virtualized.

Because of this VDI faces much more complicated decision making and planning than virtualizing servers would.  VDI physical requirements can have a wide variety of solutions.

Traditionally we approached the physical interaction needs of VDI and terminal servers through the use of thin clients.  Thin clients sit on the network and utilize the same protocols and techniques we would use for normal remote graphical access, with protocols like NX, ICA, RDP and VNC.  A thin client runs a full operating system, but one that is very lean and that has a singular purpose – to manage connections to other machines.  The idea of the thin client is to keep all processing remote and only have the necessary components on the local hardware to handle networking and local interactions.  Thin clients are relatively low cost, low power consumption, easy to maintain, reliable and have very long lifespans.  But they do not cost so little as to not be a concern: prices are typically half to three quarters the cost of a traditional desktop, and while thin clients tend to last up to twice as long in the field, this remains neither a trivial initial acquisition cost nor a trivial long term investment.
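
As a rough illustration of why thin client hardware remains a real line item, here is a small Python sketch annualizing the ranges mentioned above; the dollar figures and lifespans are hypothetical placeholders chosen only to show the shape of the math.

```python
# Rough annualized hardware cost comparison between a traditional desktop and
# a thin client. All prices and lifespans are hypothetical placeholders.
def annualized_cost(purchase_price, lifespan_years):
    return purchase_price / lifespan_years

desktop_price = 600.0               # hypothetical desktop purchase price
desktop_life = 4                    # hypothetical desktop lifespan in years

# Thin client: roughly half to three quarters of the desktop price,
# lasting up to twice as long in the field (per the ranges above).
thin_price = desktop_price * 0.625  # midpoint of the 50-75% range
thin_life = desktop_life * 2

print(f"Desktop:     ~${annualized_cost(desktop_price, desktop_life):.0f} per year")
print(f"Thin client: ~${annualized_cost(thin_price, thin_life):.0f} per year")
# Even at roughly a third of the desktop's annualized cost, the thin client
# is still a real per-seat expense rather than a rounding error.
```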

Because of the remaining high costs of traditional thin clients, a more modern replacement, the zero client, has arisen as a fix to those issues.  Zero client is not a strict term; it is truly just a class of thin client, but one that has removed traditional processing involving a CPU and moved to dedicated, very low cost remote graphical processing that is essentially nothing more than a display adapter attached to a network.  Doing so reduces the power needs, management needs and manufacturing costs, allowing for a much lower cost end point device.  Zero clients offer fewer potential features than thin clients, which can often run their own apps, like a web browser, locally; with a zero client there is no local processing at all, but this is often a good thing rather than a bad thing.  Zero clients are also supported by a new breed of remote graphical protocols often associated with them, such as PCoIP.

Of course, going in the other direction, we can use full fat clients (e.g. traditional desktops and laptops) as our clients.  This generally only makes sense if the desktops are remnants of a previous infrastructure and are only being repurposed as remote graphical access points, or if the infrastructure is a hybrid and users use the desktops for some purposes and the VDI or terminal services for others.  In some cases where thin clients are desired and fat clients are available at low cost, such as older off lease units, fat clients can still make financial sense, but the use cases there are limited.  It is extremely common to use existing fat clients during a transition phase and then to migrate to thin or zero clients once a desktop refresh point has been reached, or on a machine by machine basis as the machines require maintenance.

Today other options do exist, such as using phones, tablets and other mobile devices as remote access points, but these are generally special cases and not the norm due to a lack of good input devices.  But use cases do exist and you can see this from time to time.  As devices such as Android-based desktops become more common on the market we may find this becoming more standard, and we may even see some rather unexpected situations where devices like advanced desktop phones running Android are used as a phone and a thin client device at once.  The more likely situation is that convertible cell phones that can double as lightweight desktop devices when docked will be popular thin client choices.

The last hardware consideration is that of BYOD or “Bring Your Own Device.”  When moving to VDI and/or terminal services infrastructures, the ability to leverage employee devices becomes far more practical.  There are legal and logistical complications with employees supplying their own access devices, but there are huge benefits as well, such as happier employees, lower costs and more flexibility.  The use of remote graphical displays rather than exposing data directly vastly reduces security risk and changes how we can approach accessing and exposing internal systems.

It is easy to become caught up in the move of processing resources from local to server when looking at VDI and to overlook that hardware costs remain, and generally remain quite significant, on a per user “on the desktop” level.  Pricing out VDI is not as simple as weighing the cost of a VDI server against the cost of the desktops it replaces.  The cost reduction per desktop must be determined; it can easily be significant, but it can just as easily be fairly trivial.  The cost of desktops or desktop replacement hardware will continue to be a large part of the per user IT budget even with VDI solutions.
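
A back-of-the-envelope model, sketched here in Python with purely hypothetical figures, shows the shape of that calculation: the endpoint device, the per-user share of the VDI hosts and storage, and the licensing all have to be counted, not just the server that replaces the desktops.

```python
# Back-of-the-envelope per-user cost comparison between keeping traditional
# desktops and moving to VDI. Every figure is a hypothetical placeholder;
# the point is the shape of the calculation, not the numbers themselves.
users = 100

# Traditional desktop model
desktop_per_user = 600.0        # hypothetical desktop purchase price

# VDI model: the endpoint does not disappear, it just (maybe) gets cheaper
thin_client_per_user = 350.0    # hypothetical thin or zero client price
server_and_storage = 25_000.0   # hypothetical VDI host plus shared storage
licensing_per_user = 100.0      # hypothetical per-user licensing cost

vdi_per_user = (thin_client_per_user
                + server_and_storage / users
                + licensing_per_user)

print(f"Desktop model: ~${desktop_per_user:.0f} per user")
print(f"VDI model:     ~${vdi_per_user:.0f} per user")
# With these placeholder numbers the endpoint hardware is still the single
# largest per-user line item in the VDI column.
```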

Understanding Virtual Desktop Infrastructure

VDI (or Virtual Desktop Infrastructure) has been all the rage in IT circles for the past couple of years. Once the virtualization of servers became the norm, looking to desktops as the next frontier in virtualization was natural. Unlike servers, however, desktops are affected by several major factors that need to be addressed and considered before we simply jump on the VDI bandwagon. VDI is a great and wonderful technology, but like any technology it has its place and needs to be considered carefully.

First we have to look at an important concept that affects VDI – the idea of shared computing. With servers we naturally assume that the servers and the services that they provide are not for one to one consumption but will be utilized by multiple users. This concept exists and has always existed in the world of desktops and is often referred to as terminal services. Terminal Servers are the server world’s answer to the need for centralized desktop resources and have been standard since literally before desktops even existed.

It is an interesting bit of computing history that brings us to how the Windows world interacts with the concept of terminal servers. Most operating systems, and nearly all that remain in use today, are designed from the ground up as multi-user systems and always have been. The idea that one user would sit in front of the computer as the “main” user while others were second class citizens, or did not exist at all, was not really there; all users were considered equal. Windows, unlike the UNIX family for example, came from a world of single user systems originating with DOS and DOS/Windows, which were not multi-user. When Windows NT was developed as a multiuser system, a great many software developers treated it as they always had, making software that would not function well, or often not at all, in a multiuser mode.

This software ecosystem that is unique to Windows (it could effectively never exist on Linux, for example, because such software would simply be seen as broken due to the way that the ecosystem and deployments exist) has created an interesting problem. Some software and some tasks are easily addressed by the use of terminal servers, identically to how they would be tackled by any UNIX OS, while many other applications cannot be addressed using a terminal server and require a dedicated operating system instance for every user.

It is this historical factor, leading to a significant difference in software ecosystems, that created the fundamental need for VDI and explains why VDI arose as a unique phenomenon within the Windows world and remains, for all intents and purposes, exclusive to it. So it is very important to understand that VDI arose conceptually as a means of addressing a need that existed only because of a deficiency in third party applications, not because of an intrinsic nature of Windows itself in its current form or because VDI was a better approach to virtualizing or hosting end user desktop services. In fact, we could even look at VDI as an unfortunate kludge, only needed in situations where we want to virtualize or centralize desktop resources and where some of the software needed on those systems cannot run in a multiuser mode. VDI is a fallback mechanism for special situations and not a desired approach to virtualized or centralized end user services.

It is important to note that, due to the widespread use of VDI and the necessity of it, the investment into support technologies around VDI has led to the possibility that in many cases VDI might actually outperform terminal servers, even though architecturally this seems almost impossible. Basically this is happening because the incredible amount of research and development going into the hypervisor layer may be outpacing the same components in the operating system itself, making for potentially better CPU and memory management and resource sharing. This is completely dependent on the unique situation, of course, as every OS, every hypervisor and every set of VDI tools is unique, as are the workloads being tested, so mileage will vary significantly.

Also of serious consideration is, because of the aforementioned Windows-centric nature of the VDI concept, licensing. If we were to look at VDI from the Linux perspective we would have little to no licensing concerns and VDI would be up against traditional terminal services based on technical merits alone, but this is effectively never the case. The biggest single factor around VDI decision making is Microsoft licensing.

VDI licensing is both expensive and complex. Companies wanting to consider the virtualization of Windows desktop resources have to carefully weigh the benefits against both the large cost of appropriate licensing and the potentially large overhead of license management. Moving into VDI will likely mean lots of expensive IT time dedicated to license research, monitoring and training, which is an often overlooked aspect of licensing costs.

VDI is a somewhat difficult concept to speak about in generalities because it is a slightly amorphous topic. If we virtualize a desktop, does it not become a server? If we use an operating system intended for server use, does that change what is and is not VDI? Is VDI based around use cases, licensing or product categories?

The real answer lies in the fact that to the industry VDI is technically one thing, but in practical terms to Microsoft, the only key licensing player in the space, it means something somewhat different. VDI is technically the virtualization of one to one “graphical end user” instances – meaning a single virtual machine being used by a single user much as a traditional, physical desktop or laptop would be used. To Microsoft, whose concerns are slightly different from those of the industry, the term refers to the virtualization of Windows “desktop class” operating systems. If you virtualize Windows “server class” operating systems, Microsoft does not view you as doing VDI. So we have to understand these two views of the concept to keep from becoming confused. In fact, using Windows Server OSes to get around the VDI licensing needs of Windows desktops has become very standard and common. However, we have to remember the kludge nature of VDI: while this does solve the failure to write software that is multiuser in nature, it does not address the very real potential that software was written with expectations of desktop-branded operating systems, and we are somewhat likely to find end user software that is either locked (intentionally or unintentionally) to desktop operating systems only or is potentially licensed only on those platforms.

The last major consideration around VDI decision making is that, unlike servers which when virtualized are completely virtualized, a desktop cannot be treated in the same way because there is always a physical component to it. The end user will always need a monitor to look at, a keyboard to type on, speakers to listen to, and so on. So when we are looking to move to VDI we must take care not to overlook the fact that we are not eliminating the need to purchase and maintain desktops, we are simply moving where the operating system will reside. We may redeploy older hardware to be used for remote access, move to thin clients or the newly termed and mostly meaningless zero clients, or have in-use “fat clients” pull double duty, acting as remote access clients while also providing their own desktop services.

Certainly virtualizing the desktop offers us many great opportunities and much value if we are doing it for the right reasons and understanding the hows, whys and whens of VDI. It has sadly, like so many technology trends, become a knee-jerk reaction to want to move to VDI without performing proper evaluations and developing a clear picture as to how VDI will fit into our own environments. If we lack a clear reason for choosing VDI it will be very unlikely that we deploy it in a positive manner.

Finally it is very important that we consider the necessary skill sets that will be required in order to properly move to VDI. From a purely technical standpoint, throwing a Windows 10 VM onto Hyper-V constitutes VDI, but from a practical perspective this is not how effective VDI will be designed. VDI not only requires the special licensing knowledge that I mentioned above but will typically involve rather unique knowledge of modern and very specialized VDI toolsets and products, shared storage as it applies to VDI, remote access protocols, thin clients or zero clients, and more. VDI deployments tend to be among the most technical and unique components of an infrastructure, leading to a great number of unknowns and challenges for any organization.

State of Thin Clients

The IT world loves to swing back and forth between moving processing out to the user via fat clients and moving processing back to the server leaving users with thin clients.  The battle is a long running one that started with the first appearance of multiuser computer systems several decades ago and has continued to this day and will likely continue for a very long time to come.

When I began working in IT, thin clients were simple text terminals attached to a single, central server via serial connections.  Limited to very basic text input these served their purpose at the time to provide relatively low cost computing to a large number of users.  The system wasn’t pretty or glamorous, but it was quite functional.

These ancient terminals gave way to the personal computer and computing power shifted from the datacenter to the desktop allowing users to run powerful apps like Lotus 1-2-3 and WordPerfect.  Responsive graphical applications were a powerful draw for decentralized processing.  Users were enthralled with the new usability.  The text terminal went into very rapid decline.

Eventually centralized power was available in such quantities and at such a low price point that graphical applications could be run with almost as much responsiveness from the server, while clients could be “thin,” needing just a shim of an operating system – enough to provide remote access back to the server.  Thin computing became the darling of the industry again, the term itself arose, and moving towards centralized processing came back into vogue.

Administrators love the central computing model because data and configuration remains in one place.  Backups and management are a breeze.  The idea, at least in theory, is that in doing so desktop support becomes a non-issue with all desktop clients being nothing more than commodity components that can be replaced anytime with completely interchangeable parts.  Since nothing is stored or configured on the desktop there is nothing to support there.

In the initial swings of the “thin computing pendulum” the market movement was dramatic.  When text terminal computing first became available this was practically the only model used in the real world.  The value was so dramatic that no one could really justify doing anything else.  When the PC was introduced the movement to the fat client was so ubiquitous that many younger IT professionals today have never actually seen text terminals in use even though the move to fat “PC” clients was not as all encompassing as the move to text terminals had been one pendulum swing previous.

The PC model was generally better for end users because it mimicked how they used computers at home – those that had computers at home.  It also gave them more options for customization and, for better or for worse, opportunity for them to begin installing software of their own rather than only that software preconfigured for them on the central server.

Over time there have been a lot of developments from both camps giving each more and more advantages of the other.  Central domain services such as Microsoft’s Active Directory have come along allowing central management to extend out to fat clients bringing control and management more in line with traditional thin computing models.  Likewise, companies like Citrix have worked very hard developing new technologies that allow thin clients to perform much more like robust fat clients making their use as seamless as possible for end users and even making offline use possible for laptop users.

Most shops today have adopted hybrid models: fat clients where they make sense, and thin clients for certain categories of users, for remote workers and for continuity of business scenarios.

Over the past decade we have seen a shift in the way that business applications are created and deployed.  Today almost all business applications are web-based and have no client platform dependency.  This affords the IT departments of today a potential new opportunity – to shift from a traditional thin client platform, which requires remote graphical access, to the browser as the new thin client platform.

The move to web apps has happened slowly and most businesses have a rather large legacy codebase on which they are quite dependent that cannot be easily transferred to the new web app architecture and some apps simply are not good candidates for this architecture.  But by and large the majority of new business applications are web based, written most often in Java or .NET, and these apps are prime candidates for a new thin computing model.

If our custom business apps are available via the browser then the only commonly used apps holding us back are the traditional productivity apps, such as the office suites that are widely used by nearly all staff today (if they have a computer at all).  Very few desktop apps are actually pervasive except for these.  Increasingly we are seeing browser-based alternatives to the traditional office suites.  Everyone is very aware of Google Apps as a pioneer in this area, with Microsoft now offering online MS Office as well.  But the popular offerings making consumer news headlines require businesses to totally rethink long term strategies involving keeping critical business data within their walls and are not likely to be highly disruptive to the enterprise for quite some time.

What does pose a threat to the status quo is other alternative software products such as ThinkFree office which is installed within the organization and used and secured internally just like any other normal business application.  This category of “traditionally installed internal web applications” will allow enterprise IT departments to begin to reconsider their end users’ platforms without having to reevaluate their entire concept of IT in general.  The biggest barriers to this today are lingering business applications and power users using specific desktop apps that cannot be encapsulated within a browser.

One of the great advantages, however, of the browser as the new thin client is how simple it is to mix browser-based apps with traditional apps.  The move is transparent and most large businesses are moving in this direction today even if there is no overarching strategy to do so.  The market momentum to develop all new apps for the web is causing this to happen naturally.

Another key advantage of a completely “web based” architectural model is the great ease with which it can be exposed for users outside of the corporate network.  Instead of using cumbersome VPN clients and company laptops employees can find any web browser, sign in to the company network and have secure business applications delivered to any browser, anywhere.

Bringing this almost unnoticed shift into sharp relief today are a handful of, of all things, consumer devices such as Apple’s iPhone and iPad and Google’s Android and ChromeOS platforms.  What all of these devices have in common is a focus upon being primarily thin web appliances – thin clients for consumers.  With the majority of consumer computing focused upon web connectivity, the need for anything else from a platform is nearly non-existent in the consumer market.  This means that within a very short period of time, users who once brought their home PC experience to the office as their expectation of a computing environment will begin to bring web-based thin computing as their new expectation.

When this shift happens IT departments will need to rethink their internal application delivery strategy.  The change doesn’t have to be dramatic if current development trends are used commonly and legacy systems are routinely updated.  In fact, one of the great benefits of this new model is that traditional fat clients function very well as browser platforms and will most likely do so for a very long time to come.  Companies adopting this model will likely be able to slow desktop purchasing cycles and prepare for purchasing some form of traditional thin client with an embedded browser, or move to a business version of the new nettop trend that we are beginning to see emerge in the consumer space.  Some businesses may even attempt the rather dangerous path of using consumer devices, but the lack of management and security features will likely keep this from being popular in all but rare instances.

I believe, though, that this swing of the pendulum will not be as dramatic as the last one, just as that one was not as dramatic as the swing before it.  It will be an important trend, but IT departments understand more and more that no new technological shift is a silver bullet and that with each new opportunity comes new challenges.  Most IT departments will need to implement some degree of browser-based thin computing over the next few years but most will retain a majority user base of fat clients.  Hybrid environments, like we’ve seen for many years with more traditional models, will continue as before with each technology being used in target areas where it makes the most sense.

The one area where thin clients continue to be challenged the most is in mobile computing, where disconnected users end up digitally marooned away from their company networks, unable to continue working until network connectivity is reestablished.  This is a significant issue for power users who must travel extensively and need to be able to continue working regardless of their current connectivity.  Today this is being solved in the traditional thin client arena thanks to companies like Citrix, who continue to advance the state of the art in thin application delivery.

In the browser-based arena we have had to turn to technologies like Google Gears and Adobe AIR in the past to make this possible but these had poor market penetration.  Coming down the pike, however, is the new HTML 5 Offline API which is set to redefine how the web works for users who need to go “off the grid” from time to time.  With HTML 5 incorporating offline capabilities and a richer feature set into the specification for the web itself we expect to see broad and rapid adoption from all of the leading vendors – most likely even before the draft standard is finalized.  While still quite some ways away this new standard will certainly lay the groundwork for a significant shift towards the browser as a ubiquitous, standard and robust platform.

The future of thin computing looks to be incredibly promising in the enterprise and, for the first time, in the consumer arena as well.  Adoption of thin computing models will be spurred on by the current movement towards Software as a Service models, and SaaS adoption will continue to be encouraged by the widespread presence of thin computing devices.  In many ways browser-based thin computing represents the technology aspect that is now maturing in the SaaS arena, where SaaS itself is maturing in social acceptance rather than technical feasibility.