Decision Point: VDI and Terminal Services

Two basic concepts vie for prominence, if technologies care about prominence, when it comes to remote graphical desktop interfaces: VDI (virtual desktop infrastructure) and terminal services (TS).  The idea behind both is simple: put the resources and processing on a server and have end users access the graphical interface remotely over a network.  What fundamentally separates the two is how that remote server is shared.  With terminal services, the server is a one-to-many experience, with many users getting their desktops from a single operating system image.  With VDI, each user gets a dedicated server all their own, presumably virtualized, with no sharing of individual operating system resources.

There is a certain amount of assumption, driven partially by the naming conventions, that VDI implies a desktop operating system rather than a server one, but no such implication actually exists.  In fact, outside of the Windows world there truly is no separation between desktop and server operating systems, so a distinction at the technology level would not make sense.  It is important to remember, however, that Microsoft defines VDI licensing around different OS license options, and most VDI is for Windows operating systems.  So while VDI does not technically imply a desktop OS, in practice there is no distinction on the technology side but there are heavy distinctions on the Microsoft licensing side.

Of the two, VDI is the newer concept.  Terminal services have been around for decades, are well known, and are anything but exciting or flashy today.  They predate Windows, are common to nearly every operating system family, and are so ubiquitous in the UNIX world that they are often used without note.  Terminal services are the GUI continuation of the old “green screen” terminals used since the “olden days” of computing.  Back then the terminals were often serially connected VT100s; today we use TCP/IP networking and protocols capable of carrying graphics, but the concept remains the same: many users on a single server.

With VDI we accomplish the same goals but do so by giving each user all of their own resources.  Their OS is completely their own, shared with no one.  This means that all of the overhead of memory management, CPU scheduling, process tables, duplicate copies of libraries and so forth exists for every individual user, and that is a lot of overhead.  Consider all of the resources that an idle graphical desktop requires just to boot up and wait for a user; it can be quite a bit.  Newer Windows operating systems have been getting leaner and more efficient, probably in part to make them more viable on VDI infrastructures, but the overhead remains a significant factor.  VDI was not practical until virtualization made it so, which makes it, in any practical sense, a new use of technology and one that is often misunderstood.
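To get a rough feel for the scale of that duplication, a back-of-the-envelope comparison helps.  The sketch below is purely illustrative: the idle footprint and per-session figures are assumed placeholder numbers, not measurements, and real values vary widely by OS version and workload.

    # Hypothetical back-of-the-envelope comparison of idle memory overhead.
    # All figures below are illustrative assumptions, not measurements.

    IDLE_DESKTOP_GB = 1.5    # assumed idle footprint of one full desktop OS (VDI)
    TS_BASE_GB = 2.0         # assumed base footprint of one shared terminal server OS
    TS_PER_SESSION_GB = 0.3  # assumed incremental cost of each TS session

    def vdi_memory(users: int) -> float:
        """Every VDI user boots a complete OS of their own."""
        return users * IDLE_DESKTOP_GB

    def ts_memory(users: int) -> float:
        """TS users share one OS; only session state is duplicated."""
        return TS_BASE_GB + users * TS_PER_SESSION_GB

    for users in (10, 50, 200):
        print(f"{users:>4} users:  VDI {vdi_memory(users):7.1f} GB   "
              f"TS {ts_memory(users):7.1f} GB")

Even with generous assumptions in favor of the shared server, the per-user duplication on the VDI side dominates quickly as user counts grow.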

What we face now, when deciding on a remote computing infrastructure, is choosing between these two architectural ideas.  Of course, it should be noted that the two can co-exist very easily and it would often be appropriate for them to do so; in smaller shops they could even co-exist on the same physical platform.  There are many factors to consider here, and the decision process can actually be rather complicated.

One of the biggest factors to consider is software compatibility, and this is the largest driver of the move to VDI rather than terminal services.  In the Windows world it is not uncommon for applications to require a desktop operating system signature (refusing to run on server OS variants), to demand single-user environments, to need users to have administrator-level privileges or to run under specific accounts, or to carry library requirements that conflict with other packages.  Because of these issues, many companies look to VDI to mimic the way individual desktops work, where these problems were easily overlooked because each user ran in a discrete environment.  VDI brings that same functionality to the remote access world, allowing problem-child applications to be catered to as needed, and the isolation of each OS adds a layer of protection.

This driving factor essentially does not exist outside of the Windows world and is primarily why VDI has never taken hold in any other environment.  While easily achievable with Linux or FreeBSD, for example, VDI has little purpose or value in those cases.

A major concern with VDI is the extreme overhead needed to manage many redundant operating systems, each with its own duplicated processes, storage and memory.  In the early days this made VDI incredibly inefficient.  More recently, however, advanced VDI systems, primarily centered around the virtualization platform and storage layers, have addressed many of these issues by deduplicating memory and storage, booting from common master images and using other techniques.  In fact, contrary to most assumptions, VDI can even outperform traditional terminal services for Windows because the hypervisor may handle memory management and task switching more efficiently than Windows itself.  (This phenomenon was first observed in the early 2000s, when Windows would sometimes run faster virtualized on top of Linux because memory management could be partially handed off to the more efficient Linux system underneath.)  This is definitely not always the case, but the improvements in VDI handling have come so far that the two are often quite close.  Again, though, this is a factor making VDI more attractive in the Windows world; in the non-Windows world, where native OS task management is typically more efficient, VDI remains unnecessary overhead.
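On a Linux KVM host, for instance, the memory side of this deduplication is exposed through Kernel Samepage Merging (KSM).  Here is a minimal sketch, assuming a Linux host with the standard KSM sysfs interface, of checking roughly how much memory is being reclaimed across the guests:

    # Minimal sketch: report memory reclaimed by Kernel Samepage Merging (KSM)
    # on a Linux KVM host. Assumes the /sys/kernel/mm/ksm interface exists.

    from pathlib import Path

    KSM = Path("/sys/kernel/mm/ksm")
    PAGE_SIZE = 4096  # bytes; the common default page size on x86

    def ksm_stat(name: str) -> int:
        return int((KSM / name).read_text())

    if not KSM.exists():
        print("KSM interface not found; not a Linux host or KSM not built in.")
    else:
        shared = ksm_stat("pages_shared")    # unique pages kept in memory
        sharing = ksm_stat("pages_sharing")  # duplicate mappings folded into them
        saved_mib = sharing * PAGE_SIZE / 1024 ** 2  # rough savings estimate
        print(f"KSM running: {ksm_stat('run') == 1}")
        print(f"Approx. memory deduplicated: {saved_mib:.1f} MiB")

The more identical the guest images are, as with a fleet of VDI desktops cloned from one master, the more duplicate pages there are for the hypervisor to fold together.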

Another area where VDI has consistently been shown to be more capable than terminal services is graphically rich, rendering-heavy environments such as CAD and video editing.  The same workloads that still lean heavily towards dedicated hardware tend to move to VDI rather than terminal services because of the heavy investment in GPU capabilities within VDI solutions.  This is not a universal scenario, but in situations involving heavy graphical rendering it is worth investigating the possibility that VDI will perform significantly better.

Because of how VDI is managed, it is often reserved for very large deployments where the scale, in the number of end users included in the solution, can be used to overcome some of the cost of implementation.  Terminal services, due to their more scalable cost, can often be implemented in smaller environments or for subsets of users more cost effectively.  Neither is common in a very small environment of only a few users, although there is a strange exception: with an exceptionally tiny number of users, perhaps fewer than ten, manually managed VDI, treated more like individual servers than as a unified VDI environment, can actually be more effective than terminal services.
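Since economics drive this decision as much as the technology does, a simple break-even model makes the tradeoff concrete.  Every figure in this sketch is a hypothetical placeholder, not real pricing; the point is only the shape of the curves, a large fixed cost for VDI against a higher per-user cost for terminal services:

    # Hypothetical break-even model: fixed platform cost vs. per-user cost.
    # Every figure here is an illustrative placeholder, not real pricing.

    VDI_FIXED = 25_000.0  # assumed up-front VDI platform cost (storage, broker, etc.)
    VDI_PER_USER = 150.0  # assumed incremental cost per VDI desktop
    TS_FIXED = 4_000.0    # assumed cost of standing up a terminal server
    TS_PER_USER = 250.0   # assumed per-user cost (licensing, session resources)

    def cost(fixed: float, per_user: float, users: int) -> float:
        return fixed + per_user * users

    # Find where VDI's heavy fixed cost is amortized enough to win.
    for users in range(1, 500):
        if cost(VDI_FIXED, VDI_PER_USER, users) < cost(TS_FIXED, TS_PER_USER, users):
            print(f"With these assumed numbers, VDI breaks even at {users} users.")
            break
    else:
        print("TS is cheaper across the whole range with these assumptions.")

With any plausible numbers the pattern is the same: terminal services win at small scale and VDI only catches up once enough users are spread across its fixed platform costs.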

With only the rarest of exceptions, primarily due to the licensing overhead that the Windows desktop ecosystem creates in a virtualized setting, the de facto starting position for remote access end-user systems is to begin with terminal server technologies and turn to the more complicated and more costly VDI solutions only when terminal services prove unable to meet the technical requirements of the scenario.  For all intents and purposes, VDI is a fallback, a brute-force method to make end-user virtualization work where the preferred methods have come up short.
