All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves). Began transitioning to IT in 1994 with my first mixed role in system administration.

The Physical Interaction Considerations of VDI

VDI (Virtual Desktop Infrastructure) is different from traditional virtualization of servers because, unlike servers, which provide services exclusively over a network, desktops are a point of physical interaction with end users. There is no escaping the need for physical equipment that the end users will actually touch. Keyboards, mice, touchscreens, monitors, speakers… these things cannot be virtualized.

Because of this, VDI involves much more complicated decision making and planning than virtualizing servers does, and its physical requirements can be met by a wide variety of solutions.

Traditionally we approached the physical interaction needs of VDI and terminal servers through the use of thin clients. Thin clients sit on the network and utilize the same protocols and techniques that we would use for normal remote graphical access, such as NX, ICA, RDP and VNC. A thin client runs a full operating system, but a very lean one with a singular purpose: managing connections to other machines. The idea of the thin client is to keep all processing remote and to have only the components needed for networking and local interaction on the local hardware. Thin clients are relatively low cost, consume little power, are easy to maintain, are reliable and have very long lifespans. But they do not cost so little as to be of no concern: prices are typically half to three quarters the cost of a traditional desktop and, while thin clients tend to last up to twice as long in the field, this is neither a trivial initial acquisition cost nor a trivial long term investment.
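As a concrete illustration of how little a thin or zero client really needs to do locally, here is a minimal, kiosk-style launcher sketch in Python. It simply starts a full screen RDP session and reconnects if it drops. It assumes FreeRDP's xfreerdp client is installed, and the host name, user name and flag syntax are illustrative placeholders (flags vary by FreeRDP version), not a recommendation.

```python
#!/usr/bin/env python3
"""Kiosk-style sketch of the thin client idea: the local device runs only
enough software to start and maintain a remote desktop session."""
import subprocess
import time

VDI_HOST = "vdi.example.local"   # hypothetical connection broker or desktop
USERNAME = "jdoe"                # hypothetical user name

while True:
    # Launch a full screen RDP session and block until the user disconnects.
    subprocess.run(["xfreerdp", f"/v:{VDI_HOST}", f"/u:{USERNAME}", "/f"])
    # If the session ends or drops, pause briefly and reconnect,
    # much as a dedicated thin client device would.
    time.sleep(5)
```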

Because of the remaining high cost of traditional thin clients, a more modern replacement, the zero client, has arisen to address those issues. “Zero client” is not a strict term; it is really just a class of thin client that has removed traditional processing with a CPU in favor of dedicated, very low cost remote graphical processing, essentially nothing more than a display adapter attached to a network. Doing so reduces power needs, management needs and manufacturing costs, allowing for a much lower cost endpoint device. Zero clients offer fewer potential features than thin clients, which can often run their own applications, such as a web browser, locally; with a zero client there is no local processing, but that is often a good thing rather than a bad one. Supporting zero clients is a new breed of remote graphical protocols often associated with them, such as PCoIP.

Of course, going in the other direction, we can use full fat clients (e.g. traditional desktops and laptops) as our clients. This generally only makes sense if the desktops are remnants of a previous infrastructure and are being repurposed as remote graphical access points, or if the infrastructure is a hybrid and users use the desktops for some purposes and the VDI or terminal services for others. In some cases where thin clients are desired but fat clients are available at low cost, such as older off-lease units, fat clients can still make financial sense, but those use cases are limited. It is extremely common to use existing fat clients during a transition phase and then to migrate to thin or zero clients once a desktop refresh point has been reached, or on a machine by machine basis as the machines require maintenance.

Today other options exist, such as using phones, tablets and other mobile devices as remote access points, but these are generally special cases and not the norm due to the lack of good input devices. Use cases do exist, though, and you will see this from time to time. As devices such as Android-based desktops become more common on the market we may find this becoming more standard, and we may even see some rather unexpected situations where devices like advanced desktop phones running Android are used as a phone and a thin client at once. The more likely situation is that convertible cell phones that can double as lightweight desktop devices when docked will become popular thin client choices.

The last hardware consideration is that of BYOD or “Bring Your Own Device.” When moving to VDI and/or terminal services infrastructures, the ability to leverage employee-owned devices becomes very attractive. There are legal and logistical complications when employees supply their own access devices, but there are huge benefits as well, such as happier employees, lower costs and more flexibility. The use of remote graphical displays, rather than exposing data directly, vastly reduces security risk and changes how we can approach accessing and exposing internal systems.

It is easy to become caught up in the move of processing resources from local hardware to the server when looking at VDI and to overlook that hardware costs remain, and generally remain quite significant, on a per user, “on the desktop” level. Pricing out VDI is not as simple as comparing the cost of a VDI server against the cost of the desktops it replaces. The per desktop cost reduction must be determined; it can easily be significant, but it can just as easily be trivial. The cost of desktops or desktop replacement hardware will continue to be a large part of the per user IT budget even with VDI solutions.
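To make that comparison concrete, here is a back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a benchmark or a quote; the point is only that once endpoint hardware and the per user share of server, storage and licensing costs are amortized, the savings may be modest or even negative.

```python
"""Illustrative per user, per year cost comparison for traditional desktops
versus thin clients backed by VDI. All figures are assumptions made purely
for the sake of the arithmetic."""

# Traditional desktop: assumed $600 unit cost amortized over a 4 year life.
desktop_cost, desktop_life_years = 600, 4

# Thin client: assumed $350 unit cost over an 8 year life, plus an assumed
# per user share of VDI server, storage and licensing costs each year.
thin_client_cost, thin_client_life_years = 350, 8
vdi_infrastructure_per_user_year = 120

desktop_per_year = desktop_cost / desktop_life_years
vdi_per_year = (thin_client_cost / thin_client_life_years
                + vdi_infrastructure_per_user_year)

print(f"Traditional desktop: ${desktop_per_year:.2f} per user per year")
print(f"Thin client + VDI:   ${vdi_per_year:.2f} per user per year")
```

With these particular made-up numbers the VDI approach actually comes out slightly more expensive per user, which is exactly why the per desktop reduction has to be calculated rather than assumed.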

Business: The Context of IT

I would estimate that the vast majority of people working in the IT field come to it out of an interest in or even a passion for computers. Working in IT lets them play with many big, fast, powerful computers, networks, storage devices and more.  It’s fun.  It’s exciting.  We tend to love gadgets and technical toys.  We love overseeing the roaring server room or datacenter.  This is, almost universally, true of IT people everywhere in the industry.

Because of this somewhat unnatural means by which people are introduced to IT as a career, we are left with some issues that are not exactly unique to IT but that are, at the very least, relatively extreme in it. The primary issue that we face, as an industry and especially within the SMB portion of the industry, is a lack of business context in our view of IT.

IT exists only within a business context; this is crucial for understanding all aspects of IT. Without a business to support, IT would not be IT at all but would just be “playing with computers.” Other departments that are directly tied to business support, such as finance, accounting, human resources and legal, have far more typical business involvement and less “inwardly focused interest,” so they tend not to lose focus on their role in supporting the business in everything that they do. But IT is often so far removed from the business itself, at least mentally, that it is easy to begin to think that IT exists for its own sake. It does not.

More than nearly any other department, IT is and must be an integral part of the business. IT has some of the deepest and broadest insight into the business and is invaluable as a partner to management in this respect. Everything that happens in IT must be considered within the context of, and in regard to the needs of, the business.

Of course there are roles within IT, as within any department, that can function almost completely without an understanding of the business being supported. Job roles that are highly structured and rely on procedure rather than decision making can often get away without even knowing what the business does, let alone considering its needs. But once any IT role moves into advisement or decision making, the business becomes the core focus. In reality, the business is the only focus. IT is an enabler of the business; if it is not enabling the business, what is it doing? Because of this we must remain ever cognizant of the business reasoning behind all decision making and planning.

This cannot be overstated: The primary role of IT is a business one, not a technical one.

IT needs to think about the business at every turn.  Every decision should be made with a keen sense of how it impacts the business in efficiency, cost effectiveness, etc. It is so easy, especially when working with other IT staff from other companies, to lose this perspective and begin to think that there are stock answers, that there are accepted “it should be done this way” approaches, that IT should dictate what is best for the business from an IT perspective.

These concepts become especially pertinent when we talk about areas of risk. It is common for IT to think of risk as something that must be overcome, but the business perspective is to balance risk against the cost of mitigation. If left to run on their own without oversight, most IT departments would see the business as so critical that any amount of money should be spent on a “better” IT infrastructure in order to make sure that downtime could never happen. But this is completely wrong. “Better” should never be equated with uptime; it should mean “what best serves the goals of the business.” Perhaps that is uptime, perhaps it is a lowering of capital expenses: it depends on the unique business scenario. Often what is best for the business is not what is perceived as being best for IT.

Concepts such as “the business cannot go down” or “cost is no object” have no place in a business, and therefore have no place in IT. Every business has a cost of uptime threshold past which it is more cost effective simply to be down. No IT project truly has cost as no object; in a business, cost is always an object.

What IT needs to do is learn to think differently. The needs of the business should be at the forefront of IT’s sense of what is good and what is applicable. The idea that there is a single “proper or best level of protection” for a system should never even occur to IT decision makers. Instead, IT should immediately think about value to the business, the cost of downtime and the cost of risk mitigation, and make decisions based on that value.

Thinking about “business first” or really “business only” can be a struggle for IT staff that come to IT from a technology perspective instead of from a business one, but it is a critical skill and will fundamentally change the approach and effectiveness of an IT department.

Businesses need to look for IT staff, for decision making and guidance roles, who have a firm understanding of and interest in business and who can consistently keep their IT work within that perspective.

Understanding Virtual Desktop Infrastructure

VDI (or Virtual Desktop Infrastructure) has been all the rage in IT circles for the past couple of years. Once the virtualization of servers became the norm, looking to desktops as the next frontier in virtualization was natural. Unlike servers, however, desktops are affected by several major factors that need to be addressed and considered before we simply jump on the VDI bandwagon. VDI is a great and wonderful technology, but like any technology has its place and needs to be considered carefully.

First we have to look at an important concept that affects VDI – the idea of shared computing. With servers we naturally assume that the servers and the services that they provide are not for one to one consumption but will be utilized by multiple users. This concept exists and has always existed in the world of desktops and is often referred to as terminal services. Terminal Servers are the server world’s answer to the need for centralized desktop resources and have been standard since literally before desktops even existed.

It is an interesting bit of computing history that brings us to how the Windows world interacts with the concept of terminal servers. Most operating systems, and nearly all that remain in use today, were designed from the ground up as multi-user systems. The idea that one user would sit in front of the computer as the “main” user while others were either second class citizens or did not exist at all was simply not part of the design; all users were considered equal. Windows, unlike the UNIX family for example, came from a world of single user systems, originating with DOS and DOS/Windows, which were not multi-user. When Windows NT was developed as a multi-user system, a great many software developers treated it as they always had, making software that would not function well, or often at all, in a multi-user mode.

This software ecosystem that is unique to Windows (it could effectively never exist on Linux, for example, because such software would simply be seen as broken given how that ecosystem and its deployments work) has created an interesting problem. Some software and some tasks are easily addressed by the use of terminal servers, exactly as they would be tackled on any UNIX OS, while many other applications cannot be addressed by a terminal server and require a dedicated operating system instance for every user.

It is this historical factor, and the significant difference in software ecosystems it produced, that created the fundamental need for VDI and explains why VDI arose as a unique phenomenon within the Windows world and remains, for all intents and purposes, exclusive to it. So it is very important to understand that VDI arose conceptually as a means of addressing a need that existed only because of a deficiency in third party applications, not because of an intrinsic quality of Windows itself in its current form, and not because VDI was a better approach to virtualizing or hosting end user desktop services. In fact, we could even look at VDI as an unfortunate kludge, needed only in situations where we want to virtualize or centralize desktop resources and some of the software that must run on those systems cannot run in a multi-user mode. VDI is a fallback mechanism for special situations, not a desired approach to virtualized or centralized end user services.

It is important to note that, due to the widespread use of and necessity for VDI, the investment in supporting technologies around it has led to the possibility that in many cases VDI might actually outperform terminal servers, even though architecturally this seems almost impossible. Basically this is happening because the incredible amount of research and development going into the hypervisor layer may be outpacing the equivalent components in the operating system itself, making for potentially better CPU and memory management and resource sharing. This is completely dependent on the unique situation, of course, as every OS, hypervisor, VDI toolset and workload is unique, so mileage will vary significantly.

Also of serious consideration, because of the aforementioned Windows-centric nature of the VDI concept, is licensing. If we were to look at VDI from the Linux perspective we would have little to no licensing concerns, and VDI would be up against traditional terminal services on technical merits alone, but this is effectively never the case. The single biggest factor in VDI decision making is Microsoft licensing.

VDI licensing is both expensive and complex. Companies considering the virtualization of Windows desktop resources have to carefully weigh the benefits against both the large cost of appropriate licensing and the potentially large overhead of license management. Moving into VDI will likely mean a lot of expensive IT time dedicated to license research, monitoring and training, which is an often overlooked aspect of licensing costs.

VDI is a somewhat difficult concept to speak about in generalities because it is a slightly amorphous topic. If we virtualize a desktop, does it not become a server? If we use an operating system intended for server use, does that change what is and is not VDI? Is VDI based around use cases, licensing or product categories?

The real answer is that to the industry VDI is technically one thing, while in practical terms to Microsoft, the only key licensing player in the space, it means something somewhat different. Technically, VDI is the virtualization of one to one “graphical end user” instances: a single virtual machine used by a single user, much as a traditional physical desktop or laptop would be used. To Microsoft, whose concerns are slightly different from those of the industry, the term refers to the virtualization of Windows “desktop class” operating systems; if you virtualize Windows “server class” operating systems, Microsoft does not consider you to be doing VDI. We have to understand both views of the concept to keep from becoming confused. In fact, using Windows Server OSes to get around the VDI licensing requirements of Windows desktops has become very standard and common. However, we have to remember the kludge nature of VDI: while this approach works around software that was never written to be multi-user, it does not address the very real possibility that software was written with the expectation of a desktop-branded operating system, and we are somewhat likely to find end user software that is either locked (intentionally or unintentionally) to desktop operating systems or is licensed only on those platforms.

The last major consideration around VDI decision making is that, unlike servers, which when virtualized are completely virtualized, a desktop cannot be treated the same way because there is always a physical component to it. The end user will always need a monitor to look at, a keyboard to type on, speakers to listen to, and so on. So when we are looking to move to VDI we must take care not to overlook the fact that we are not eliminating the need to purchase and maintain desktop hardware; we are simply moving where the operating system resides. We may redeploy older hardware for remote access, move to thin clients or the newly termed and mostly meaningless zero clients, or have “fat clients” that are otherwise in use pull double duty, acting as remote access clients while also providing their own desktop services.

Certainly virtualizing the desktop offers many great opportunities and much value if we are doing it for the right reasons and understand the hows, whys and whens of VDI. Sadly, as with so many technology trends, moving to VDI has become a knee-jerk reaction, taken without performing proper evaluations or developing a clear picture of how VDI will fit into our own environments. If we lack a clear reason for choosing VDI, it is very unlikely that we will deploy it in a positive manner.

Finally, it is very important that we consider the skill sets that will be required to properly move to VDI. From a purely technical standpoint, throwing a Windows 10 VM onto Hyper-V constitutes VDI, but from a practical perspective this is not how effective VDI will be designed. VDI not only requires the special licensing knowledge mentioned above but will typically involve rather specialized knowledge of modern VDI toolsets and products, shared storage as it applies to VDI, remote access protocols, thin clients or zero clients, and more. VDI deployments tend to be among the most technical and unique components of an infrastructure, leading to a great number of unknowns and challenges for any organization.

The End of the GUI Era

We should set the stage by looking at some historical context around GUIs and their role within the world of systems administration.

In the “olden days” we did not have graphical user interfaces on any computers at all, let alone on our servers. Long after GUIs began to become popular on end user equipment, servers still did not have them. In the 1980s and 1990s the computational overhead necessary to produce a GUI was significant in terms of the total computing capacity of a machine, and using what little there was to produce a GUI was rather impractical, if not often completely impossible. The world of systems administration grew up in this context, working from command lines because there was no other option available to us. It was not common for people to desire GUIs for systems administration, perhaps because the idea had not yet occurred to anyone.

In the mid-1990s Microsoft, along with some others, began to introduce the idea of GUI-driven systems administration for the entry level server market. At first the approach was not that popular, as it did not match how experienced administrators in the market were working. But slowly, as new Windows administrators, and to some degree Novell Netware administrators, began to “grow up” with access to GUI-based administration tools, an accepted place in the server market formed for these systems. In the mid to late 1990s UNIX and other non-Windows servers completely dominated the market. Even VMS was still a major player, and on the small business and commodity server side Novell Netware was the dominant player mid-decade and still a very serious contender late in the decade. Netware offered a GUI experience, but one that was very light and should probably be considered only “semi-GUI” in comparison to the rich GUI experience Windows NT offered by at least 1996, and to some degree earlier with the NT 3.x family, although Windows NT was only just finding its place in the world before NT 4’s release.

Even at the time, the GUI-driven administration market remained primarily a backwater. Microsoft and Windows still had no major place on the server side but were beginning to make inroads via the small business market, where their low cost and easy to use products made a lot of sense. It was not until the late 1990s panic and market expansion, brought on by the combination of the Y2K scare, the dotcom market bubble and excellent product development and marketing by Microsoft, that significant growth and a shift to a GUI-driven administration market occurred.

The massive expansion of the IT market in the late 1990s meant that there was not enough time or resources to train the new people entering IT. The learning curve for many systems, including Solaris and Netware, was very steep, and the industry needed a truly epic number of people to go from zero to “competent IT professional” faster than was possible with the existing platforms of the day. The market growth was explosive, and there was so much money to be made working in IT that there were few resources left for effectively training the newcomers the field needed; anyone qualified to handle educational duties could earn far more working in the industry than working in education. As the market grew, the value of mature, experienced professionals became extremely high, as they became more and more rare in the ever expanding field as a whole.

The market responded to this need in many ways, but one of the biggest was to fundamentally change how IT was approached. Instead of pushing IT professionals to overcome the traditional learning curves and develop the skills needed to effectively manage the systems then on the market, the market changed the tools being used in order to accommodate less experienced and less knowledgeable IT staff. Simpler, and often more expensive, tools with GUI interfaces began to flood the market, allowing those with less training and experience to be useful and productive almost immediately, even without ever having seen a product previously.

This change coincided with the natural advancement in the performance of computer hardware. It was during this era that, for the first time, the power of many systems was such that while the GUI still made a rather significant impact on performance, the lower cost of support staff and the speed at which systems could be deployed and managed generally offset the loss of computing capacity taken by the GUI. The GUI rapidly became a standard addition to systems that just a few years before would never have seen one.

To improve the capabilities of these new IT professionals, and to rush them into the marketplace, the industry also shifted heavily towards certifications, more or less an innovation at the time, which allowed new IT pros, often with no hands-on experience of any kind, to establish some degree of competence, and to do so commonly without needing the significant interaction or investment from existing IT professionals that university programs would require. Both the GUI-based administration market and the certification industry boomed, and the face of IT significantly changed.

The result was certainly a flood of new, untrained or lightly trained IT professionals entering the market at a record pace. In the short term this change worked for the industry. The field went from dramatically understaffed to relatively well staffed years faster than it otherwise could have. But it did not take long before the penalties for this rapid uptake of new people began to appear.

One of the biggest impacts on the industry was an industry-wide “baby boom,” with all of the growing pains that that entails. An entire generation of IT professionals grew up in the boot camps and rapid “certification training” programs of the late 1990s. The long term effect was that the rules of thumb and general approaches common in that era often became codified to the point of near religious belief, in a way that earlier, as well as later, approaches would not. Often, because education was done quickly and shallowly, many concepts had to be learned by rote without an understanding of the fundamentals behind them. As the “Class of 1998” grew into the senior IT professionals of their companies over time, they became the mentors of new generations, and that old rote learning has very visibly trickled down through similar approaches in the years since, even long after the knowledge has become outdated or impractical; in many cases it has been interpreted incorrectly and is wrong in predictable ways even for the era from which it sprang.

Part of the learning of that era was a general acceptance that GUIs were not just acceptable but practical and expected. The baby boom effect meant that there was little mentorship from the former era, and previously established practices and norms were often swept away; the industry did not so much reinvent itself as simply invent itself. Even the concept of Information Technology as a specific industry unto itself took its current form, and took hold in the public consciousness, during this changing of the guard. Instead of being a vestige of other departments or disciplines, IT came into its own; but it did so without the maturing and continuity of practices that would have come with more organic growth, leaving the industry in possibly a worse position than it might have been had it developed in a continuous fashion.

The lingering impact of the late 1990s IT boom will be felt for a very long time, as it will take many generations for the trends, beliefs and assumptions of that time period to finally be swept away. Slowly, new concepts and approaches are taking hold, often only when old technologies disappear and new ones are introduced, breaking the stranglehold of tradition. One of these is the notion of the GUI as the dominant method by which systems administration is accomplished.

As we pointed out before, the GUI at its inception was a point of differentiation between old systems and the new world of the late 1990s. But since that time GUI administration tools have become ubiquitous. Every significant platform has, and has long had, graphical administration options, so the GUI no longer sets any platform apart in a significant way. This means that there is no longer any vendor with a clear agenda driving it to push the concept of the GUI; the marketing value of the GUI is effectively gone. Likewise, not only did systems that previously lacked a strong GUI nearly all develop one (or more), but the GUI-based systems that lacked strong command line tools went back and developed those as well, along with new professional ecosystems around them. The tide has most certainly turned.

Furthermore, over the past nearly two decades the rhetoric of the non-GUI world has begun to take hold. System administrators working from a position of mastery of the command line, on any platform, generally outperform their counterparts, leading to more career opportunities, more challenging roles and higher incomes. Companies focused on command line administration find themselves with more skilled workers and a higher administration density which, in turn, lowers overall cost.

This alone was enough to make the position of the GUI begin to falter. But there was always the old argument that GUIs, even in the late 1990s, used only a small amount of system resources and added only a small amount of additional attack surface; even if they were not going to be used, why not have them installed “just in case”? As CPUs got faster, memory got larger, storage got cheaper and system design improved, the impact of the GUI became smaller and smaller, so this argument for having GUIs available got stronger. Especially strong was the argument that GUIs allowed junior staff to do tasks as well, making them more useful. But it was far too common for senior staff to retain the GUI as a crutch in these circumstances.

With the advent of virtualization in the commodity server space, this all began to change. The cost of a GUI suddenly became noticeable again. A system running twenty virtual machines carries twenty times the CPU, memory and storage overhead of a single GUI instance. The footprint of the GUI was noticeable again. As virtual machine densities began to climb, so did the relative impact of the GUI.
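To see how quickly that overhead adds up, here is a tiny Python sketch of the arithmetic. The per instance figures are purely assumed for illustration; actual GUI overhead varies widely by operating system and configuration.

```python
"""Illustrative math only: a small per instance GUI footprint, multiplied
across every VM on a host, becomes a very visible cost. The per-VM numbers
are assumptions, not measurements."""

gui_ram_gb = 1.5    # assumed extra memory a GUI stack consumes per VM
gui_disk_gb = 8.0   # assumed extra storage for GUI components per VM
vm_count = 20       # virtual machines on a single host

print(f"Extra RAM across the host:  {gui_ram_gb * vm_count:.0f} GB")
print(f"Extra disk across the host: {gui_disk_gb * vm_count:.0f} GB")
```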

Virtualization gave rise to cloud computing. Cloud computing increased virtual machine deployment densities and exposed other performance impacts of GUIs, mostly in terms of longer instance build times and more complex remote console access. Systems requiring a GUI began to noticeably lag behind their GUI-less counterparts in adoption and capabilities.

But the far bigger factor was an artifact of cloud computing’s standard billing methodologies. Because cloud computing typically exposes per-instance costs in a raw, fully visible way, IT departments had no means of fudging or overlooking the costs of GUI deployments, whose additional overhead could often double the cost of a single cloud instance. Accounting could very clearly see bills for GUI systems costing far more than their GUI-less counterparts. Even non-technical teams could see the cost of GUIs adding up, before even considering the cost of management.

This cost continues to increase as we move towards container technologies, where individual instances become smaller and smaller and the relative overhead of the GUI therefore becomes more and more significant.

But the real impact, possibly the biggest exposure of the issues around GUI driven systems, is the industry’s move towards DevOps system automation models. Today only a relatively small percentage of companies are actively moving to a fully cloud-enabled, elastically scalable DevOps model of system management, but the trend is there and the model leaves GUI administrators and their systems completely behind. In DevOps models, direct access to machines is no longer a standard mode of management, and systems have gone even further than being managed solely from the command line to being built completely in code. Systems administrators working in the DevOps world not only need to interact with their systems at a command line, they must do so programmatically.
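To give a sense of what “built completely in code” means in practice, here is a minimal sketch using the boto3 AWS SDK for Python: a server is defined and launched entirely from a script, with no console GUI involved. The AMI ID, region and instance type are placeholders, AWS credentials are assumed to already be configured in the environment, and in a real DevOps pipeline this would more likely be expressed through a declarative tool such as Terraform, CloudFormation, Puppet or Ansible.

```python
"""Minimal 'infrastructure as code' sketch: a server instance is created
programmatically rather than through a management GUI. All identifiers
below are placeholders for illustration."""
import boto3

# Assumes AWS credentials are already configured (environment or profile).
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical GUI-less server image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "web"}],
    }],
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```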

The market is rapidly moving towards fewer, more highly skilled systems administrators working with many, many more servers “per admin” than in any previous era. The idea that a single systems administrator can only manage a few dozen servers, a common belief in the GUI world, has long been challenged even in traditional “snowflake” command line systems administration with numbers easily climbing into the few hundred range. But the DevOps model or similar automation models take those numbers into the thousands of servers per administrator. The overhead of GUIs is becoming more and more obvious.

As new technologies like cloud, containers and DevOps automation models become pervasive so does the natural “sprawl” of workloads. This means that companies of all sizes are seeing an increase in the numbers of workloads that need to be managed. Companies that traditionally had just two or three servers today may have ten or twenty virtual instances! The number of companies that need only one or two virtual machines is dwindling.

This all hardly means that GUI administration is going to go away in the near, or even the distant, future. The need for “one off” systems administration will remain. But the ratio of administrators able to work in a GUI administration “one off” mode versus those that need to work through the command line and specifically through scripted or even fully automated systems (a la Puppet, Chef, Ansible) is already tipping incredibly rapidly towards non-GUI system administration and DevOps practices.

What does all of this mean for us in the trenches of the real world? It means that even roles, such as small business Windows administration, that traditionally have had little or no need to work at the command line need to reconsider their dependence on the local server GUI. Command line tools and processes are becoming increasingly powerful, well known and simply how we are expected to work. In the UNIX world the command line has always remained, and reliance on GUI tools would almost always be seen as a major handicap. That same impression is beginning to apply to the Windows world as well. Slowly, those who rely exclusively on GUI tools are being seen as second class citizens and increasingly relegated to more junior roles and smaller organizations.
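As a small taste of what scripted Windows administration can look like from outside the local GUI, here is a hedged sketch using the third party pywinrm library to run a PowerShell query against a remote server. The host name and credentials are placeholders, WinRM must already be enabled on the target, and this is only one of several ways to work this way (native PowerShell remoting being the most common).

```python
"""Sketch of GUI-less, scripted Windows administration via WinRM.
Requires the third party 'pywinrm' package; the host and credentials are
placeholders and WinRM must be enabled on the remote server."""
import winrm

session = winrm.Session("winsrv01.example.local",
                        auth=("administrator", "not-a-real-password"))

# Ask the remote server for automatic services that are not running.
result = session.run_ps(
    "Get-Service | Where-Object {$_.StartType -eq 'Automatic' "
    "-and $_.Status -ne 'Running'} | Select-Object Name, Status"
)
print(result.std_out.decode())
```

The same query could be run against dozens or hundreds of servers from a simple loop, which is exactly the kind of leverage a local GUI session cannot provide.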

The improvement in scripting and automation tools also means that the value of scale keeps growing, and the cost of administering small numbers of servers is becoming very high on a per workload basis. This is a heavy encouragement for smaller companies to look towards consolidating management through outside vendors who specialize in large scale systems management and can leverage scripting and automation techniques to bring their costs more in line with those of larger businesses. The ability to use outside vendors to establish scale, or an approximation of it, will be very important over time for smaller businesses that want to remain cost competitive in their IT needs while still getting the same style of computing advantages that larger businesses are beginning to experience today.

It should be noted that happening in tandem with this industry shift towards the command line and automation tools is a move to more modern, powerful and principally remote GUIs. This is a far less dramatic shift, but one that should not be overlooked. Tools like Microsoft’s RSAT and Server Administrator provide a GUI view that leverages command line and API interfaces under the hood. Likewise, Canonical’s Ubuntu world now has Landscape. These tools are less popular in the enterprise, but they are beginning to allow the larger SMB market to maintain a GUI dependency while managing a larger set of server instances. The advancement of these types of GUI tools may be the strongest force slowing the adoption of command line tools across the board.

Whether we are interested in the move from the command line, to GUIs, and back to the command line as an interesting artifact of the history of Information Technology as an industry, or we are looking at it as a means of understanding how systems administration is evolving as a career path or business approach, it is good for us to appreciate the factors that caused it to occur and why the ebb and flow of the industry is now taking us back out to the sea of the command line once again. By understanding these forces we can more practically assess where the future will take us, when the tide may again change, and how best to approach our own careers or to decide on both technology and human talent for our organizations.