Understanding Virtual Desktop Infrastructure

VDI (Virtual Desktop Infrastructure) has been all the rage in IT circles for the past couple of years. Once the virtualization of servers became the norm, looking to desktops as the next frontier in virtualization was natural. Unlike servers, however, desktops are affected by several major factors that need to be addressed before we simply jump on the VDI bandwagon. VDI is a wonderful technology, but like any technology it has its place and needs to be considered carefully.

First we have to look at an important concept that affects VDI – the idea of shared computing. With servers we naturally assume that the servers and the services they provide are not for one-to-one consumption but will be utilized by multiple users. This concept has always existed in the world of desktops as well and is often referred to as terminal services. Terminal servers are the server world’s answer to the need for centralized desktop resources and have been standard since before desktops even existed.

It is an interesting bit of computing history that brings us to how the Windows world interacts with the concept of terminal servers. Most operating systems, and nearly all that remain in use today, were designed from the ground up as multi-user systems. The idea that one user would sit in front of the computer as the “main” user while others would be second-class citizens, or not exist at all, did not really apply; all users were considered equal. Windows, unlike the UNIX family for example, came from a world of single-user systems originating with DOS and DOS/Windows, which were not multi-user. When Windows NT was developed as a multi-user system, a great many software developers treated it as they always had, producing software that would not function well, or often at all, in a multi-user mode.

This software ecosystem is unique to Windows (it could effectively never exist on Linux, for example, because such software would simply be seen as broken given how that ecosystem and its deployments work) and it has created an interesting problem: some software and some tasks are easily addressed with terminal servers, exactly as they would be on any UNIX OS, while many other applications cannot be served from a terminal server and require a dedicated operating system instance for every user.

It is this historical factor, and the significant difference in software ecosystems that it produced, that created the fundamental need for VDI; it explains why VDI arose as a unique phenomenon within the Windows world and remains, for all intents and purposes, exclusive to it. So it is very important to understand that VDI arose conceptually as a means of addressing a need that existed only because of a deficiency in third party applications, not because of an intrinsic quality of Windows itself in its current form, and not because VDI was a better approach to virtualizing or hosting end user desktop services. In fact, we could even look at VDI as an unfortunate kludge, needed only in situations where we want to virtualize or centralize desktop resources and where some of the software that must run on those systems cannot run in a multi-user mode. VDI is a fallback mechanism for special situations, not a desired approach to virtualized or centralized end user services.

It is important to note that the widespread use of, and necessity for, VDI has driven so much investment into its supporting technologies that in many cases VDI might actually outperform terminal servers, even though architecturally this seems almost impossible. Basically this happens because the incredible amount of research and development going into the hypervisor layer may be outpacing the equivalent components in the operating system itself, making for potentially better CPU and memory management and resource sharing. This is completely dependent on the situation, of course; every OS, hypervisor, VDI toolset and workload is unique, so mileage will vary significantly.

Also of serious consideration, because of the aforementioned Windows-centric nature of the VDI concept, is licensing. If we were to look at VDI from the Linux perspective we would have little to no licensing concern and VDI would be weighed against traditional terminal services on technical merits alone, but this is effectively never the case. The single biggest factor in VDI decision making is Microsoft licensing.

VDI licensing is both expensive and complex. Companies considering the virtualization of Windows desktop resources have to carefully weigh the benefits against both the large cost of appropriate licensing and the potentially large overhead of license management. Moving into VDI will likely mean a lot of expensive IT time dedicated to license research, monitoring and training, which is an often overlooked aspect of licensing costs.

VDI is a somewhat difficult concept to speak about in generalities because it is a slightly amorphous topic. If we virtualize a desktop, does it not become a server? If we use an operating system intended for server use, does that change what is and is not VDI? Is VDI based around use cases, licensing or product categories?

The real answer lies in the fact that to the industry VDI is technically one thing, while in practical terms to Microsoft, the only key licensing player in the space, it means something somewhat different. Technically, VDI is the virtualization of one-to-one “graphical end user” instances – a single virtual machine being used by a single user, much as a traditional physical desktop or laptop would be used. To Microsoft, whose concerns are slightly different from those of the industry, the term refers to the virtualization of Windows “desktop class” operating systems. If you virtualize Windows “server class” operating systems, Microsoft does not view you as doing VDI. We have to understand both views of the concept to keep from becoming confused. In fact, using Windows Server OSes to get around the VDI licensing needs of Windows desktops has become very standard and common. However, we have to remember the kludge nature of VDI: while this approach still gives each user a dedicated instance, and so still works around software that cannot run multi-user, it does not address the very real potential that software was written with the expectation of a desktop-branded operating system. We are somewhat likely to find end user software that is either locked (intentionally or unintentionally) to desktop operating systems or licensed only on those platforms.

The last major consideration in VDI decision making is that, unlike a server which when virtualized is completely virtualized, a desktop cannot be treated the same way because there is always a physical component to it. The end user will always need a monitor to look at, a keyboard to type on, speakers to listen to, and so on. So when we look to move to VDI we must take care not to overlook the fact that we are not eliminating the need to purchase and maintain desktop hardware; we are simply moving where the operating system will reside. We may redeploy older hardware for remote access, move to thin clients or the newly termed and mostly meaningless “zero clients,” or have existing “fat clients” pull double duty, acting as remote access clients while also providing their own desktop services.

Certainly virtualizing the desktop offers many great opportunities and much value if we do it for the right reasons and understand the hows, whys and whens of VDI. Sadly, like so many technology trends, moving to VDI has become a knee-jerk reaction, made without performing proper evaluations or developing a clear picture of how VDI will fit into our own environments. If we lack a clear reason for choosing VDI it is very unlikely that we will deploy it in a beneficial way.

Finally, it is very important that we consider the skill sets that will be required to move to VDI properly. From a purely technical standpoint, throwing a Windows 10 VM onto Hyper-V constitutes VDI, but from a practical perspective this is not how effective VDI will be designed. VDI not only requires the special licensing knowledge mentioned above but will typically involve rather unique knowledge of modern, very specialized VDI toolsets and products, shared storage as it applies to VDI, remote access protocols, thin or zero clients, and more. VDI deployments tend to be among the most technical and unique components of an infrastructure, leading to a great number of unknowns and challenges for any organization.

The End of the GUI Era

We should set the stage by looking at some historical context around GUIs and their role within the world of systems administration.

In the “olden days” we did not have graphical user interfaces on any computers at all, let alone on our servers. Long after GUIs became popular on end user equipment, servers still did not have them. In the 1980s and 1990s the computational overhead necessary to produce a GUI was significant relative to the total computing capacity of a machine, and spending what little capacity there was on a GUI was rather impractical, if not outright impossible. The world of systems administration grew up in this context, working from command lines because there was no other option available to us. It was not common for people to desire GUIs for systems administration, perhaps because the idea had not yet occurred to anyone.

In the mid-1990s Microsoft, along with some others, began to introduce the idea of GUI-driven systems administration for the entry level server market. At first the approach was not that popular, as it did not match how experienced administrators were working. But slowly, as new Windows administrators, and to some degree Novell NetWare administrators, began to “grow up” with access to GUI-based administration tools, an accepted place in the server market emerged for these systems. In the mid to late 1990s UNIX and other non-Windows servers completely dominated the market. Even VMS was still a major player, and on the small business and commodity server side Novell NetWare was the dominant player mid-decade and still a very serious contender late in the decade. NetWare offered a GUI experience, but one that was very light and should probably be considered only “semi-GUI” in comparison to the rich GUI experience Windows NT offered by at least 1996, and to some degree earlier with the NT 3.x family, although Windows NT was only just finding its place in the world before NT 4’s release.

Even then, the GUI-driven administration market remained primarily a backwater. Microsoft and Windows still had no major place on the server side but were beginning to make inroads via the small business market, where their low cost and easy to use products made a lot of sense. It was the late 1990s panic and market expansion, brought on by the combination of the Y2K scare, the dotcom bubble and excellent product development and marketing by Microsoft, that produced the significant growth of, and shift to, a GUI-driven administration market.

The massive expansion of the IT market in the late 1990s meant that there was neither the time nor the resources to train the new people entering IT. The learning curve for many systems, including Solaris and NetWare, was very steep, and the industry needed a truly epic number of people to go from zero to “competent IT professional” faster than was possible on the existing platforms of the day. Market growth was explosive and there was so much money to be made working in IT that there was little capacity left to train newcomers; anyone qualified to teach could earn far more working in the industry than working in education. As the market grew, the value of mature, experienced professionals became extremely high, as they were ever more rare in the constantly expanding field.

The market responded to this need in many ways, but one of the biggest was to fundamentally change how IT was approached. Instead of pushing IT professionals to overcome the traditional learning curves and develop the skills needed to effectively manage the systems of the day, the market changed which tools were used in order to accommodate less experienced and less knowledgeable IT staff. Simpler, and often more expensive, tools with GUI interfaces began to flood the market, allowing those with less training and experience to be useful and productive almost immediately, even without ever having seen a product previously.

This change coincided with the natural advancement of computer hardware performance. During this era, for the first time, many systems were powerful enough that while the GUI still had a rather significant impact on performance, the lower cost of support staff and the speed at which systems could be deployed and managed generally offset the computing capacity consumed by the GUI. The GUI rapidly became a standard addition to systems that just a few years before would never have seen one.

To improve the capabilities of these new IT professionals and to rush them into the marketplace, the industry also shifted heavily towards certifications, more or less an innovation at the time, which allowed new IT pros, often with no hands-on experience of any kind, to establish some degree of competence, and to do so commonly without the significant interaction or investment from existing IT professionals that university programs would require. Both the GUI-based administration market and the certification industry boomed, and the face of IT changed significantly.

The result was certainly a flood of new, untrained or lightly trained IT professionals entering the market at a record pace. In the short term this change worked for the industry. The field went from dramatically understaffed to relatively well staffed years faster than it otherwise could have. But it did not take long before the penalties of this rapid intake of new people began to appear.

One of the biggest impacts on the industry was an industry-wide “baby boom,” with all of the growing pains that entails. An entire generation of IT professionals grew up in the boot camps and rapid “certification training” programs of the late 1990s. The long term effect was that the rules of thumb and general approaches common in that era often became codified to the point of near religious belief in a way that earlier, and later, approaches did not. Because education was done quickly and shallowly, many concepts had to be learned by rote, without an understanding of the fundamentals behind them. As the “Class of 1998” grew into the senior IT professionals of their companies, they became the mentors of new generations, and that old rote learning has very visibly trickled down through similar approaches in the years since, even long after the knowledge has become outdated or impractical; in many cases it was interpreted incorrectly and is wrong in predictable ways even for the era from which it sprang.

Part of the learning of that era was a general acceptance that GUIs were not just acceptable but practical and expected. The baby boom effect meant that there was little mentorship from the former era, and previously established practices and norms were often swept away; the industry did not so much reinvent itself as simply invent itself. Even the concept of Information Technology as a specific industry unto itself took its current form, and took hold in the public consciousness, during this changing of the guard. Instead of being a vestige of other departments or disciplines, IT came into its own; but it did so without the maturing and continuity of practices that more organic growth would have provided, leaving the industry in possibly a worse position than if it had developed in a continuous fashion.

The lingering impact of the late 1990s IT boom will be felt for a very long time, as it will take many generations for the trends, beliefs and assumptions of that period to finally be swept away. Slowly, new concepts and approaches are taking hold, often only when old technologies disappear and new ones are introduced, breaking the stranglehold of tradition. One tradition now breaking down is the notion of the GUI as the dominant method by which systems administration is accomplished.

As we pointed out before, the GUI at its inception was a point of differentiation between old systems and the new world of the late 1990s. Since that time, however, GUI administration tools have become ubiquitous. Every significant platform has long had graphical administration options, so the GUI no longer sets any platform apart in a meaningful way. This means there is no longer any vendor with a clear agenda driving it to push the concept of the GUI; the marketing value of the GUI is effectively gone. Likewise, not only did nearly every system that previously lacked a strong GUI develop one (or more), but the GUI-based systems that lacked strong command line tools went back and developed those as well, along with new professional ecosystems around them. The tide has most certainly turned.

Furthermore, over the past nearly two decades the rhetoric of the non-GUI world has begun to take hold. Systems administrators working from a position of mastery of the command line, on any platform, generally outperform their counterparts, leading to more career opportunities, more challenging roles and higher incomes. Companies focused on command line administration find themselves with more skilled workers and higher administration density which, in turn, lowers overall cost.

This alone was enough to make the position of the GUI begin to falter. But there was always the old argument that GUIs, even in the late 1990s, used only a small amount of system resources and added only a small amount of additional attack surface; even if they were not going to be used, why not have them installed “just in case”? As CPUs got faster, memory got larger, storage got cheaper and system design improved, the impact of the GUI shrank and this argument for keeping GUIs available got stronger. Especially strong was the argument that GUIs allowed junior staff to perform tasks too, making them more useful. But it was far too common for senior staff to retain the GUI as a crutch in these circumstances.

With the advent of virtualization in the commodity server space, this all began to change. The cost of a GUI suddenly became noticeable again. A host running twenty virtual machines would pay the CPU, memory and storage cost of a GUI twenty times over, once per instance. The footprint of the GUI was noticeable again. As virtual machine densities began to climb, so did the relative impact of the GUI.
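The point is simple arithmetic: whatever a GUI costs on one machine, a virtualization host pays that cost once per guest. The minimal sketch below shows the scaling; the per-instance figures are purely illustrative assumptions for the example, not measurements of any particular operating system.

```python
# Back-of-envelope sketch of GUI overhead at increasing VM density.
# The per-instance figures are illustrative assumptions only.
GUI_RAM_GB = 0.5    # assumed extra RAM a GUI stack consumes per instance
GUI_DISK_GB = 4.0   # assumed extra disk a GUI install consumes per instance

def gui_overhead(vm_count: int) -> tuple[float, float]:
    """Total (RAM GB, disk GB) spent on GUIs across all VMs on one host."""
    return vm_count * GUI_RAM_GB, vm_count * GUI_DISK_GB

for vms in (1, 20):
    ram, disk = gui_overhead(vms)
    print(f"{vms:>2} VM(s): ~{ram:.1f} GB RAM and ~{disk:.0f} GB disk consumed by GUIs")
```

Whatever the real numbers are for a given OS, they multiply linearly with VM density, which is exactly why the overhead became visible again.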

Virtualization gave rise to cloud computing. Cloud computing increased virtual machine deployment densities and exposed other performance impacts of GUIs, mostly in terms of longer instance build times and more complex remote console access. Systems requiring a GUI began to noticeably lag behind their GUI-less counterparts in adoption and capabilities.

But the far bigger factor was an artifact of cloud computing’s standard billing methodologies. Because cloud computing typically exposes per-instance costs in a raw, fully visible way, IT departments had no means of fudging or overlooking the costs of GUI deployments, whose additional overhead could even double the cost of a single cloud instance. Accounting would very clearly see bills for GUI systems costing far more than their GUI-less counterparts. Even non-technical teams could see the cost of GUIs adding up, before even considering the cost of management.

This cost continues to grow as we move towards container technologies, where individual instances become smaller and smaller and the relative overhead of a GUI therefore becomes even more significant.

But the real impact, possibly the biggest exposure of the issues around GUI-driven systems, is the industry’s move towards DevOps system automation models. Today only a relatively small percentage of companies are actively moving to a fully cloud-enabled, elastically scalable DevOps model of system management, but the trend is there and the model leaves GUI administrators and their systems completely behind. With DevOps models, direct access to machines is no longer a standard mode of management; systems have gone beyond being managed solely from the command line to being built completely in code, meaning that systems administrators working in the DevOps world must not only interact with their systems at a command line but do so programmatically.

The market is rapidly moving towards fewer, more highly skilled systems administrators working with many, many more servers “per admin” than in any previous era. The idea that a single systems administrator can only manage a few dozen servers, a common belief in the GUI world, has long been challenged even in traditional “snowflake” command line systems administration with numbers easily climbing into the few hundred range. But the DevOps model or similar automation models take those numbers into the thousands of servers per administrator. The overhead of GUIs is becoming more and more obvious.

As new technologies like cloud, containers and DevOps automation models become pervasive so does the natural “sprawl” of workloads. This means that companies of all sizes are seeing an increase in the numbers of workloads that need to be managed. Companies that traditionally had just two or three servers today may have ten or twenty virtual instances! The number of companies that need only one or two virtual machines is dwindling.

This all hardly means that GUI administration is going to go away in the near, or even the distant, future. The need for “one off” systems administration will remain. But the ratio of administrators able to work in a GUI administration “one off” mode versus those that need to work through the command line and specifically through scripted or even fully automated systems (a la Puppet, Chef, Ansible) is already tipping incredibly rapidly towards non-GUI system administration and DevOps practices.
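To make the contrast concrete, here is a deliberately tiny sketch of what “administering programmatically” looks like in spirit: a script that pushes the same idempotent change to an entire fleet over SSH. It is only an illustration of the idea; the host names and command are made up for the example, and a real environment would use a purpose-built tool such as Ansible, Puppet or Chef, with proper inventory, error handling and reporting, rather than a hand-rolled loop.

```python
#!/usr/bin/env python3
"""Toy illustration of fleet-wide, code-driven administration.

This is not how Puppet, Chef or Ansible work internally; it only shows the
general idea of one administrator pushing the same change to many servers
from a script instead of logging into each console by hand.
"""
import subprocess

# Hypothetical inventory; in practice this would come from a CMDB or cloud API.
HOSTS = [f"web{n:03d}.example.com" for n in range(1, 201)]

# The change we want everywhere; idempotent, so re-runs are harmless.
COMMAND = "sudo systemctl enable --now chronyd"

def run_on_host(host: str) -> bool:
    """Run COMMAND on one host over SSH; return True on success."""
    try:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", host, COMMAND],
            capture_output=True, text=True, timeout=60,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

if __name__ == "__main__":
    failures = [h for h in HOSTS if not run_on_host(h)]
    print(f"{len(HOSTS) - len(failures)} hosts updated, {len(failures)} failures")
```

The same loop works for two hundred hosts or two thousand, which is precisely the per-administrator scaling the GUI model cannot match.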

What does all of this mean for those of us in the trenches of the real world? It means that even roles that traditionally had little or no need to work at the command line, such as small business Windows administration, need to reconsider their dependence on the local server GUI. Command line tools and processes are becoming increasingly powerful and well known, and they are how we are expected to work. In the UNIX world the command line never went away, and a reliance on GUI tools would almost always be seen as a major handicap. The same impression is beginning to apply to the Windows world as well. Slowly, those who rely on GUI tools exclusively are being seen as second class citizens and increasingly relegated to more junior roles and smaller organizations.

The improvement in scripting and automation tools also increases the value of scale, so the cost of administering a small number of servers is becoming very high on a per-workload basis. This is strong encouragement for smaller companies to look towards management consolidation through outside vendors who specialize in large scale systems management and who leverage scripting and automation techniques to bring their costs more in line with those of larger businesses. The ability to use outside vendors to establish scale, or an approximation of it, will be very important over time if smaller businesses are to remain cost competitive in their IT needs while still getting the same style of computing advantages that larger businesses are beginning to experience today.

It should be noted that, happening in tandem with this industry shift towards the command line and automation tools, is a move to more modern, powerful and principally remote GUIs. This is a far less dramatic shift but one that should not be overlooked. Tools like Microsoft’s RSAT and Server Administrator provide a GUI view that leverages command line and API interfaces under the hood; likewise, Canonical’s Ubuntu world now has Landscape. These tools are less popular in the enterprise but are beginning to allow the larger SMB market to maintain a GUI dependency while managing a larger set of server instances. The advancement of these types of GUI tools may be the strongest force slowing the adoption of command line tools across the board.

Whether we view the move from the command line, to GUIs, and back to the command line as an interesting artifact of the history of Information Technology as an industry, or as a means of understanding how systems administration is evolving as a career path or business approach, it is good to appreciate the factors that caused it and why the ebb and flow of the industry is now taking us back out to the sea of the command line once again. By understanding these forces we can more practically assess where the future will take us, when the tide may change again, and how best to approach our own careers or choose both technology and human talent for our organizations.

Understanding the Role of the Dell VRTX

Dell’s VRTX is one of those devices that is just sexy, as IT hardware goes. It strikes a chord and drives IT professionals nearly wild. It looks cool, it has an incredible amount of power, it can be rack mounted or placed under a desk, it is quiet – so quiet that it can be run right in the middle of an open office space. It’s just really cool, and nearly every IT professional wants one – even if they have no idea why.

The problem with the VRTX is that it is generally misunderstood, and the misunderstandings around the device and the architecture used within it have led to a nearly continuous stream of proposals to use it where it is least suited. The device itself truly is awesome and has excellent use cases, but it is very important to understand what those are and what they are not, as this is a very specialized piece of hardware.

First, we need to determine what the VRTX “is.” The Dell VRTX is primarily a blade enclosure, more or less like any blade system. But unlike traditional blade enclosures, which typically hold six to ten blades each, the VRTX holds only four. So it is a “baby” blade enclosure. Because it is a true blade system, the Dell VRTX carries the normal caveats of any blade enclosure. However, due to its small size, the probability that it can be used, and retired, effectively makes it quite a bit more reasonable to consider than traditional, larger blade enclosures. An understanding of its blade nature is important in evaluating it for your organization’s needs.

Along with the blade component, the VRTX also has a DAS (Direct Attached Storage) system attached to the blades via SAS. This storage array offers either twelve large form factor (3.5”) or twenty-five small form factor (2.5”) drive bays, attached by way of either one or two PERC8 hardware RAID controllers. This included, large scale, shared external storage array inside the VRTX blade enclosure is what makes the unit truly unique.

So all four blades share the single DAS unit for storage. The four blades constitute 2U of the VRTX enclosure and the DAS unit another 2U for a total enclosure size of 4U.

Of course, as with any blade system, there is no requirement that you fully populate the VRTX initially, or ever. The system can be used with any number of blades from one to four, as needed. But for a blade enclosure to be cost viable, especially a small one such as this, it must be completely populated, or nearly so.

Architecturally, what the VRTX represents is a highly compact, single-chassis Inverted Pyramid of Doom (the “traditional” 3-2-1 architectural design) built following, more or less, the best approaches for that type of system. The biggest advantages here are that a solid DAS is mandated and cannot be swapped out, and that all connections between the DAS and the compute nodes are hard wired internally, giving the highest potential reliability for a shared external storage system with the least opportunity for human error. By using DAS instead of SAN, this design removes the “2” layer of the 3-2-1, resulting in a far better inverted pyramid structure. What we are left with is a 4-1 inverted pyramid design.

The overall profile of the VRTX is one of massive compute capability, far outstripping the computational needs of a normal SMB, all in a single chassis. The smallest blade option is a dual processor module and the biggest is quad processor, meaning that when fully populated we have a minimum of eight and a maximum of sixteen Intel Xeon processors over four nodes. This is truly a mammoth computational system in a small package. But it is critical to understand that all of this horsepower shares a single storage array, is not highly available, and cannot be made so. This is a system designed for processing power, not as a reliable infrastructure component.

It should also be noted that Dell experienced reliability issues with the redundant PERC8 hardware RAID controller setup and had to pull it from the market for some time. As with nearly all storage systems in this category, which includes many DAS and SAN devices, redundant controllers are commonly the cause of storage outages rather than the preventers of such. Redundancy of RAID controllers is rarely a valuable addition and should never be looked on as a panacea to storage reliability concerns.

Given the fact that the VRTX is compute heavy and reliability weak, what are its designated use cases? Where does it make the most sense to consider deploying this unit?

There are three extremely common deployment scenarios today where large compute and shared “fragile” storage often fit. Of course there may be many special cases and those should be evaluated individually based on the power, cost and reliability profiles of the VRTX relative to other options. But by and large the big three use cases where we would want to see the VRTX deployed would be:

Enterprise Remote Office and Branch Office (ROBO): This use case is based around the VRTX being a single, easily deployable device with nothing to do but “plug it in,” delivering a “reliable enough” but very powerful platform for remote offices. Not every remote or branch office needs the kind of horsepower that a VRTX can provide, and some require high availability, which it does not have. But large ROBOs are often ideally suited to this architectural profile due to the ease of remote management and the common ability to fail over to a central office or datacenter in the event of a major disaster, whether to IT itself (such as a total failure of the VRTX) or to the ROBO itself (fire, flood, etc.)

A VRTX in this scenario can easily be the sole IT device, outside of networking equipment, powering an entire ROBO of hundreds or potentially even thousands of users. And the ability to do nearly all maintenance in a non-disruptive way, which if properly designed is trivial to provide with a VRTX, can be quite significant to a ROBO.

This use case applies to the “enterprise” ROBO rather than SMB ROBOs simply because the total scale of the VRTX is larger than the typical needs of an SMB as a whole, let alone the needs of just one of its remote offices. The VRTX is simply too “big” for the typical needs of an SMB; it is not a product focused on SMB needs.

Virtual Desktop Infrastructure (VDI): VDI generally requires a large amount of compute power, non-disruptive updates and shared storage, which is perfect for the VRTX. Of course this only makes sense in shops that need at least three, if not four, nodes of compute power to leverage the blade chassis nature of the VRTX. But for companies looking for eight to sixteen CPUs’ worth of VDI power the VRTX can be a slam dunk. Possibly no use case is more appropriate for the VRTX than as a single, modular VDI system.

Big Data: Not many SMBs look to do big data processing today (Hadoop, Apache Spark, etc.), but a VRTX can be an ideal platform for heavy processing in a small business that does not need to scale its data processing beyond this point. For larger enterprises needing a much larger scale of processing the VRTX would not be well suited; what makes it exceptionally valuable is matching its size to the organization’s need. Of course other kinds of computationally heavy workloads, such as Monte Carlo simulations, would also run well on this platform.

Now that we know where the VRTX is well suited, where does it not fit well?

The VRTX is very poorly suited to general computing use, in both the SMB and the enterprise sectors. In the enterprise the VRTX represents a fully contained, but non-scaling, stack which would be unwieldy and expensive in a large infrastructure.

In the SMB the VRTX is dramatic overkill on the computational side while falling short, generally in reliability, on the storage side. Most SMBs, when scaling past a single compute node, are seeking both flexible scalability and higher than typical reliability. Given the incredible capacity of a single node today, it is often the desire for high availability alone that drives an SMB past a single compute node, so moving to an inverted pyramid architecture would be counter-productive to the needs of the typical SMB. The VRTX is simply too big, too rigid and lacking the reliability profile that SMBs desire. The SMB is really the last market where I would expect the VRTX to be deployed, as the general computing needs that drive SMB purchases are about the furthest thing from an appropriate use case for this device.

The VRTX is an amazing piece of equipment and well designed for several niche use cases, but it is not designed to replace standard servers, such as the Dell PowerEdge R730, in the typical scenarios for which those servers are the ideal equipment. General use equipment exists as the industry standard and best seller for a reason; niche equipment also exists for a reason. Be sure to understand why the equipment you are considering makes sense for your environment: “new and interesting” is not enough to justify moving to special case gear.

Choosing a University Degree Program for IT

In my last article I looked at the overarching concerns and approaches to a university program and how it applies to us in IT. Now we will look at individual programs and how to approach the selection of a major and focus area within the university system.

When it comes to actual degree programs we face a world of complexity. Universities and colleges use a wide variety of names for their programs of study and often attempt to use one program to teach another, so a program name will often not match the actual field of study. This can be very bad, as you do not want to be in the position of needing to explain the discrepancy to potential or existing employers. One example was a well known northeastern school that lacked the ability to offer an IT program, so it relabeled its existing library science program as IT and passed it off as such for many years.

The first thing to consider is whether we want a program focused on our field or one outside of it. Given what we learned from the last article – that universities excel at liberal and traditional subjects and do poorly at technical ones, and that our goal is to be broadly educated rather than focused on specific skills – I generally prefer to see students or job candidates who have been through non-technical course loads rather than technical ones.

There are any number of good non-technical programs from which to choose. Great examples include communications, business, accounting and psychology. It is good, of course, if a program includes some technical concepts such as project management and systems analysis, but these can simply be addressed through electives. It is also best if a program includes studies in math, especially statistics and risk analysis, along with general business classes, basic accounting and management. Students, we hope, will leave school with a firm foundation in understanding business context, people and communications, because these are the soft skills most critical to an IT career, and even more so to an SMB IT career where there is far less departmental isolation between technical positions and the operational side of the business.

For those who do not want to take the most liberal of paths described above, universities often offer a large range of degrees within or near the IT discipline itself. This plethora of IT or IT-like options can lead to confusion and makes the selection rather risky, as a highly technical degree in the wrong area of study would be the worst possible option – teaching neither IT nor the broad skill set that IT practitioners desperately need. Worse, going through the wrong field of study will often wildly mislead students as to what to expect when they enter the IT field, and it may look extremely bad on a resume: it can appear (rightfully so, in many cases) that the student did not take the time to understand their chosen field, did not know which degrees applied to it and failed to realize this through years of university classes – or did realize it and did not bother to switch to an appropriate program! This is what we most want to avoid: actively bad degree programs.

To make this as challenging as possible, IT degrees often come with a variety of names. And IT degrees may be included under multiple schools or colleges within a university. Some universities have IT degrees inside of an IT school, others may have them within a more general science program, a math program or often within engineering. Some even have IT degrees under a business school. It is not unheard of for IT degrees to exist in multiple places within the same university with different foci depending on which college is administering the program.

We must also address the big question of “is software engineering and programming a part of IT?” In universities, the answer is generally yes even though in the professional world the answer is a resounding “no” – the two are clearly different fields of study and different disciplines. Software engineering is dedicated to the design and building of products. IT is dedicated to the building and support of the infrastructure of businesses. There is some overlap as any two fields might have, but they are very clearly different career fields that deal with extremely different day to day duties and tasks. It is quite common to find software engineering, developer and programmer courses and degree programs lumped into the same schools as IT or even put under an IT umbrella. This is not necessarily bad but can be quite confusing. We must be clear, however, that software engineering is not IT and any degree focused on programming should be avoided for someone with an interest in heading into the world of IT. Any respectable IT program is going to teach programming as a core foundation to the field, but the program will never be focused on it. If it is, this is a mislabeled program and should be avoided.

Proper IT programs should have names such as Information Technology, Computer Information Systems or Management Information Systems. IT and CIS programs are often interchangeable. MIS programs tend to be a subset of IT more focused on certain management-supporting aspects of IT.

Programs that are most insidious and dangerous to IT hopefuls are ones that are most closely named but least closely associated with the IT field: computer engineering and computer science. These two should never, ever cross paths with those looking for careers in IT.

Computer engineering is older than IT and is a subset of electrical engineering. This is a traditional engineering field that focuses on the design of computers and computer components (like processors, chips, boards, peripherals) themselves and has effectively no crossover with IT or any IT-related discipline in any way. Computer engineering and IT should almost never even appear within the same school or college within a university.

If software engineering (which itself is not an IT discipline but is at least closely related) is the programming world’s analogue to traditional product development engineering, then computer science is the programming world’s analogue to physics or mathematics. Computer science is truly a “science and math” field, developing the theories and foundations that are then used by the software engineering discipline to build the products often used and managed by the IT discipline. Computer Science (CS) is probably the field that IT hopefuls enter by mistake most often, and if it is a true CS program it is completely inappropriate and a waste of their time. This is the program to look out for the most. Avoid CS completely, and avoid any university attempting to pass IT programs off as CS; the two never overlap.

Do not take the selection of a university major lightly. My recommendation is to keep your selection as liberal as possible, use electives to introduce IT elements like basic programming and networking into your curriculum, fill your time with mind-broadening classes and learn about business, finance, accounting, communications, writing, speaking and statistics. Attempt to find internships or opportunities in the university to work with IT departments. Actively work to leverage your opportunities at university to make yourself as prepared as possible to focus on the specific skills of IT externally to your university training.
