Category Archives: IT Management

Explaining the Lack of Large Scale Studies in IT

IT practitioners ask for these every day and yet none exist – large scale risk and performance studies for IT hardware and software.  This covers a wide array of possibilities, but common examples are failure rates between different server models, hard drives, operating systems, RAID array types, desktops, laptops, you name it.  And yet, regardless of the high demand for such data, there is none available.  How can this be?

Not all cases are the same, of course, but by and large there are three significant factors that keep this type of data from entering the field: the high cost of conducting a study, the long time scale necessary for a study and a lack of incentive to produce and/or share the data with other companies.

Cost is by far the largest factor.  If the cost of large scale studies could be overcome, solutions could be found for all of the other factors.  But sadly it is the nature of a large scale study to be costly.  As an example we can look at server reliability rates.

In order to determine failure rates on a server we need a large number of servers from which to collect data.  This may seem like an extreme example, but server failure rates are one of the most commonly requested large scale study figures, so the example is an important one.  We would need perhaps a few hundred servers for a very small study, but to get statistically significant data we would likely need thousands.  If we assume that a single server costs five thousand dollars, which would be a relatively entry level server, we are looking at easily twenty five million dollars of equipment!  And that is just enough to do a somewhat small scale test (just five thousand servers) of a rather low cost device.  If we were to talk about enterprise servers we would easily jump to thirty or even fifty thousand dollars per server, taking the cost to a quarter of a billion dollars.

Now that cost, of course, is for testing a single configuration of a single model server.  Presumably for a study to be meaningful we would need many different models of servers.  Perhaps several from each vendor to compare different lines and features.  Perhaps many different vendors.  It is easy to see how quickly the cost of a study becomes impossibly large.

This is just the beginning of the cost, however.  A good study is going to require carefully controlled environments on par with the best datacenters to isolate environmental issues as much as possible.  This means highly reliable power, cooling, airflow, humidity control, vibration and dust control.  Facilities this good are so expensive that many companies do not pay for them, even for valuable production workloads.  In a large study this cost could easily exceed the cost of the equipment itself over the course of the study.

Then, of course, we must address the needs for special sensors and testing.  What exactly constitutes a failure?  Even in production systems there is often dispute on this.  Is a hard drive failing in an array a failure, even if the array does not fail?  Is predictive failure a failure?  If dealing with drive failure in a study, how do you factor in human components such as drive replacement, which may not be done in a uniform way?  There are ways to handle this, but they add complication and skew the studies away from real world data toward contrived study data.  Establishing study guidelines that are applicable and useful to end users is much harder than it seems.

And then there is the biggest cost: manual labor.  Maintaining an environment for a large study takes human capital that may equal the cost of the rest of the study.  It takes a large number of people to maintain a study environment, run the study, monitor it and collect the data.  All in all, the costs are, generally, simply prohibitive.

Of course we could greatly scale back the test, running only a handful of servers and only two or three models, but the value of the test rapidly drops and we risk ending up with results that no one can use while still having spent a large sum of money.

The second insurmountable problem is time.  Most things need to be tested for failure rates over time, and as equipment in IT is generally designed to work reliably for decades, collecting data on failure rates requires many years.  Mean Time to Failure numbers are only so valuable; Mean Time Between Failures, along with failure types, modes and the statistics around those failures, is what matters for a study to be useful.  What this means is that for a study to be truly useful it must run for a very long time, creating greater and greater cost.

But that is not the biggest problem.  The far larger issue is that by the time a study has run long enough to generate useful failure numbers, even if those numbers were coming out “live” as they happened, it would already be too late.  The equipment in question would already be aging and nearing time for replacement in the production marketplace by the time the study was producing truly useful early results.  Often production equipment is only purchased for a three to five year total lifespan.  Getting results even one year into this span would have little value.  And new products may replace those in the study even more rapidly than the products age naturally, making the study valuable only as history, with no use in production decision making – the results would be too old to be useful by the time that they were available.

The final major factor is a lack of incentive to provide existing data to those who need it.  A few sources of data do exist, but nearly all are incomplete and exist so that large vendors can measure their own equipment quality, failure rates and the like.  These are rarely collected in controlled environments and often involve data gathered from the field.  In many cases this data may even be private to customers and not legally able to be shared regardless.

But vendors who collect data do not collect it in an even, monitored way, so sharing it could be very detrimental to them because there is no assurance that equal data from their competitors would exist.  Uncontrolled statistics like that would offer no true benefit to the market, nor to the vendors who hold them, so vendors are heavily incentivized to keep such data under tight wraps.

The rare exceptions are hardware studies from vendors such as Google and BackBlaze, who run large numbers of consumer class hard drives in relatively controlled environments and collect failure rates for their own purposes.  They face little or no risk from competitors leveraging that data and they gain public relations value from sharing it, so they will occasionally release a study of hardware reliability on a limited scale.  These studies are hungrily devoured by the industry even though they generally contain relatively little value: their data is old, gathered under unknown conditions and thresholds, and often not statistically meaningful for product comparison, at best offering general industry wide trends that are somewhat useful for predicting future reliability paths.

Most other companies large enough to have internal reliability statistics have them only on a narrow range of equipment and consider that information to be proprietary, a potential risk if divulged (it would give out important details of architectural implementations) and a competitive advantage.  For these reasons the data is not shared.

I have actually been fortunate enough to have been involved in and run a large scale storage reliability test, conducted somewhat informally but very valuably, on over ten thousand enterprise servers over eight years, resulting in eighty thousand server years of study, a rare opportunity.  What that study primarily showed is that on a set even that large we were still unable to observe a single failure!  The lack of failures was, itself, very valuable.  But we were unable to produce any standard statistic like Mean Time to Failure.  To produce the kind of data that people expect, we know that we would have needed hundreds of thousands of server years, at a minimum, to get any kind of statistical significance, and we cannot reliably state that even that would have been enough.  Perhaps millions of server years would have been necessary.  There is no way to truly know.

Where this leaves us is that large scale studies in IT simply do not exist and likely never will.  Where they do appear they will be isolated and almost certainly crippled by the necessities of reality.  There is no means of monetizing studies on the scale necessary to be useful, mostly because failure rates of enterprise gear are so low while the equipment is so expensive, so third party firms can never cover the cost of providing this research.  As an industry we must accept that this type of data does not exist and actively pursue alternatives.  It is surprising that so many people in the field expect this type of data to be available when it never has been historically.

Our only real options, considering this vacuum, are to collect what anecdotal evidence exists (a very dangerous thing to do, requiring careful consideration of context) and to apply logic when assessing reliability approaches and techniques.  This is a broad situation where observation necessarily fails us and only logic and intuition can fill the resulting gap in knowledge.

It’s a Field, Not a Road

Over the years I have become aware of a tendency in the Information Technology arena toward strong expectations of exactly how much someone should know about certain technologies based on their job title and length of time working in IT.  Of course, someone’s current job title and experience level should give you some, if only a little, insight into what they are doing on the job today, but it should rarely give you much insight into what they have done in the past or how they got to where they are today.

There are some abundantly common “paths” through IT, especially in the small and medium business markets, which help to stereotype the advancement of an IT professional over time.  The most common path goes something like this: high school, four year college degree, one or two basic certifications from CompTIA, entry level helpdesk job, better helpdesk job, deskside support job, basic Microsoft certification, system administrator or IT manager position.  This path is common enough that many people who have taken it simply assume that everyone else in the IT world has done so as well, and this assumption creates a lot of problems in many different areas.

First of all, it must be stated that there is no standard path in IT, not even remotely.  Too often IT professionals, applying their own experiences to their view of other people, see IT as a road when it truly is a field (pun only partially intended.)  IT has no defined entry point into, nor exit point from, the industry.  IT is a massive field made up of many different sub-disciplines with little, if any, linear progression from one to another.  There are far more lateral moves in IT than there are ladders to climb.

Besides the completely untrue assumption that specific education and certification requirements exist in order to enter IT, the widely held beliefs that helpdesk positions are the only entry level IT positions and that they are only stepping stone jobs are equally unfounded.  Many, likely most, IT professionals do not enter the field through helpdesk, call centers or even deskside support, and probably not through any type of Windows-centric support at all.  While end user focused, helpdesk remains only a small percentage of all IT careers and one through which only a portion of IT professionals will pass.  Windows-centric support is one of the most important foci within IT and clearly the most visible to end users and those outside of IT; this high level of visibility can be misleading, however.  It is equally true that helpdesk, call center, deskside support and the like are not stepping stone jobs exclusively; they are career options in their own right.  It is unfortunate that such a high percentage of IT professionals view these positions as inappropriate career goals, because it is widely recognized that a lack of skilled and dedicated people in those specific positions is often what causes the most friction between end users and IT departments.

I have found on several occasions hiring managers who discounted anyone who was truly interested in helpdesk or deskside support as a career and who enjoyed working with customers, and who only desired to hire someone who looked down on those roles as necessary evils to be passed over as quickly as possible en route to a more “rewarding” career destination.  I find this sad on many levels.  It implies that the hiring manager lacks empathy for other professionals and does not consider their individual desires or strengths.  It implies that the company in question is institutionalizing a system by which people are not hired to do something that they love, nor something that they are good at, but are hired only if willing to do a job role that they don’t want to do in the hopes of eventually doing one that they do want.  This rules out anyone actually qualified for the desired job, since those people will go straight into those positions.  It almost guarantees, as well, that end user support will be poor, as no one is hired who is specifically good at or interested in that role.  The hiring manager clearly sees end user support as not being a priority, and the entire idea is that anyone going into that role will “succeed” by moving out of it as quickly as possible, leaving end users with a lack of continuity and a never ending cycle of churn.  Believing that IT is a road and not a field has tangible, negative consequences.

Seeing IT careers as a direct path from point A to point B creates an inappropriate set of expectations as well.  It is not uncommon at all for someone to say that anyone with five years of experience in IT must know how to <insert somewhat common Windows desktop task here> based on nothing but their length of time working in IT, completely ignoring the possibility that they have never worked on Windows or in a role that would perform that task.  While Windows is common, many people working in IT have never held those roles and there is no reason to expect that such a specific task would be known automatically.  This goes beyond the already problematic attitude many people have that the tasks they personally did in a specific job role are the same tasks that everyone in that job role has done.  This is, of course, completely untrue.  A Windows system admin at one company and a Windows system admin at another company, or even just in another department, may do similar tasks or possibly completely different ones.  Even a decade in those roles may produce almost completely unique experiences and skills.  There is just so much potential in IT for doing different things that we cannot make specific task assumptions.

This assumptive process carries over to certifications and education as well.  While many fields succumb to the cliche that anyone above a certain level must have a college education, it is far less common for that assumption to hold true in IT.  Few fields find university training to be as optional as IT does, and remembering that alternative means of entering the field exist is critical.  Many of the best and brightest enter IT directly and not through an educational channel.  These candidates are often years ahead of their “educated” counterparts and often represent the most passionate, driven and capable pool of talent; they are almost certainly the most capable of self motivation and self education, which are both extremely important traits in IT.

Similarly, I was recently introduced for the first time to assumptions about certifications.  Certifications are specific to job roles; none apply broadly to all roles, and none make sense for someone to hold if a higher certification is held or if they never passed through that specific point in that specific job role.  The example that came up was a hiring manager who actually believed that anyone with ten years of experience would be expected to have both an A+ and a Network+ certification.  Both are entry level certifications and not relevant to the vast majority of IT careers (the A+ especially has little broad applicability, while the Network+ is more general but still effectively entry level.)  While it would not be surprising to find these held by a ten year IT veteran, it makes no sense whatsoever to use them as filters by which to rule out candidates.  Those certs are designed only to show rudimentary knowledge in specific IT career paths.  Anyone who has passed that point in their career without needing them would never go back and spend time and money earning entry level certifications while already at a career mid-point.  Once you have a PhD, you don’t go back and get another Associate’s degree just to show that you could have done it; the PhD is enough to demonstrate the ability to earn an entry level degree.  And most people with a significant history in the field will have passed the career point where those certs made sense years before the certs even existed (the Network+, for example, did not exist until I had been in IT for more than a decade already!)

I am particularly sensitive to this issue, both because I spent several years as a career counselor helping to put IT professionals on a path to career growth and development and because I myself did not take what is considered to be a conventional path into IT.  I was lucky enough to have interned in software development during my middle and high school years and was offered a position in UNIX support right out of high school.  I never passed through any Windows-centric roles, nor did I ever work on a helpdesk or do deskside support outside of a small amount of work in high end UNIX research labs.  My career took me in many different directions, but almost none followed the paths that so many hiring managers expect.  Attempting to predict the path that one’s career will take in the future is impossible.  Equally, attempting to determine what path must have been taken to have reached a current position is also impossible.  There are simply too many ways to get from point A to point B.

Embracing uniqueness in IT is important.  We all bring different strengths and weaknesses, different ideas and priorities, different goals and different things that we enjoy or despise doing.  The job that one person sees as a necessary evil another will love doing, and that passion for the role will show.  The passionate, career-focused helpdesk professional will bring an entirely different joie de vivre to the job than will someone who feels trapped doing an undesirable job until another opportunity comes along.  This doesn’t mean that the latter will not work hard and try their best, but there is little that can be done to compete with someone passionate about a specific role.

It is also very easy, when we look at IT as a singular path, to forget that individual roles, such as helpdesk, actually have progressions within the role itself.  Often many steps exist within specific roles.  In the case of a helpdesk it is common to refer to these as L0 through L3, and helpdesk team lead and helpdesk manager positions are common as well.  An entire career can be had just within the helpdesk sub-discipline of IT.  There is nothing wrong with entering IT directly into the role type that interests you.  There is also nothing wrong with achieving a place in your career where you are happy to stay.  Everyone has an ideal position, a career position where they both excel at what they do and are happy doing indefinitely.  In most fields, people actually strive to achieve this type of position somewhat early in their careers.  In IT, it is strangely uncommon.

There is a large amount of social pressure within IT to have “ambition” pushing you towards more and more challenging positions within the field.  Partially this is because IT is such an enormous and dynamic field that most people really do enter wherever opportunity presents itself and then attempt to maneuver themselves into positions that they find interesting over a period of many years.  This creates a culture of continuous change and advancement expectations, to some degree.  This is not entirely bad, but it often marginalizes or even penalizes people who manage to find their desired positions, especially if this happens early in their careers and even more so if it happens in a role that many people see as a “stepping stone” role, such as helpdesk or deskside support.  This is not good for individuals, for businesses or for the field in general.  It pushes people into roles where they are not happy and not well suited in order to satisfy social pressures rather than career aspirations or business needs.

Ambition is not necessarily a good thing.  It certainly is not a bad thing.  But too often hiring managers look for ambition when it is not in anyone’s interest.  Hiring someone young or inexperienced in the hopes that they grow over time and move into more and more advanced roles is an admirable goal and can work out great.  But avoiding hiring someone perfectly suited for a role because they will want to stay where they are well suited and where they excel makes no sense at all.  In an ideal world, everyone would be hired directly into the perfect position for them and no one would ever need to change jobs.  This is best for both the employees and the employer. It is rarely possible, but certainly should not be avoided when the opportunity presents itself.

Creating stereotypes and using them to judge IT professionals has negative consequences for everyone.  It increases stress, reduces career satisfaction, decreases performance and lowers the quality of IT service delivery while making it more expensive to provide.  It is imperative that we accept IT as a field, not as a road, and that we also accept that IT professionals are individuals with different goals, different career motivations and different ambitions.  Variety and diversity in IT are far more important than they are in most fields because IT is so large and requires so many different perspectives to perform optimally.  Unlike a road that travels a single, predictable path, a field allows you to wander in many directions and arrive at many different destinations.

IT Generalists and Specialists

IT professionals generally fall into two broad categories based on their career focus: generalists and specialists. These two categories carry far more differences than may at first appear, and moving between them can be extremely difficult once a career path has been embarked upon; often the choice to pursue one path or the other is made very early in a career.

There are many aspects that separate these two types of IT professionals; one of the most salient and misunderstood is the marketplace for the two skillsets. It is often assumed, I believe, that both types exist commonly throughout the IT market, but this is not true. Each commands its own areas.

In the small and medium business market, the generalist rules. There is little need for specialties, as there are not enough technical needs in any one specific area to warrant a full time staff member dedicated to them. Rather, a few generalists are almost always called upon to handle a vast array of technical concerns. This mentality also gives rise to “tech support sprawl,” where IT generalists are called upon to venture outside of IT to manage legacy telephones, electrical concerns, HVAC systems and even sprinklers! The jack of all trades view of the IT generalist is in danger of being taken much too far.

It should be mentioned, though, that in the SMB space the concept of a generalist is often one that remains semi-specialized. SMB IT is nearly a specialization on its own. Rather than touching nearly every technology area, it is more common for an SMB generalist to focus on a more limited subset. Typically an SMB generalist will be focused primarily on Windows desktop and server administration along with application support, hardware management and some light security. SMB generalists may touch nearly any technology, but the likelihood of doing so is generally rather low.

In the enterprise space, the opposite is true. Enterprise IT is almost always broken down by departments, each department handling very focused IT tasks. Typically these include networking, systems, storage, desktop, helpdesk, application specific support, security, datacenter support, database administration, etc. Each department focuses on a very specific area, possibly with even more specialization within a department. Storage might be broken up by block and file. Systems by Windows, mainframe and UNIX. Networking by switching and firewalls. In the enterprise there is a need for nearly all IT staff to be extremely deep in their knowledge of and exposure to the products that they support, while needing little understanding of products that they do not support, as they have access to abundant resources in other departments to guide them where there are cross interactions. This availability of other resources, and the departmental separation of duties, highlights the differences between generalists and specialists.

Generalists live in a world where “IT” as a whole is their domain to understand and oversee, potentially segmented by “levels” of difficulty rather than technological focus, and typically with a lack of specialized internal resources to turn to for help. Specialists live in a world of departmental division by technology, where there are typically many peers working at different experience levels within a single technology stack.

It is a rare SMB that would have anything but a generalist working there. It is not uncommon to have many generalists, even generalists who lean towards specific roles internally but who remain very general and lack a deep, singular focus. This fact can make SMB roles appear more specialized than they truly are to IT professionals who have only experienced the SMB space. It is not uncommon for SMB IT professionals to not even be aware of what specialized IT roles are like.

A good example of this is that job titles which are common and generally well defined for specialists in the enterprise space are often used accidentally or incorrectly for generalists, without anyone realizing that the job roles are specific. Specialist titles are often applied to generalist positions that are not truly differentiated.

Two exceptionally common examples are the network engineer and IT manager titles.  For a specialist, network engineer means a person whose full time, or nearly full time, job focus is the design and planning, and possibly implementation, of networks: the switching, routing, security, firewalling, monitoring, load balancing and the like, of the network itself.  They have no role in the design or management of the systems that use the network, only the network itself.  Nor do they operate or maintain the network; that is for the network administrator, who, again, only touches switches, routers, firewalls, load balancers and so forth, not computers, printers, servers and other systems.  It is a very focused title.  In the SMB it is common to give this title to anyone who operates any device on a network, often with effectively zero design or network responsibilities at all; the roles do not overlap.

Likewise, in the enterprise an IT manager is a management role in an IT department.  What an IT manager manages, like any manager, is people.  In the SMB this title may be used correctly, but it is far more common to find it applied to the same job role as network engineer – someone who has no human reports and manages devices on a network, like computers and printers.  Not a manager at all, but a generalist administrator.  Very different from what the title implies or how it is expected to be used in the large business and enterprise space.

Where specialists sometimes enter the SMB realm is through consultants and service providers who provide temporary, focused technical assistance to smaller firms that cannot justify maintaining those skills internally. Typical areas where this is common are storage and virtualization, where consultants will often design and implement core infrastructure components and leave the day to day administration to the in-house generalists.

In the enterprise the situation is very different. Generalists do exist but, in most cases, the generalization is beaten out of them as their careers take them down the path of one specialization or another. Entry level enterprise workers will often come in without a clear expectation of a specialization but over time find themselves falling into one quite naturally. Most, if not all, growth paths through enterprise IT require a deep specialization (which may mean focusing on management rather than technology.) Some large shops may provide cross training or exposure to different disciplines, but rarely is this extensively broad and it generally does not last once a core specialization is chosen.

This is not to say that enterprises and other very large shops do not have generalists; they do. At the highest echelons of enterprise IT, generalist roles begin to reemerge as new disciplines that are not seen lower in the ranks. These roles are often labeled differently, with titles such as architect, coordinator or, of course, CIO.

The reemergence of generalists at the higher levels of enterprise IT poses a significant challenge for an industry that does little to groom generalists. This forces the enterprise generalist to often “self-groom” – preparing themselves for a potential role by their own devices. In some cases, organic growth through the SMB channels can lead to an enterprise generalist, but this is extremely challenging due to the lack of specialization depth available in the majority of the SMB sector and a lack of demonstrable experience in the larger business environment.

These differences, which fall almost exclusively along SMB vs. enterprise lines, create a natural barrier, beyond business category exposure, to IT professionals migrating back and forth between larger and smaller businesses. The type of business and work experience is vastly different, and the technologies are dramatically different as well. Enterprise IT pros are often lost moving to an SMB, and SMB pros find that what felt like deep, focused experience in the SMB is very shallow in the enterprise. The two worlds operate differently at every level; outside of IT, the ability to move between them is far easier.

Enterprise IT carries the common titles that most people associate with IT career specialization: system administrator, network engineer, database administrator, application support, helpdesk, desktop support, datacenter technician, automation engineer, network operations center associate, project manager, etc. SMB titles are often confusing both inside and outside of the industry. It is very common for SMB roles to coopt specialization titles and apply them to roles that barely resemble their enterprise counterparts and do not match the expectations of the title at all, as I demonstrated earlier. This further complicates fluid movement between realms, as both sides become increasingly confused trying to understand how people and roles from the other realm relate to their own.  There are titles associated with generalists, such as the rather dated LAN Administrator, IT Generalist and architect titles, but their use in the real world is very rare.  The SMB struggles to define meaningful titles and has no means by which to apply or enforce them across the sector.  This lack of clear definition will continue to plague both the SMB and generalists, who have little ability to easily convey the nature of their job role or career path.

Both career paths offer rewarding and broad options, but the choice between them does play a rather significant role in deciding the flavor of a career.  Generalists, beyond gravitating towards smaller businesses, will also likely pick up an industry specialization over time as they move into higher salary ranges (manufacturing, medical, professional services, legal, etc.)  Specialists will find that their focus is on their technology, and their focus on a market will be less.  Generalists will find it easier to find work in any given local market; specialists will find that they often need to move to major markets, and potentially only the core markets will provide great growth opportunities, but within those markets mobility and career flexibility will be very good.  Generalists have to work hard to keep up with a broad array of technologies and changes in the market.  Specialists will often have deep vendor resources available to them and will find the bulk of their educational options come directly from the vendors in their focus area.

It is often personality that pushes young IT professionals into one area or the other.  Specialists are often those who love a particular aspect of IT and not others, who want to avoid certain types of IT work, or who look at IT as a predetermined career plan.  Generalists often come from the ranks of those who love IT as a whole and fear being stuck in just one area when there are so many aspects to explore.  Generalists are also far more likely to have “fallen into” IT rather than having entered the field with a strategic plan.

Understanding how each approaches the market, and how the markets approach IT professionals, gives the IT professional an opportunity to assess what it is that they like about their field, to make good career choices that keep them happy and motivated, and to plan in order to maximize the impact of their career decisions.  Too often, for example, a small business generalist will attempt a specialization focus, very commonly enterprise Cisco networking, that has almost no potential value in the marketplace where their skills and experience are focused.  Professionals doing this will often find their educational efforts wasted and be frustrated that the skills they have learned go unused and atrophy, while also being frustrated that gaining highly sought skills does not appear to contribute to new job opportunities or salary increases.

There is, of course, opportunity to move between generalist and specialist IT roles.  But the more experience a professional gains in one area or the other, the more difficult it becomes to make a transition, at least without suffering a dramatic salary loss in order to do so.  Early in an IT career there is relatively high flexibility to move between these areas, at the point where the broadening of generalization is minimal or the deep technical skills of specialization are not yet obtained.  Entry level positions in both areas are effectively identical and there is little differentiation in career starting points.

Greater perspective on IT careers gives everyone in the field more ability and opportunity to pursue and achieve the IT career that will best satisfy their technical and personal work needs.

Hello, 1998 Calling….

Something magic seems to have happened in the Information Technology profession somewhere around 1998.  I know, from my own memory, that the late 90s were a special time to be working in IT.  Much of the architecture and technology that we have today stems from this era.  Microsoft moved from their old DOS products to Windows NT based, modern operating systems.  Linux became mature enough to begin appearing in business.  Hardware RAID became common, riding on the coattails of Intel’s IA32 processors as they finally began to become powerful enough for many businesses to use seriously in servers.  The LAN became the business standard and all other models effectively faded away.  The Windows desktop became the one and only standard for regular computing, and Windows servers were rapidly overtaking Novell as the principal player in LAN-based computing.

What I have come to realize over the last few years is that a large chunk of the communal wisdom of the industry appears to have been adopted during these formative and influential years of the IT profession and has since passed into myth.  Much like the teachings of Aristotle, who was for millennia considered the greatest thinker of all time and not to be questioned – stymieing scientific thought and providing a cornerstone for the dark ages.  A foundation of “rules of thumb” used in IT has been passed from mentor to intern, from professor to student, from author to reader over the past fifteen or twenty years, much of it learned by rote and treated as infallible truth of computing without any thought going into the reasoning and logic behind the initial decisions.  In many cases so much time has come and gone that the factors behind the original decisions are lost or misunderstood, as those hoping to understand them today lack firsthand knowledge of computing from that era.

The codification of IT in the late nineties happened on an unprecedented scale, driven primarily by Microsoft’s sudden lurch from lowly desktop maker to server and LAN ecosystem powerhouse.  When Microsoft made this leap with Windows NT 4 they reinvented the industry, a changing of the guard, with an entirely new generation of SMB IT pros being born and coming into the industry right as this shift occurred.  These were the years leading up to the Y2K bubble, with the IT industry swelling its ranks as rapidly as it could find moderately skilled, computer-interested bodies.  This meant that everything had to be scripted (steps written on paper, that is) and best practices had to be codified to allow those with less technical backgrounds and training to work.  A perfect environment for Microsoft and the never before seen friendliness of their NT server product.  All at once the industry was full of newcomers without historical perspective, without training and experience, and with easy to use, graphical servers accessible to anyone.

Microsoft leapt at the opportunity and created a tidal wave of documentation, best practices and procedures to allow anyone to get basic systems up and running quickly, easily and, more or less, reliably.  To do this they needed broad guidelines that were applicable in nearly all common scenarios, they needed them written in clear, published form, and they needed to guarantee that the knowledge was being assimilated.  Microsoft Press stepped in with the official publications of the Microsoft guidelines, and right on their heels Microsoft’s MCSE program came into the spotlight, totally changing the next decade of the profession.  There had been other industry certifications before the MCSE, but the Windows NT 4 era and the MCP / MCSE certification systems were the game changing events of the era.  Soon everyone was being boot camped through certification, quickly memorizing Microsoft best practices and recommendations, learning them by rote and getting certified.

In the short term, the move did wonders, providing Microsoft an army of minimally skilled, but skilled nonetheless, supporters whose own academic interests aligned with Microsoft’s corporate interest, forming a symbiotic relationship that completely defined the era.  Microsoft was popular because nearly every IT professional was trained on it, and nearly every IT professional encouraged the adoption of Microsoft technologies because they had been trained and certified on it.

The rote guidelines of the era touched many aspects of computing; many probably remain unidentified to this day, so strong was the pressure that Microsoft (and others) put on the industry at the time.  Most of today’s concepts of storage and disk arrays, filesystems, system security, networking, system architecture, application design, memory, swap space tuning and countless others arose during this era and passed, rather quickly, into lore.  At the time we were aware that these were simply rules of thumb, subject to change just as they always had been as the industry changed.  Microsoft, and others, tried hard to make clear what underlying principles created the rules of thumb.  It was not their intention to create a generation having learned by rote, but it happened.

That generation went on to become the effective founding fathers of modern LAN management.  In the small and medium business space the late 1990s represented the end of the central computer and remote terminals design; the Internet became ubiquitous (providing the underpinnings for the extensive propagation of the guidelines of the day); Microsoft washed away the memory of Novell and LANtastic; Ethernet over twisted pair completely abolished all competing LAN networking technologies; TCP/IP beat out all layer three networking competitors; and more.  Intel’s IA32 processor architecture began to steal the thunder from the big RISC processors of the previous era and from the obscure sixteen and thirty two bit processors that had been attempting to unseat Intel for generations.  The era was defining to a degree that few who have come since will ever understand.  Dial up networking gave way to always-on connections.  Disparate networks that could not communicate with each other lost to the Internet and a single, global networking standard.  Vampire taps and hermaphrodite connectors gave in as RJ45 connectors took to the field.  The LAN of 1992 looked nothing like the LAN of 1995.  But what we use today, while faster and better polished, is effectively identical to the computing landscape as it stood by around 1996.

All of this momentum, whether intentional or accidental, created an unstoppable force of myth driving the industry.  Careers were built on this industry wisdom, taught around the campfire at night.  One generation clung to its established beliefs, no longer knowing why it trusted those guidelines or whether they still applied; another was taught them with little way to know that it was receiving distilled rules of thumb meant to be taught alongside their background knowledge and reasoning, rules designed not only for a very specific era, roughly 1996 to 1999, but in a great many cases for very specific implementations or products, generally Windows 95 and Windows NT 4 desktops and Windows NT 4 servers.

Today this knowledge is everywhere.  Ask enough questions and even young professionals still at university or doing a first internship are likely to have heard at least a few of the more common nuggets of conventional IT industry wisdom.  Sometimes the recommendations, applied today, are nearly benign, representing little more than inefficiency or performance waste.  In other cases they represent rather extreme degrees of bad practice, carrying significant risk today.

It will be interesting to see just how long the late 1990s continue to so vastly influence our industry.  Will the next generation of IT professionals finally issue a broad call for deep understanding and question the rote learning of past eras?  Will misunderstood recommendations still be commonplace in the 2020s?  At the current pace, it seems unlikely that the thinking of the industry will change significantly prior to 2030.  IT has been attempting to move from its wild west days, when everyone distilled raw knowledge into practical terms on their own, to large scale codification like similar fields such as civil or electrical engineering.  But the rate of change, while tremendously slowed since the rampant pace of the 70s and 80s, remains so high that the knowledge of one generation is nearly useless to the next, and only broad patterns, approaches and thought processes have great value to be taught from mentor to student.  We may easily face another twenty years of the wild west before things begin to really settle down.