Category Archives: Career

Make Your Business Jealous

I have, in the past, discussed how home technology infrastructures and lab environments are among the keys to personal career success and how IT practitioners should set a high bar in their own lives and hold their businesses accountable to an even higher one.  I often speak to IT practitioners looking for ways to improve their education, skill set and résumé.  I propose an approach to building your home network and lab – “Make Your Business Jealous.”

I mean it quite literally.  Why not approach your home network as an opportunity to raise the bar, both on your own experience and on what you expect from businesses?  The approach is an excellent one for changing how you think about your home network and its goals.  Don’t just use your home to learn new skills in a “checkmark” fashion, such as “learned Active Directory management.”  That is an excellent and obvious use of your home network, but we should consider taking this even further.

In many ways, this applies far more to those IT practitioners who work in small and medium businesses, where it is common to cut corners, eschew best practices, leave work half done and not approve realistic budgets, but it is valuable for anyone.  You can apply this to the smallest of features in your own home – the physical aspects like cabling, labeling and organization – and scale it up to the very large: servers, services, security, backup and so on.

For a real world example, we should begin with the simplest component, cabling.  Few people, even IT practitioners, take the time to cable their homes for data, let alone to cable them well.  This is a big missed opportunity.  Cabling is not only utilitarian but ends up adding value to the house as well.  And many businesses do a very poor job of this themselves.  Even though cabling is not strictly an IT discipline, it is a fringe area of electrical support that is related to IT and well worth using as a physically visible showcase of your home environment.

Good cabling at home, since it is your home and presumably there is nearly unlimited time to do things well, can really be taken to an extreme.  I recommend using forward-looking cable, CAT 6 or better, so that you can handle flawless GigE today and faster speeds in the future.  You do not want your home infrastructure to become dated unnecessarily.  Once you are putting in the effort, it is all about doing it “right”.  This is a chance not just to run a few cables but to implement a complete cabling plant with over-provisioned cable runs to all parts of the house.  Of course you can do this in stages, doing just a room or a few at a time.

In my own home projects I ran four cable runs to nearly all rooms, with some, like the master bedroom and living room, getting more like six or eight.  This may sound like a lot, but for a well wired home it is not at all, as we will see.  You want to run more cabling than you are ever likely to use now, while putting in the effort, both because it is good practice and because it is simply impressive.  Having extra cabling means an easier time keeping things organized and more flexibility to change things in the future.

Run the cables in the walls, up through an attic or down through the basement – whatever is appropriate – with well managed cable runs.  In my own house this was the attic, with large “J” hooks keeping cables in place.  Be sure to label all cabling very clearly.  This is another chance to go above and beyond.  Install nice wall jacks and label every jack and every cable in every jack.  Make the physical cabling plant as organized, impressive and manageable as possible.  All it really takes is a little time and effort and you can have something that you are really proud of and that you want to show off.

Where the cables run to, of course, is a matter for some discussion.  Maybe you will want to have a networking closet or even a server room.  Having a good place to store your home networking gear can be a wonderful addition to a home lab.  A patch panel for your cable runs is ideal.  Do everything “right”, just like you would find in a dream office environment.

Good cabling will allow us to move as much as possible from wireless to wired in the home environment.  Wired connections are faster, more stable, require less management, are potentially more secure and improve the performance of the remaining wireless devices by lowering the wireless saturation levels in the home.  If you are like me, your home network consists of many devices that have regular Internet access and are often wireless but can be wired, such as desktops, video game consoles, Internet television devices, smart televisions and more.  Many or most of these devices can easily be removed from wireless, which can only be beneficial.  Save the wireless for mobile devices.

A cabling plant, of course, needs something to attach to.  A good switch will be the true heart of any home network, or any network at all.  If you have a large home you could easily need a forty-eight port switch or even need to look at stacked switches.  A small apartment might use sixteen ports.  Size will vary by need.

In acquiring a switch it is a good time to consider not just size but features.  Adding PoE (Power over Ethernet) now is ideal, allowing for even cleaner cable management and a yet more impressive network.  Wait until you see what else we consider adding to our home network that might leverage PoE.

Also, this is a time when we can consider more advanced features on the switch.  Rather than just using a traditional unmanaged switch we can look at nice, rackmount switches that are either smart (web managed) or fully managed, which is excellent for getting broader switch experience.  We might want to add a guest VLAN, for example, for visitors who need Internet access.  If you have a guest bedroom in the house, maybe those Ethernet ports should be VLAN’d as guest access all of the time.

You are going to want to monitor that closet too, no doubt.  Once you start to put nice equipment in there you will want to keep it safe.  Perhaps temperature and moisture sensors that report over the network?
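As a small illustration of that idea, a script along these lines could poll a network-attached closet sensor and log a warning when readings drift out of range.  This is only a hedged sketch – the sensor address and JSON field names are assumptions, not any particular product’s API:

```python
# Minimal sketch: poll a hypothetical network temperature/humidity sensor
# in the wiring closet and warn when readings go out of range.
# The sensor URL and JSON field names are assumptions for illustration.
import json
import logging
import time
import urllib.request

SENSOR_URL = "http://192.168.10.50/status"   # hypothetical sensor endpoint
MAX_TEMP_C = 30.0
MAX_HUMIDITY = 60.0

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

def read_sensor(url):
    """Fetch the sensor's JSON status document."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def check_once():
    data = read_sensor(SENSOR_URL)
    temp, humidity = data["temperature_c"], data["humidity_pct"]
    if temp > MAX_TEMP_C or humidity > MAX_HUMIDITY:
        logging.warning("Closet out of range: %.1f C, %.1f%% RH", temp, humidity)
    else:
        logging.info("Closet OK: %.1f C, %.1f%% RH", temp, humidity)

if __name__ == "__main__":
    while True:
        try:
            check_once()
        except Exception as exc:   # sensor unreachable, malformed response, etc.
            logging.error("Sensor check failed: %s", exc)
        time.sleep(300)            # poll every five minutes
```

Run from a small always-on box in that very closet, even something this modest starts to look like the environmental monitoring you would expect of a proper server room.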

Most home networks live and die on their wireless capabilities.  This is a place to really shine.  Instead of cheap “all in one” networking gear with anemic wireless that is stuck in the most useless corner of the house, you can use your shiny, new PoE cabling plant to put high quality, commercial wireless access points in the most useful points of your home (and consider placing them on the grounds too, if you own multiple buildings or open space).  Centrally managed units are very affordable, can make for a very fast, robust wireless network and can make having guest wireless access very easy as well.

Next to consider, since we have such a robust cabling system in place already, is security.  Why not add a camera or two to watch the outside of your house or the front door?  You can use PoE cameras that are centrally managed.  Look for business class solutions, of course, not consumer cameras and software available at the local store.

One of the bigger and more interesting home projects to consider is a full scale VoIP PBX.  This can be a great and interesting project, and one of the few really good uses for running a server at home that will actually be used as a “production” service.  A home VoIP PBX can make it easy to have separate extensions for family members, rooms or purposes.  You can have features like individual voicemail mailboxes, a house intercom, a front door intercom, room to room calling, wake up calls, video conferencing, free calling for other family members (outside of the home), guest lines, multiple lines for simultaneous calling and the ability to make and take calls while traveling!

Once we have a PBX in the home, installing physical phones throughout the house, on PoE of course, is the next step.  Phones can be very useful around a home, especially a larger one.  Having real phones to manage can be very educational and certainly lets you take your home to another level of IT.

No server closet would be complete without features like a domain controller, a home web server (why not have a guest website for visitors and a wiki for tracking your home projects and information) and the biggest of home systems – storage.  Traditional storage like a NAS or file server can be very useful for storing private photos and video, music and document collections, movies and other media.  DLNA streaming can make an audio and video library available to the entire house.  Traditional protocols such as SMB and NFS can provide high speed, protected mapped drives available to the computers in the home.  And more modern storage techniques like “cloud storage” can be hosted as well.

Of course all of those workloads can be virtualized and run on a server (or two) in the server closet.  If you are incredibly ambitious this could include features like high availability or fault tolerance, although these will generally push costs into a range often impractical for home use by nearly any standard.

And the pièce de résistance is, of course, backups.  Use real backup software; several enterprise options are even free for home-scale use.  Taking good backups, testing restores, and using different media, backup strategies and backup types (such as image and file-based backups) can really showcase the reliability of your home network.
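To make the “test your restores” point concrete, here is a minimal sketch of a file-based backup with a restore verification step.  The paths are hypothetical and this is not a substitute for real backup software, which adds scheduling, retention, encryption and much more:

```python
# Minimal sketch of a file-based backup with a restore test.
# Paths are hypothetical; real backup software adds scheduling,
# retention, encryption and much more.
import hashlib
import tarfile
import tempfile
from datetime import date
from pathlib import Path

SOURCE = Path("/home/shared/documents")                      # hypothetical data to protect
ARCHIVE = Path(f"/mnt/backup/documents-{date.today()}.tar.gz")  # hypothetical backup target

def sha256(path):
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_backup():
    """Create a compressed archive of the source directory."""
    with tarfile.open(ARCHIVE, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)

def verify_backup():
    """Restore to a scratch directory and compare checksums with the source."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(ARCHIVE, "r:gz") as tar:
            tar.extractall(scratch)
        for original in SOURCE.rglob("*"):
            if original.is_file():
                restored = Path(scratch) / SOURCE.name / original.relative_to(SOURCE)
                if not restored.is_file() or sha256(original) != sha256(restored):
                    return False
    return True

if __name__ == "__main__":
    take_backup()
    print("Restore test passed" if verify_backup() else "Restore test FAILED")
```

The habit it illustrates is the important part: a backup only counts as good once a restore has actually been exercised.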

Don’t forget to go beyond running systems into monitoring.  Log collection and analysis, bandwidth monitoring, system monitoring, load monitoring and more can be added for completeness.  Think of all the things that exist, or that you wish would exist, in an ideal office setting.  There is rarely any reason not to bring these same technologies into your home.
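Monitoring does not have to start big, either.  A hedged sketch like the following – the host names and ports are assumptions about a home network such as the one described above – simply checks that key services answer and writes the results to a log that a collector could later pick up:

```python
# Minimal sketch: poll a few home services over TCP and log up/down status.
# The host/port list is an assumption; a real deployment would feed this
# into proper monitoring and log-collection tooling.
import logging
import socket

CHECKS = [
    ("nas.home.lan", 445),        # SMB file server (hypothetical host name)
    ("pbx.home.lan", 5060),       # VoIP PBX SIP port
    ("wiki.home.lan", 443),       # home web server
]

logging.basicConfig(
    filename="home-monitor.log",  # hypothetical log file a collector could ingest
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in CHECKS:
        status = "UP" if port_open(host, port) else "DOWN"
        logging.info("%s:%d %s", host, port, status)
```

From a starting point like this, graduating to real log collection, graphing and alerting tools is a natural next step.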

Beyond these steps there are many directions in which one could take a home network – features that might be interesting to you.  Go crazy.

Our goal here is to raise the bar.  Do at home what few businesses do.  Building an amazing home network, one that is really used, beyond building a great home IT lab, is valuable for many reasons.  A great home network is more than just an amazing learning experience: it makes for a perfect interview conversation starter, it is a “portfolio” piece demonstrating skills in cradle-to-grave LAN design and management, it shows dedication and initiative to the field and it sets a bar to be used when speaking to businesses.

Go ahead, don’t be afraid to make your business jealous of your home network.

Better IT Hiring: Contract To Hire

Information Technology workers are bombarded with “Contract to Hire” positions, often daily.  There are reasons why this method of hiring and working is fundamentally wrong, and while workers immediately identify these positions as bad choices to make, few really take the time to move beyond emotional reaction to understand why this working method is so flawed and, more importantly, few companies take the time to explore why using tactics like this undermines their staffing goals.

To begin we must understand that there are two basic types of technology workers: consultants (also called contractors) and permanent employees (commonly known as FTEs).  Nearly all IT workers desire to be in one of these two categories.  Neither is better or worse; they are simply two different approaches to employment engagements and represent differences in personality, career goals, life situations and so forth.  Workers do not always get to work the way that they desire, but basically all IT workers seek to be in either one camp or the other.

Understanding the desires and motivations of IT workers seeking to be full time employees is generally very easy to do.  Employees, in theory, have good salaries, stable work situations, comfort, continuity, benefits, vacations, protection and so forth.  At least this is how it seems; whether these aspects are real or just illusory can be debated elsewhere.  What is important is that most people understand why people want to be employees, but the opposite is rarely true.  Many people lack empathy for those seeking not to be employees.

Understanding professional or intentional consultants can be difficult.  Consultants live a less settled life but generally earn higher salaries and advance in their careers faster, see more diverse environments, get a better chance to learn and grow, are pushed harder and have more flexibility.  There are many factors which can make consulting or contracting intentionally a sensible decision.  Intentional contracting is very often favored by younger professionals looking to grow quickly and gain experience that they otherwise could not obtain.

What makes this matter more confusing is that the majority of workers in IT wish to work as full time employees but a great many end up settling for contract positions to hold them over until a desired full time position can be acquired.  This situation arises so commonly that a great many people both inside and outside of the industry, and on both sides of the interview table, may mistakenly believe that all cases are this way and that consulting is a lower or lesser form of employment.  This is completely wrong.  In many cases consulting is highly desired and contractors can benefit greatly from their choice of engagement methodology.  I, myself, spent most of my early career, around fifteen years, seeking only to work as a contractor and had little desire to land a permanent post.  I wanted rapid advancement, opportunities to learn, chances to travel and variety.

It is not uncommon at all for the desired mode of employment to change over time.  It is most common for contractors to seek to move to full employment at some point in their careers.  Contracting is often exhausting and harder to sustain over a long career.  But certainly full time employees sometimes choose to move into a more mobile and adventurous contracting mode as well.  And many choose to work only one style or the other for the entirety of their careers.

Understanding these two models is key.  What does not fit into this model is the concept of a Contract to Hire.  This hiring methodology starts by hiring someone willing to work a contract position and then, sometimes after a set period of time and sometimes after an indefinite one, promises a second determination as to whether said team member should be “converted” into an employee or let go.  This does not match up well against the two types of workers.  Neither type wants to start as one thing and then become another.  Possibly somewhere there is an IT worker who would like to work as a contractor for four months and then become an employee, getting benefits only after a four month delay, but I am not aware of such a person and it is reasonable to assume that if such a person exists he is unique, has already done this process once and would not want to do it again.

This leaves us with two resulting models to match into this situation.  The first is the more common model of an IT worker seeking permanent employment and being offered a Contract to Hire position.  For this worker the situation is not ideal: the first four months represent a jarring, complex and scary situation that lacks the benefits and stability that are needed, and the second decision point as to whether the conversion will be offered is frightening.  The worker must behave and plan as if there will be no conversion and must be actively seeking other opportunities during the contract period, opportunities that are pure employment from the beginning.  If there were any certainty of a position becoming a full employment one then there would be no contract period at all.  The risk to the employee that no conversion will be offered is exceptionally high.  In fact, conversions are almost unheard of in the industry.

It must be noted that, for most IT professionals, the idea that a Contract to Hire will truly offer a conversion at the end of the contract duration is so unlikely that it is generally assumed that the enticement of the conversion process is purely a fake one and that there is no possibility of it happening at all.  And for reasons we will discover here it is obvious why companies would not honestly expect to attempt this process.  The term Contract to Hire spells almost certain unemployment for IT workers going down that path.  The “to Hire” portion is almost universally nothing more than a marketing ploy and a very dishonest one.

The other model that we must consider is the model of the contract-desiring employee accepting a Contract to Hire position.  In this model we have the better outcome for both parties.  The worker is happy with the contract arrangement and the company is able to employ someone who is happy to be there and not seeking something that they likely will be unable to get.  In cases where the company was less than forthcoming about the fact that the “to Hire” conversion would never be considered this might actually even work out well, but is far less likely to do so long term and in repeating engagements than if both parties were up front and honest about their intentions on a regular basis.  Even for professional contractors seeing the “to Hire” addendum is a red flag that something is amiss.

The result for a company of obtaining an intentional contractor via a Contract to Hire posting, however, is risky.  For one, contractors are highly mobile and are skilled and trained at finding other positions.  They are generally well prepared to leave a position the moment that the original contract is done.

One reason that the term Contract to Hire is used is so that companies can easily “string along” someone desiring a conversion to a full time position, dangling the conversion like a carrot and prolonging the contract indefinitely.  Intentional contractors will see no carrot in this arrangement and will normally be prepared to leave immediately upon completion of their contract time; they can leave without any notice, as they simply need not renew their contract, leaving the company in the lurch of its own making.

Even in scenarios where an intentional contractor is offered a conversion at the end of a contract period there is the very real possibility that they will simply turn down the conversion.  Just as the company maintains the right to not offer the conversion, the IT worker maintains an equal right to not agree to offered terms.  The conversion process is completely optional by both parties.  This, too, can leave the company in a tight position if they were banking on the assumption that all IT workers were highly desirous of permanent employment positions.

This may be the better situation, however.  Potentially even worse is an intentional contractor accepting a permanent employment position when they were not actually desiring an arrangement of that type.  They are likely to find the position to be something that they do not enjoy, or else they would have been seeking such an arrangement already, and will be easily tempted to leave for greener pastures very soon – defeating the purpose of having hired them in the first place.

The idea behind the Contract to Hire movement is the mistaken belief by companies that they hold all of the cards and that IT workers are all desperate for work and thankful to find any job that they can.  This, combined with the incorrect assumption that nearly all IT workers truly want stable, traditional employment as full time employees, makes for a very bad hiring situation.

Based on this, a great many companies attempt to leverage the Contract to Hire term in order to lure more and better IT workers to apply based on false promises or poor matching of employment values.  It is seen as a means of lowering cost, testing out potential employees, hedging bets against future head count needs, etc.

In a market where there is a massive oversupply of IT workers, a tactic such as this may actually pay off.  In the real world, however, IT workers are in very short supply and everyone is aware of the game that companies play and what this term truly means.

It might be assumed that IT workers would still consider taking a Contract to Hire position because they are willing to take on some risk and hope to convince the employer that conversion, in their case, would be worthwhile.  And certainly some companies do follow through on this process, and for some people it has worked out well.  However, it should be noted that any contract position offers the potential of a conversion offer, and in positions where the term “Contract to Hire” is not used, conversions – or at least offers of conversion – are actually quite common.  It is specifically when a potential future conversion is dangled like a carrot that conversions become exceptionally rare.  There is no need for an honest company and a quality workplace to mention “to Hire” when bringing on contractors.

What happens, however, is more complex and requires study.  In general the best workers in any field are those that are already employed.  It goes without saying that the better you are, the more likely you are to be employed.  This does not mean that great people never change jobs or find themselves unemployed, but the better you are the less time you will spend seeking employment from a position of being unemployed, and the worse you are the more likely you are to be unemployed involuntarily.  That may seem obvious, but when you combine it with other information that we have, something is amiss.  A Contract to Hire position can never, effectively, entice currently working people in any way.  A great offer of true, full time employment with better pay and benefits might entice someone to give up an existing position for a better one; that happens every day.  But good people generally have good jobs and are not going to give up the positions that they have, and the safety and stability that come with them, to join an unknown situation that only offers a short term contract with an almost certain no-chance conversion carrot.  It just is not going to happen.

Likewise, when good IT workers are unemployed they are not very likely to be in a position of desperation, and even then are very unlikely to even look at a position listed as Contract to Hire (or contract at all), as most people want full time employment and good IT people will generally be far too busy turning down offers to waste time looking at Contract to Hire positions.  Good IT workers are flooded with employment opportunities and being able to quickly filter out those that are not serious is a necessity.  The words “Contract to Hire” are one of the lowest hanging fruits of this filtering process.  You do not need to see what company it is, what region it is in, what the position is or what experience they expect.  The position is not what you are looking for; move along, nothing to see here.

The idea that employers seem to have is the belief that everyone, employed and unemployed IT workers alike, is desperate and thankful for any possible job opening.  This is completely flawed.  Most of the industry is doing very well and there is no way to fill all of the existing job openings that we have today; IT workers are in demand.  Certainly there is always a certain segment of the IT worker population that is desperate for work for one reason or another – personal situations, geographic ties, an overstaffed technology specialization or, most commonly, not being very competitive.

What Contract to Hire positions do is filter out the best people.  They effectively filter out every currently employed IT worker completely.  Those with in-demand skills (like Linux, storage, cloud and virtualization) will be sorted out too; they are too able to find work anywhere to consider poor offerings.  Highly skilled individuals, even when out of work, will self filter as they are looking for something good, not just anything that comes along.

At the end of the day, the only people seriously considering Contract to Hire positions in any number, often even to the point of being the only ones willing to respond to postings, are the truly desperate: either those with so little experience that they do not realize how foolish the concept is or, more commonly by far, those that are long out of work, have few prospects and feel that the incredible risks and low quality of work associated with Contract to Hire are acceptable.

This hiring problem begins a vicious loop of low quality, if one did not already exist – and most likely issues with quality will already exist before a company considers a Contract to Hire tactic.  Once good people begin to avoid a company, and this will happen even if only some positions are Contract to Hire because the quality of the hiring process is exposed, the quality of those able to be hired will begin to decline.  The worse it gets, the harder it is to turn the ship around.  Good people attract good people.  Good IT workers want to work with great IT workers to mentor them, to train them and to provide places where they can advance by doing a good job.  Good people do not seek to work in a shop staffed by the desperate, both because working only with desperate people is depressing and the quality of work is very poor, and because once a shop gains a poor reputation it is very hard to shake and good people will be very wary of having their own reputation tarnished by having worked in such a place.

Contract to Hire tactics signal desperation and a willingness to admit defeat on the part of an employer.  Once a company sinks to this level with their hiring they are no longer focusing on building great teams, acquiring amazing talent or providing a wonderful work environment.  Contract to Hire is not always something that every IT professional can avoid all of the time; all of us have times when we have to accept something less than ideal.  But it is important for all parties involved to understand their options and just what it means when a company moves into this mode.  Contract to Hire is not a tactic for vetting potential hires; it simply does not work that way.  Contract to Hire causes companies to be vetted and filtered out of consideration by the bulk of potential candidates without those metrics ever being made available to hiring firms.  Potential candidates simply ignore them and write them off, sometimes noting who is hiring this way and avoiding them even when other options come along in the future.

As a company, if you desire to have a great IT department and hire good people, do not allow Contract to Hire to ever be associated with your firm.  Hire full time employees and hire intentional contractors, but do not play games with dangling false carrots in the hope that contractors will change their personalities or that full time employees will take huge personal risks for no reason; that is simply not how the real world works.

On DevOps and Snowflakes

One can hardly swing a proverbial cat in IT these days without hearing people talking about DevOps.  DevOps is the hot new topic in the industry picking up from where the talk of cloud left off and to hear people talk about it one might believe that traditional systems administration is already dead and buried.

First we must talk about what we mean by DevOps.  This can be confusing because, like cloud, an older term is often being stolen to mean something different or, at best, related to something that already existed.  Traditional DevOps was the merging of developer and operational roles.  In the 1960s through the 1990s, this was the standard way of running systems.  In this world the people who wrote the software were generally the same ones who deployed and maintained it.  Hence the merging of “developer” and “operations”, operations being a semi-standard term for the role of system administrator.  These roles were not commonly separated until the rise of the “IT Department” in the 1990s and the 2000s.  Since then, the return to the merging of the two roles has started to rise in popularity again primarily because of the way that the two can operate together with great value in many modern, hosted, web application situations.

Where DevOps is often talked about today is not as a strict merging of the developers and the operations staff but as a modification of the operations staff, with a much higher focus on coding – not on the application itself but on defining application infrastructures as code, as a natural extension of cloud architectures.  This can be rather confusing at first.  What is important to note is that traditional DevOps is not what is commonly occurring today but a new “fake” DevOps where developers remain developers and operations remains operations, but operations has evolved into a new “code heavy” role that continues to focus on managing servers running code provided by the developers.

What is significant today is that the role of the system administrator has begun to diverge into two related, but significantly different roles, one of which is improperly called DevOps by most of the industry today (most of the industry being too young to remember when DevOps was the norm, not the exception and certainly not something new and novel.)  I refer to these two aspects of the system administrator role here as the DevOps and the Snowflake approaches.

I use the term Snowflake to refer to traditional architectures for systems because each individual server can be seen as a “unique Snowflake.”  They are all different, at least insofar as they are not somehow managed in such a way as to keep them identical.  This doesn’t mean that they have to be all unique, just that they retain the potential to be.  In traditional environments a system administrator will log into each server individually to work on them.  Some amount of scripting is common to ease administration tasks but at its core the role involves a lot of time working on individual systems.

Easing administration of Snowflake architectures often involved attempts to minimize differences between systems in reasonable ways.  This generally starts with things like choosing a single standard operating system and version (Windows 2012 R2 or Red Hat Enterprise Linux 7) rather than allowing every server installation to be a different OS or version.  This standardization may seem basic but many shops lack this standardization even today.

A next step is commonly creating a standard deployment methodology or a gold master image that is used for building all systems, so that the base operating system and all base packages – often including system customization, monitoring packages, security packages, authentication configuration and similar modifications – are standard and deployed uniformly.  This provides a common starting point for all systems to minimize divergence.  But technically these steps only ensure a standard starting point, and over time divergence in configuration must be anticipated.

Beyond these steps, Snowflake environments typically use custom, bespoke administration scripts or management tools to maintain some standardization between systems over time.  The more commonalities that exist between systems the easier they are to maintain and troubleshoot and the less knowledge is needed by the administration staff.  More standardization means fewer surprises, fewer unknowns and much better testing capabilities.
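For a sense of what such a bespoke script often looks like in practice, here is a minimal sketch of a drift check.  The host names are invented, and key-based SSH access to RPM-based systems is assumed:

```python
# Minimal sketch of a Snowflake-style drift check: ask each host which
# version of a package it runs and report any divergence.
# Host names are hypothetical and key-based SSH access is assumed.
import subprocess

HOSTS = ["web01", "web02", "db01"]   # hypothetical server list
PACKAGE = "openssl"

def package_version(host):
    """Query the installed RPM version of PACKAGE over SSH."""
    result = subprocess.run(
        ["ssh", host, "rpm", "-q", "--qf", "%{VERSION}", PACKAGE],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip() if result.returncode == 0 else "NOT INSTALLED"

if __name__ == "__main__":
    versions = {host: package_version(host) for host in HOSTS}
    for host, version in versions.items():
        print(f"{host}: {PACKAGE} {version}")
    if len(set(versions.values())) > 1:
        print("WARNING: package versions have diverged across hosts")
```

Scripts like this ease the pain, but they only report divergence after the fact; preventing it from accumulating in the first place is the hard part.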

In a single system administrator environment with good practices and tooling, Snowflake environments can take on a high degree of standardization.  But in environments with many system administrators, especially those supported around the clock from many regions, and with a large number of systems, standardization, even with very diligent practices, can become very difficult.  And that is even before we tackle the obvious issues surrounding the fact that different packages and possibly package versions are needed on systems that perform different roles.

The DevOps approach grows organically out of the cloud architecture model.  Cloud architecture is designed around automatically created and automatically destroyed, broadly identical systems (at least in groups) that are controlled through a programmatic interface or API.  This model lends itself, quite obviously, to being controlled centrally through a management system rather than through the manual efforts of a system administrator.  Manual administration is effectively impossible and completely impractical under this model.  Individual systems are not unique like in the Snowflake model and any divergence will create serious issues.
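To make “controlled through a programmatic interface or API” concrete, here is a hedged sketch using AWS’s boto3 library; the AMI ID, key pair name and tag values are placeholders and credentials are assumed to already be configured:

```python
# Minimal sketch: create a server instance through a cloud API (AWS EC2 via
# boto3) rather than building it by hand. The AMI ID, key pair and tag values
# are placeholders; credentials are assumed to be configured already.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="home-lab-key",            # placeholder key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Role", "Value": "web-server"}],
    }],
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}; the automation framework takes it from here")
```

Nothing here is unique to AWS; every cloud platform exposes an equivalent call, which is exactly what makes central, automated control practical in the first place.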

The idea that has emerged from the cloud architecture world is that systems architecture should be defined centrally “in code” rather than on the servers themselves.  This sounds confusing at first but makes a lot of sense when we look at it more deeply.  To support this model a new type of systems management tool has begun to emerge, one that has yet to take on a really standard name but is often called a systems automation tool, DevOps framework, IT automation tool or simply an “infrastructure as code” tool.  Common toolsets in this realm include Puppet, Chef, CFEngine and SaltStack.

The idea behind these automation toolsets is that a central service is used to manage and control all systems.  This central authority manages individual servers by way of code-based descriptions of how the system should look and behave.  In the Chef world, these are called “recipes” to be cute but the analogy works well.  Each system’s code might include information such as a list of which packages and package versions should be installed, what system configurations should be modified and files to be copied to the box.  In many cases decisions about these deployments or modifications are handled through potentially complex logic and hence the need for actual code rather than something more simplistic such as markup or templates.  Systems are then grouped by role and managed as groups.  The “web server” role might tell a set of systems to install Apache and PHP and configure memory to swap very little.  The “SQL Server” role might install MS SQL Server and special backup tools only used for that application and configure memory to be tuned as desired for a pool of SQL Server machines.  These are just examples.  Typically an organization would have a great many roles, some may be generic such as “web server” and others much more specific to support very specific applications.  Roles can generally be layered, so a system might be both a “web server” and a “java server” getting the combined needs of both met.
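The role concept can be illustrated without committing to any particular framework.  The following is only a conceptual sketch in Python – the roles, package names and settings are invented for illustration, and real tools like Puppet or Chef do vastly more – but it shows roles defined centrally as data, layered together and then applied to a node:

```python
# Conceptual sketch of "infrastructure as code": roles are defined centrally,
# can be layered, and a node converges itself to the combined definition.
# Roles, package names and settings are invented; real tools (Puppet, Chef,
# CFEngine, SaltStack) do far more than this.
import subprocess

ROLES = {
    "web server":  {"packages": ["httpd", "php"],     "sysctl": {"vm.swappiness": "10"}},
    "java server": {"packages": ["java-11-openjdk"],  "sysctl": {}},
    "sql server":  {"packages": ["mssql-server"],     "sysctl": {"vm.swappiness": "1"}},
}

def merge_roles(role_names):
    """Combine several roles into a single desired state for one node."""
    desired = {"packages": [], "sysctl": {}}
    for name in role_names:
        role = ROLES[name]
        desired["packages"].extend(p for p in role["packages"] if p not in desired["packages"])
        desired["sysctl"].update(role["sysctl"])
    return desired

def converge(desired, dry_run=True):
    """Bring the local node in line with the desired state (dry run by default)."""
    commands = [["dnf", "install", "-y", *desired["packages"]]]
    commands += [["sysctl", "-w", f"{key}={value}"] for key, value in desired["sysctl"].items()]
    for cmd in commands:
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # A node designated as both "web server" and "java server" gets the
    # combined needs of both roles, exactly as described above.
    converge(merge_roles(["web server", "java server"]))
```

A new node is simply handed its role names when it is created and runs the same converge step; that is all a system “building itself” really amounts to.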

These standard definitions mean that systems, once designated as belonging to one role or another, can “build themselves” automatically.  A new system might be created by an administrator requesting a system or a capacity monitoring system might decide that additional capacity is needed for a role and spawn new server instances automatically without any human intervention whatsoever.  At the time that the system is requested, by a human or automatically, the role is designated and the system will, by way of the automation framework, transform itself into a fully configured and up to date “node.”  No human system administration intervention required.  The process is fast, simple and, most importantly, completely repeatable.

Defining systems in code has some non-obvious consequences.  One is that backups of complete systems are no longer needed.  Why back up a system that you can recreate, with minimal effort, almost instantly?  Local data from database systems would need to be backed up, but only the database data, not the entire system.  This can greatly reduce strain on backup infrastructures and make restore processes faster and more reliable.

The amount of documentation needed for systems already defined in code is very minimal.  In Snowflake environments the system administrator needs to maintain documentation specific to every host and maintain that documentation manually.  This is very time consuming and error prone.  Systems defined by way of central code need little to no documentation, and the documentation can be handled at the group level, not the individual node level.

Testing systems that are defined in code is easy to do as well.  You can create a system via code, test it and know that when you move that definition into production the production system will be created exactly as it was in testing, repeatably.  In Snowflake environments it is very common to have testing practices that attempt to do this but do so through manual efforts that are prone to being sloppy and not exactly repeatable, and very often politics will dictate that it is faster to mimic repeatability than to actually strive for it.  Code defined systems bypass these problems, making testing far more valuable.
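Because the definition is code, it can also be tested like code.  This hedged sketch (a toy role map with invented package names, echoing the example above) asserts what a combined role will produce before anything is promoted to production:

```python
# Minimal sketch: because the system definition is code, it can be unit
# tested before deployment. The roles and expected packages are invented;
# in practice the automation frameworks ship their own testing harnesses.
ROLES = {
    "web server":  ["httpd", "php"],
    "java server": ["java-11-openjdk"],
}

def merge_packages(role_names):
    """Return the de-duplicated package list for a node holding these roles."""
    packages = []
    for name in role_names:
        packages.extend(p for p in ROLES[name] if p not in packages)
    return packages

def test_combined_role_gets_both_stacks():
    packages = merge_packages(["web server", "java server"])
    assert "httpd" in packages and "java-11-openjdk" in packages
    assert len(packages) == len(set(packages))    # no duplicates

if __name__ == "__main__":
    test_combined_role_gets_both_stacks()
    print("role definition tests passed; production will be built from the same code")
```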

Outside of needing to define a number of nodes to exist within each role, the system can reprovision an entire architecture, from scratch, automatically.  Rebuilding after a disaster or bringing up a secondary site can be very quickly and easily done.  Also moving between locally hosted systems and remotely hosted systems including those from companies like Amazon, Microsoft, IBM, Rackspace and others is extremely easy.

Of course, in the DevOps world there is great value in using cloud architectures to enable the most extreme level of automation, but using cloud architectures is not necessary to leverage these types of tools.  And, of course, a code defined architecture could be used partially while manual administration is implemented alongside it for a hybrid approach, but this is rarely recommended on individual systems.  When the two approaches are both mandated, it is generally far better to have two environments, one managed as Snowflakes and one managed as DevOps.  This makes a far better hybridization.  I have seen this work extremely well in an enterprise environment with scores of thousands of “Snowflake” servers, each very unique, but with a dedicated environment of ten thousand nodes that was managed in a DevOps manner because all of the nodes were to be identical and interchangeable using one of two possible configurations.  Hybridization was very effective.

The DevOps approach, however, comes with major caveats as well.  The skill sets necessary to manage a system in this way are far greater than those needed for traditional systems administration as, at a minimum, all traditional systems administration knowledge is still needed, plus solid programming knowledge, typically of modern languages like Python and Ruby, and knowledge of the specific frameworks in question as well.  This extended knowledge base requirement means that DevOps practitioners are not only rare but expensive too.  It also means that university education, already far short of preparing either systems administrators or developers for the professional world, is now further still from preparing graduates to work under a DevOps model.

System administrators working in each of these two camps have a tendency to see all systems as needing to fit into their own mold. New DevOps practitioners often believe that Snowflake systems are legacy and need to be updated.  Snowflake (traditional) admins tend to see the “infrastructure as code” movement as silly, filled with unnecessary overhead, overly complicated and very niche.

The reality is that both approaches have a tremendous amount of merit and both are going to remain extremely viable.  Both make sense for very different workloads, and large organizations, I suspect, will commonly see both in place via some form of hybridization.  In the SMB market, where there are typically only a tiny number of servers, no scaling leverage to justify cloud architectures and a high disparity between systems, I suspect that DevOps will remain almost indefinitely outside of the norm as the overhead and additional skills necessary to make it function are impractical or even impossible to acquire.  Larger organizations have to look at their workloads.  Many traditional workloads and much of traditional software are not well suited to the DevOps approach, especially cloud automation, and will either require hybridization or an impractically high level of coding on a per system basis, making the DevOps model impossible to justify.  But workloads built on web architectures or that can scale horizontally extremely well will benefit heavily from the DevOps model at scale.  This could apply to large enterprise companies or to smaller companies producing hosted applications for external consumption.

This difference in approach means that, in the United States for example, most of the country is made up of companies that will remain focused on the Snowflake management model, while some east coast companies could evaluate the DevOps model effectively and begin to move in that direction.  But on the west coast, where more modern architectures and a much larger focus on hosted applications and applications for external consumption are the driving economic factors, DevOps is already moving from newcomer to mature, established normalcy.  DevOps and Snowflake approaches will likely remain heavily segregated by region in this way, just as IT in general sees different skill sets migrate to different regions.  It would not be surprising to see DevOps begin to take hold in markets such as Austin where traditional IT has performed rather poorly.

Neither approach is better or worse, they are two different approaches servicing two very different ways of provisioning systems and two different fundamental needs of those systems.  With the rise of cloud architectures and the DevOps model, however, it is critically important that existing system administrators understand what the DevOps model means and when it applies so that they can correctly evaluate their own workloads and unique needs.  A large portion of the traditional Snowflake system administration world will be migrating, over time, to the DevOps model.  We are very far from reaching a steady state in the industry as to the balance of these two models.

Originally published on the StorageCraft Blog.

It’s a Field, Not a Road

Over the years I have become aware of a tendency in the Information Technology arena to find strong expectations of exactly how much someone should know about certain technologies based on their job title and length of time having worked in IT.  Of course, someone’s current job title and experience level should give you some, if only a little, insight into what they are doing on the job today, but it should rarely give you much insight into what they have done in the past or how they got to where they are today.

There are some abundantly common “paths” through IT, especially in the small and medium business markets, which help to stereotype the advancement of an IT professional over time.  The most common path goes something like this: high school, four year college degree, one or two basic certifications from CompTIA, entry level helpdesk job, better help desk job, deskside support job, basic Microsoft certification, system administrator or IT manager position.  This path is common enough that many people who have taken it simply assume that everyone else in the IT world has done so as well and this assumption creates a lot of problems in many different areas.

First of all, it must be stated, that there is no standard path in IT, not even remotely.  Too often IT professionals, applying their own experiences to their view of other people, see IT as a road when it truly is a field (pun only partially intended.)  IT has no defined entry point nor exit point from the industry.  IT is a massive field made up of many different sub-disciplines that exist in little, if any, linear progression of any sort from one to another.  There are far more lateral moves in IT than there are ladders to climb.

Besides the completely untrue assumption that any specific education and certification requirements exist in order to enter IT, the widely held belief that helpdesk positions are the only entry level IT positions that exist and that they are only stepping stone jobs is completely unfounded.  Many, likely most, IT professionals do not enter the field through helpdesk, call centers or even deskside support, and probably not through any type of Windows-centric support at all.  While end user focused, helpdesk remains only a small percentage of all IT careers and one through which only a portion of IT professionals will pass.  Windows-centric support is one of the most important foci within IT and clearly the most visible to end users and those outside of IT; this high level of visibility can be misleading, however.  It is equally true that helpdesk, call center, deskside support and the like are not stepping stone jobs exclusively but are career options in their own right.  It is unfortunate that such a high percentage of IT professionals view these positions as inappropriate career goals, because it is widely recognized that a lack of skilled and dedicated people in those specific positions is often what causes the most friction between end users and IT departments.

I have found on several occasions hiring managers who discounted hiring anyone who was truly interested in helpdesk or deskside support as a career and who enjoyed working with customers, and who only desired to hire someone who looked down on those roles as necessary evils to be passed over as quickly as possible en route to a more “rewarding” career destination.  I find this sad on many levels.  It implies that the hiring manager lacks empathy for other professionals and does not consider their individual desires or strengths.  It implies that the company in question is institutionalizing a system by which people are not hired to do something that they love nor something that they are good at, but only hires people willing to do a job role that they don’t want to do in the hopes of eventually doing one that they do want to do.  This rules out anyone actually qualified to do the desired job since those people will go straight into those positions.  It almost guarantees, as well, that end user support will be poor as no one is hired that is specifically good at or interested in that role.  The hiring manager clearly sees end user support as not being a priority, and the entire idea is that anyone going into that role will “succeed” by moving out of it as quickly as possible, thus leaving end users with a lack of continuity as well as a never ending cycle of churn.  Believing that IT is a road and not a field has tangible, negative consequences.

Seeing IT careers as a direct path from point A to point B creates an inappropriate set of expectations as well.  It is not uncommon at all for someone to say that anyone with five years of experience in IT must know how to <insert somewhat common Windows desktop task here> based on nothing but their length of time working in IT, completely ignoring the possibility that they have never worked on Windows or in a role that would do that task.  While Windows is common, many people working in IT have never performed those roles and there is no reason to expect that a specific task like that would be known automatically.  This goes beyond the already problematic attitude that many people have that the tasks that they personally did in a specific job role are the same tasks that everyone in that job role has done.  This is, of course, completely untrue.  A Windows system admin at one company and a Windows system admin at another company, or even just in another department, may do similar tasks or possibly completely different tasks.  Even a decade in those roles may produce almost completely unique experiences and skills.  There is just so much potential in IT for doing different things that we cannot make specific task assumptions.

This assumptive process carries over to certifications and education as well.  While many fields succumb to the cliche that anyone over a certain level must have a college education, it is far less common for that assumption to be true in IT.  Few fields find university training to be as optional as IT does, and remembering that alternative means of entering the field exist is critical.  Many of the best and brightest enter IT directly and not through an educational channel.  These candidates are often years ahead of their “educated” counterparts and often represent the most passionate, driven and capable pool of talent; and they are almost certainly the most capable of self motivation and self education which are both extremely important traits in IT.

Similarly, I was recently introduced, for the first time, to assumptions about certifications.  Certifications are specific to job roles; none apply broadly to all roles and none would be sensible for someone to hold if a higher certification was held or if they never passed through that specific point in that specific job role.  The example that came up was a hiring manager who actually believed that anyone with ten years of experience would be expected to have both an A+ and a Network+ certification.  Both are entry level certifications and not relevant to the vast majority of IT careers (the A+ especially has little broad applicability while the Network+ is more general case but still effectively entry level).  While it would not be surprising to find these held by a ten year IT veteran, it would make no sense whatsoever to use them as filtering agents by which to rule out candidates who lack them.  This is completely ridiculous.  Those certs are designed only to show rudimentary knowledge in specific IT career paths.  Anyone who has passed that point in their career without needing them would never go back and spend time and money earning entry level certifications while already at a career mid-point.  Once you have a PhD, you don’t go back and get another Associate’s degree just to show that you could have done it; the PhD is enough to demonstrate the ability to get an entry level degree.  And most people with a significant history in the field will often have passed the career point where those certs made sense years before the certs even existed (the Network+, for example, did not exist until I had been in IT for more than a decade already!)

I am particularly sensitive to this issue, both because I spent several years as a career counselor and helped to put IT professionals on a path to career growth and development, and because I myself did not take what is considered to be a conventional path into IT.  I was lucky enough to have interned in software development during my middle and high school years and was offered a position in UNIX support right out of high school.  I never passed through any Windows-centric roles, nor did I ever work on a helpdesk or do deskside support outside of a small number of high end UNIX research labs.  My career took me in many different directions but almost none followed the paths that so many hiring managers expect.  Attempting to predict the path that one’s career will take in the future is impossible.  Equally, attempting to determine what path must have been taken to have reached a current location is also impossible.  There are simply too many ways to get from point A to point B.

Embracing uniqueness in IT is important.  We all bring different strengths and weaknesses, different ideas and priorities, different goals and different things that we enjoy or despise doing.  The job that one person sees as a necessary evil another will love doing, and that passion for the role will show.  The passionate, career-focused helpdesk professional will bring an entirely different joie de vivre to the job than will someone who feels that they are trapped doing an undesirable job until another opportunity comes along.  This doesn’t mean that the latter will not work hard and try their best, but there is little that can be done to compete with someone passionate about a specific role.

It is also very easy, when we look at IT as a singular path, to forget that individual roles, such as helpdesk, actually have progressions within the role itself.  Often many steps exist within specific roles.  In the case of a helpdesk it is common to refer to these as L0 through L3.  Plus there are helpdesk team leads and helpdesk manager positions that are common.  An entire career can be had just within the helpdesk focus sub-discipline within IT.  There is nothing wrong with entering IT directly into the role type that interests you.  There is also nothing wrong with achieving a place in your career where you are happy to stay. Everyone has an ideal position, a career position where they both excel at what they do and are happy doing indefinitely.  In most fields, people actually strive to achieve this type of position somewhat early in their careers.  In IT, it is strangely uncommon.

There is a large amount of social pressure within IT to have “ambition” pushing you towards more and more challenging positions within the field.  Partially this is because IT is such an enormous field that is so dynamic that most people really do enter wherever opportunity presents itself and then attempt to maneuver themselves into positions that they find interesting over a period of many years.  This creates a culture of continuous change and advancement expectations, to some degree. This is not entirely bad but it often marginalizes or even penalizes people who manage to find their desired positions, especially if this happens early in their careers and even more specifically if it happens in a role which many people see as a “stepping stone” role such as with helpdesk or deskside support.  This is not good for individuals, for businesses or for the field in general.  It pushes people into roles where they are not happy and not well suited in order to satisfy social pressures rather than career aspirations or business needs.

Ambition is not necessarily a good thing.  It certainly is not a bad thing.  But too often hiring managers look for ambition when it is not in anyone’s interest.  Hiring someone young or inexperienced in the hopes that they grow over time and move into more and more advanced roles is an admirable goal and can work out great.  But avoiding hiring someone perfectly suited for a role because they will want to stay where they are well suited and where they excel makes no sense at all.  In an ideal world, everyone would be hired directly into the perfect position for them and no one would ever need to change jobs.  This is best for both the employees and the employer. It is rarely possible, but certainly should not be avoided when the opportunity presents itself.

Creating stereotypes and using them to judge IT professionals has negative consequences for everyone.  It increases stress, reduces career satisfaction, decreases performance and lowers the quality of IT service delivery while making it more expensive to provide.  It is imperative that we accept IT as a field, not as a road, and that we also accept that IT professionals are individuals with different goals, different career motivations and different ambitions.  Variety and diversity in IT are far more important than they are in most fields because IT is so large and requires so many different perspectives to perform optimally.  Unlike a road that travels a single, predictable path, a field allows you to wander in many directions and arrive at many different destinations.