
You Are Not Special

It is not my intention for this to sound harsh, but I think that it has to be said: “You are not special.”  And by “you” here, of course, I mean your business: the organization that you, as an IT practitioner, support.  For decades we have heard complaints about how modern education systems attempt to make every student feel unique and special.  When awards are given out, schools find a way, especially with elementary students, to make sure that every student gets an award of some sort.  Awards for best attendance, posture, being quiet in class or whatever reward completely irrelevant things so that every student feels not only like part of the group, but like a special, unique individual who has accomplished something better than anyone else.

This attitude, this belief that everyone is special and that all of those statistics, general rules and best practices apply to “someone else,” has become pervasive in IT as well, manifesting itself in the belief that each business, each company, is so special and unique that IT industry knowledge does not apply to its situation.  IT practitioners with whom I have spoken almost always agree that best practices and accumulated industry knowledge are good and apply in nearly every case – except their own.  All of those rules of thumb, all of those guidelines are great for someone else, but not for them.  The problem is that nearly everyone feels this way, and it cannot be true for nearly everyone.

I have found this problem to be most pronounced in, and in fact almost exclusive to, the small business market where, in theory, the likelihood of a company being highly unique is actually much lower than in the large enterprise space of the Fortune 100, where uniqueness is somewhat expected.  But instead of small businesses assuming uniformity and enormous businesses expecting uniqueness, the opposite appears to happen.  Large businesses understand that even at massive scale IT problems are mostly standard patterns and, by and large, should be solved using tried and true approaches.  Small businesses, seemingly driven by an emotional need to be “special,” claim a need to avoid industry patterns, often eschewing valuable knowledge to a ludicrous degree, and often while conforming to the most textbook example of the use case for the pattern.  It almost seems, from my experience, that the more “textbook” a small business is, the more likely its IT department will avoid solutions designed exactly for it and attempt to reinvent the wheel at any cost.

Common solutions and practices apply to the majority of businesses and workloads, easily in excess of 99.9% of them.  Even in larger companies where there is opportunity for uniqueness we expect to see only rare workloads that fall into a unique category.  Even in the world’s largest businesses the average workload is, well, average.  Large enterprises with tens of thousands of servers and workloads often find themselves with a handful of very unique situations for which there is no industry standard to rely on.  But even so, they have many thousands of very standard workloads that are not special in any way.  The smaller the business, the less opportunity there is for a unique workload overall, and the lower the chance of one occurring on a workload by workload basis, simply because there are so many fewer workloads.

One of the reasons that small businesses, even ones very unique as small businesses go, are rarely actually unique is that when a small business has an extreme need for, say, performance, capacity, scale or security, it almost never means that it needs that thing in excess of existing standards for larger businesses.  The standards of how to deal with large data sets or extreme security, for example, are already well established in the industry at large, and small businesses need only leverage the knowledge and practices developed for larger players.

What is surprising is when a small business with relatively trivial revenue believes that its data requires a level of secrecy and security in excess of the security standards of the world’s top financial institutions, military organizations, governments, hospitals or nuclear power facilities.  What makes the situation more absurd is that in pursuing these extremes of security, small businesses almost always end up with very low security standards.  They often cite needs for “extreme security” to justify insecure or, as we often say, “tin foil hat” procedures.

Security is one area where this behavior is very pronounced.  Often it is small business owners or small business IT “managers” who create this distrust of industry standards, not IT practitioners themselves, although the feeling that a business is unique often trickles down and is seen there as well.

Similar to security, unlimited uptime and highly available systems, rarely needed even for high end enterprise workloads, seem an almost ubiquitous goal in small businesses.  Small businesses often spend orders of magnitude more money, relative to revenue, on procuring high availability systems than their larger counterparts.  Often this is done with the mistaken belief that large businesses always use high availability and that small businesses must do so to compete, that without it they are not a viable business, or that any downtime equates to business collapse.  None of these are true.  Enterprises have a far lower cost of reliability relative to revenue and still do considerable cost analysis to see what reliability expenditures are justified by risk.  Small businesses rarely do that best practice analysis and jump, almost universally, to the very unlikely belief that their workloads are dramatically more valuable than even the largest enterprises’ and that they have no means of mitigating downtime.  Eschewing business best practices (doing careful cost and risk analysis before investing in risk mitigation), financial best practices (erring on the side of up front cost savings) and technology best practices (high availability only when needed and justified) leaves many businesses operating from the belief that they are “special” and that none of the normal rules apply to them.
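
To make that missing analysis concrete, here is a minimal sketch of the cost and risk comparison that larger businesses routinely perform before buying high availability.  Every figure in it (downtime cost per hour, expected outage hours, the price of the HA option) is a hypothetical placeholder, not data from any real business.

    # Minimal downtime risk analysis. All figures are hypothetical placeholders.
    downtime_cost_per_hour = 500.0  # revenue and productivity lost per outage hour
    expected_outage_hours = 8.0     # expected unplanned downtime per year, standalone
    ha_premium_per_year = 6000.0    # yearly cost of the high availability option
    ha_outage_reduction = 0.9       # fraction of downtime the HA option eliminates

    expected_annual_loss = downtime_cost_per_hour * expected_outage_hours
    loss_avoided = expected_annual_loss * ha_outage_reduction

    print(f"Expected annual downtime loss: ${expected_annual_loss:,.0f}")
    print(f"Loss avoided by HA:            ${loss_avoided:,.0f}")
    print(f"Annual cost of HA:             ${ha_premium_per_year:,.0f}")
    print("HA justified" if loss_avoided > ha_premium_per_year else "HA not justified")

With these particular placeholders the high availability option costs more than the risk it removes, which is exactly the outcome that this kind of analysis so often reveals for small businesses.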

By approaching all technology needs from the assumption of being special, businesses that do this are unable to leverage the vast, accumulated knowledge of the industry.  This means that they are continuously reinventing the wheel and attempting to forge new paths where well trodden, safe paths already exist.  Not only can this result in an extreme degree of overspending in some cases and in dangerous risk in others, but it effectively guarantees that the cost of any project is unnecessarily high.  Small businesses, especially, have the extreme advantage of being able to leverage the research and experience of larger businesses, allowing them to be more agile and lean.  This is a key component of making small businesses competitive against the advantages of scale inherent to large businesses.  When small businesses ignore this advantage they are left with neither the scale of big business nor the advantages of being small.

There is no simple solution here – small business IT practitioners and small business managers need to step down from their pedestals, take a long, hard look at their companies and ask if they really are unique and special or if they are a normal business with normal needs.  I guarantee you are not the first to face the problems that you have.  If there isn’t a standard solution approach available already, then perhaps the approach to the problem itself is wrong.  Take a step back and evaluate with an eye to understanding that many businesses share common problems and can tackle them effectively using standard patterns, approaches and, often, best practices.  If your immediate reaction to best practices, patterns and industry knowledge is “yes, but that doesn’t apply here,” you need to stop and reevaluate – because yes, it almost certainly does apply to you.  You have most likely misunderstood either the uniqueness of your business or how the guidance is applied, resulting in the feeling that those guidelines are not applicable.  Even those rare businesses with very unique workloads only have them for a small number of their workloads and not the majority; the most extremely unique businesses and organizations still have many common workloads.

Patterns and best practices are our friends and allies, our trusted partners in IT.  IT, and business in general, is challenging and complex.  To excel as IT practitioners we can seek to stand on the shoulders of giants, walk the paths that have been mapped and trodden for us and leverage the work of others to make our solutions as stable, predictable and supportable as possible.  This allows us to provide maximum value to the businesses that we support.

Explaining the Lack of Large Scale Studies in IT

IT practitioners ask for these every day and yet none exist – large scale risk and performance studies for IT hardware and software.  This covers a wide array of possibilities, but common examples are failure rates between different server models, hard drives, operating systems, RAID array types, desktops, laptops, you name it.  And yet, despite the high demand for such data, none is available.  How can this be?

Not all cases are the same, of course, but by and large there are three significant factors that keep this type of data from entering the field.  These are the high cost of conducting a study, the long time scale necessary for a study and a lack of incentive to produce and/or share this data with other companies.

Cost is by far the largest factor.  If the cost of large scale studies could be overcome, solutions could be found for all of the other factors.  But, sadly, the nature of a large scale study is that it will be costly.  As an example we can look at server reliability rates.

In order to determine failure rates on a server we need a large number of servers from which to collect this data.  This may seem like an extreme example, but server failure rates are one of the most commonly requested large scale study figures, so the example is an important one.  We might need only a few hundred servers for a very small study, but to get statistically significant data we would likely need thousands of servers.  If we assume that a single server costs five thousand dollars, which would be a relatively entry level server, we are looking at easily twenty five million dollars of equipment!  And that is just enough to do a somewhat small scale test (just five thousand servers) of a rather low cost device.  If we were to talk about enterprise servers we could easily jump to thirty or even fifty thousand dollars per server, taking the cost to a quarter of a billion dollars.

Now that cost, of course, is for testing a single configuration of a single model server.  Presumably for a study to be meaningful we would need many different models of servers.  Perhaps several from each vendor to compare different lines and features.  Perhaps many different vendors.  It is easy to see how quickly the cost of a study becomes impossibly large.
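
To see how quickly this compounds, here is a minimal back of the envelope sketch.  The unit prices and sample size are the hypothetical figures from above; the model count is an illustrative assumption, not a quote from any vendor.

    # Back of the envelope study equipment cost. All figures are hypothetical.
    servers_per_model = 5_000        # sample size per configuration
    entry_server_cost = 5_000        # dollars per entry level server
    enterprise_server_cost = 50_000  # dollars per high end enterprise server
    models_tested = 10               # several models across a few vendors

    entry_total = servers_per_model * entry_server_cost
    enterprise_total = servers_per_model * enterprise_server_cost

    print(f"One entry level model:  ${entry_total:,}")        # $25,000,000
    print(f"One enterprise model:   ${enterprise_total:,}")   # $250,000,000
    print(f"Ten entry level models: ${models_tested * entry_total:,}")

Even the cheapest meaningful variation pushes the equipment bill alone toward a quarter of a billion dollars, before a single watt of power has been purchased.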

This is just the beginning of the cost, however.  A good study is going to require carefully controlled environments on par with the best datacenters to isolate environmental issues as much as possible.  This means highly reliable power, cooling, airflow, humidity control, vibration isolation and dust control.  Facilities this good are so expensive that many companies do not pay for them even for valuable production workloads.  In a large study this cost could easily exceed the cost of the equipment itself over the course of the study.

Then, of course, we must address the needs for special sensors and testing.  What exactly constitutes a failure?  Even in production systems there is often dispute over this.  Is a hard drive failing in an array a failure, even if the array does not fail?  Is a predictive failure a failure?  If dealing with drive failure in a study, how do you factor in human components, such as drive replacement, which may not be done in a uniform way?  There are ways to handle this, but they add complication and skew the studies away from real world data toward contrived study data.  Establishing study guidelines that are applicable and useful to end users is much harder than it seems.

And then there is the biggest cost: manual labor.  Maintaining an environment for a large study takes human capital that may equal the cost of the study itself.  It takes a large number of people to maintain a study environment, run the study, monitor it and collect the data.  All in all, the costs are generally, simply, prohibitive.

Of course we could greatly scale back the test, run only a handful of servers and only two or three models, but the value of the test rapidly drops and risks ending up with results that no one can use while still having spent a large sum of money.

The second insurmountable problem is time.  Most things need to be tested for failure rates over time, and as equipment in IT is generally designed to work reliably for decades, collecting data on failure rates requires many years.  Mean Time to Failure numbers are only so valuable; Mean Time Between Failures, along with failure types, modes and statistics on those failures, is very important for a study to be useful.  What this means is that for a study to be truly useful it must run for a very long time, creating greater and greater cost.

But that is not the biggest problem.  The far larger issue is that even if a study ran long enough to generate useful failure numbers, and even if those numbers were coming out “live” as they happened, it would already be too late.  The equipment in question would already be aging and nearing time for replacement in the production marketplace by the time the study was producing truly useful early results.  Often production equipment is purchased for only a three to five year total lifespan.  Getting results even one year into this span would have little value.  And new products may replace those in the study even more rapidly than the products age naturally, making the study valuable only in a historic context, without any use in production decision making – the results would be too old to be useful by the time that they were available.

The final major factor is a lack of incentive to provide existing data to those who need it.  Few sources of data exist, and the handful that do are nearly all incomplete and exist so that large vendors can measure their own equipment quality, failure rates and the like.  These measurements are rarely taken in controlled environments and often involve data collected from the field.  In many cases this data may even be private to customers and not legally able to be shared regardless.

But vendors who collect data do not collect it in an even, monitored way, so sharing that data could be very detrimental to them because there is no assurance that equal data from their competitors would exist.  Uncontrolled statistics like that would offer no true benefit to the market, nor to the vendors who hold them, so vendors are heavily incentivized to keep such data under tight wraps.

The rare exceptions are some hardware studies from vendors such as Google and Backblaze, who have large numbers of consumer class hard drives in relatively controlled environments and collect failure rates for their own purposes.  They face little or no risk from competitors leveraging that data, but do gain public relations value from releasing it, and so will occasionally publish a hardware reliability study on a limited scale.  These studies are hungrily devoured by the industry even though they generally contain relatively little value: their data is old and gathered under unknown conditions and thresholds, often not statistically meaningful for product comparison and, at best, reflecting general industry wide trends that are somewhat useful for predicting future reliability paths.

Most other companies large enough to have internal reliability statistics have them only for a narrow range of equipment and consider that information to be proprietary, a potential risk if divulged (it would give away important details of architectural implementations) and a competitive advantage.  So, for these reasons, it is not shared.

I have actually been fortunate enough to have been involved in, and to have run, a large scale storage reliability test, conducted somewhat informally but very valuably, on over ten thousand enterprise servers over eight years, resulting in eighty thousand server years of study – a rare opportunity.  While the study was extremely valuable, what it primarily showed is that on a set so large we were still unable to observe a single failure!  The lack of failures was, itself, very valuable.  But we were unable to produce any standard statistic like Mean Time to Failure.  To produce the kind of data that people expect, we know that we would have needed hundreds of thousands of server years, at a minimum, to get any kind of statistical significance, and we cannot reliably state that even that would have been enough.  Perhaps millions of server years would have been necessary.  There is no way to truly know.
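
To put numbers on why zero failures defeats the usual statistics, here is a minimal sketch using the textbook “rule of three”: with zero failures observed over T unit-years of exposure, the one-sided 95% upper confidence bound on a Poisson failure rate is -ln(0.05)/T, roughly 3/T.  The eighty thousand server years come from the study above; everything else is standard statistics, not data from that study.

    import math

    # Zero failures observed over this much exposure (from the study above).
    server_years = 80_000

    # One-sided 95% upper confidence bound on the failure rate: -ln(0.05) / T.
    rate_upper_bound = -math.log(0.05) / server_years  # failures per server-year
    mtbf_lower_bound = 1 / rate_upper_bound            # server-years per failure

    print(f"95% upper bound on failure rate: {rate_upper_bound:.2e} per server-year")
    print(f"95% lower bound on MTBF:         {mtbf_lower_bound:,.0f} server-years")

Zero failures can only bound the rate from above (here, an MTBF of at least roughly 27,000 server years); they provide no estimate of the rate itself, which is exactly why no Mean Time to Failure could be produced.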

Where this leaves us is that large scale studies in IT simply do not exist and likely never will.  Those that do appear will be isolated and almost certainly crippled by the necessities of reality.  There is no means of monetizing studies on the scale necessary to be useful, mostly because failure rates of enterprise gear are so low while the equipment is so expensive, so third party firms can never cover the cost of providing this research.  As an industry we must accept that this type of data does not exist and actively pursue alternatives to having access to such data.  It is surprising that so many people in the field expect this type of data to be available when it never has been historically.

Our only real options, considering this vacuum, are to collect what anecdotal evidence exists (a very dangerous thing to do, which requires careful consideration of context) and to apply logic to assess reliability approaches and techniques.  This is a broad situation where observation necessarily fails us and only logic and intuition can be used to fill the resulting gap in knowledge.

What Do I Do Now? Planning for Design Changes

Quite often I am faced with talking to people about their system designs, plans and architectures.  And many times that discussion happens too late, when designs are already implemented or partially implemented.  This can be very frustrating if the design in progress has been deemed not to be ideal for the situation.

I understand the frustration that comes from a situation like this, but it is something that we in IT must face on a very regular basis, and managing this reaction constructively is a key IT skill.  We must become masters of this situation both technically and emotionally.  It is a natural situation that every IT professional will experience regularly; it should not be discouraging or crippling, though it is very understandable that it can feel that way.

One key reason that we experience this so often is that IT is a massive field with a great number of variables to be considered in every situation.  It is also a highly creative field where there can be numerous viable approaches to any given problem.  That there is even a single “best” option is rarely true.  Normally there are many competitive options.  Sometimes these are very closely related; sometimes they are drastically different, making them very difficult to compare meaningfully.

Another key reason is that factors change.  This could be that new techniques or information come to light, new products are released, products are updated, prices change or business needs change near to or even during the decision making and design processes.  This rate of change is not something that we, as IT professionals, can hope to ever control.  It is something that we must accept and deal with as best as we can.

Another thing that I often see missed is that a solution that was ideal when chosen may not be ideal if the same decision were being made today.  This does not, in any way, constitute a deficiency in the original design, yet I have seen many people react as if it did.  The most common scenario where I see this behavior is the aversion to the use of RAID 5 in modern storage design, with RAID 6 and RAID 10 being the popular alternatives for good reason.  But this RAID 5 aversion, common since about 2009, did not always exist; from the middle of the 1990s until nearly the end of the 2000s RAID 5 was not only viable, it was very commonly the best solution for the given business and technical needs (the increase in aversion to it was mostly gradual, not sudden).  However, many people who understandably see RAID 5 as a poor option today apply this new aversion to systems designed and implemented long ago, sometimes close to two decades ago.  This makes no sense and is purely an emotional reaction.  RAID 5 being the best choice for a scenario in 2002 in no way implies that it will still be the best choice in 2015.  But likewise, RAID 5 being a poor choice for a scenario in 2015 in no way belittles or negates the fact that it was very often a great choice several years ago.
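
For context on why the industry’s view shifted, here is a minimal sketch of the usual arithmetic: the chance of hitting an unrecoverable read error (URE) while rebuilding a degraded RAID 5 array grows with the amount of data that must be read, so the same RAID level became riskier as drives grew.  The drive count, capacity and the common one-error-per-10^14-bits consumer URE specification below are illustrative assumptions, not figures from this article.

    # Probability that a RAID 5 rebuild encounters an unrecoverable read
    # error (URE). All figures are illustrative assumptions.
    drives = 6           # drives in the array
    capacity_tb = 4.0    # capacity per drive, in decimal terabytes
    ure_rate = 1e-14     # errors per bit read (a common consumer drive spec)

    bits_per_tb = 8e12   # 8 * 10**12 bits per decimal terabyte
    bits_read = (drives - 1) * capacity_tb * bits_per_tb  # survivors read fully

    p_clean_rebuild = (1 - ure_rate) ** bits_read
    print(f"Chance the rebuild hits a URE: {1 - p_clean_rebuild:.1%}")  # ~79.8%

Run the same numbers with mid-1990s drive sizes, a few gigabytes per drive, and the risk becomes a fraction of a percent, which is one concrete way the same choice can be excellent in 2002 and poor in 2015.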

I have been asked many times what to do once less than ideal design decisions have been made.  “What do I do now?”

Learning what to do when perfection is no longer an option (as if it ever really was; all IT is about compromises) is a very important skill.  The first things that we must tackle are the emotional problems, as these will undermine everything else.  We must do our best to step back, accept the situation and act rationally.  The last thing that we want to do is take a non-ideal situation and make things worse by attempting to reverse justify bad decisions or by panicking.

Accepting that no design is perfect, that there is no way to always get things completely right and that dealing with this is just part of working in IT is the first step.  Step back, breathe deep.  It isn’t that bad.  This is not a unique situation.  Every IT pro doing design goes through this all of the time.  You should try your best to make the best decisions possible, but you must also accept that this can rarely be done – no one has access to enough resources to really do that.  We work with what we have.  So here we are.  What’s next?

Next is to assess the situation.  Where are we now?  In many cases the implementation is done and there is nothing more to do.  The situation is not ideal, but is it bad?  Very often the biggest flaw that I see people find with an already implemented design is that it was too costly – typically the “better” solutions are not better because they are faster or more reliable, but because they would have been cheaper, easier or faster to implement.  That is an unfortunate situation, but hardly a crippling one.  Whatever time or money was spent must have been an acceptable amount at the time and must have been approved.  The best that we can do, right now, is learn from the decision process and attempt to avoid overspending in the future.  It does not mean that the existing solution will not work, or even that it will not work amazingly well.  It is simply that it may not have been a perfect choice given the business needs, primarily financial, involved.

There are situations where a design that has been implemented does not adequately meet the stated business requirements.  This is, thankfully, less common in my experience, as it is a much more difficult situation.  In this case we need to make some modifications in order to fulfill our business needs.  This may prove to be expensive or complex.  But things may not be as bad as they seem.  Initial reactions are often misleading and the situation can frequently be salvaged.

The first step, once we are in a position where we have implemented a solution that fails to meet business needs, is to reassess the business needs.  This is not to imply that we should fudge the needs to massage them into whatever our system is able to fulfill, not at all.  But it is a good time to go back and see if the originally stated needs are truly valid, whether they were simply not vetted well enough or, even more likely, whether the business needs changed while the implementation took place.  It may be that the implemented solution does, in fact, meet actual business needs, either because they were originally misstated or because they have changed over time.  Or it might be that business needs have changed so dramatically that even perfect planning would have fallen short of the existing needs, and the fact that the implemented solution does not perform as expected is of minor consequence.  I have been very surprised at just how often this verification of business needs has turned a solution believed to be inadequate into an “overkill” solution that actually cost more than necessary, simply because no one pushed back on overstatements of business needs or questioned the financial value of certain technology investments.

The second step is to create a new technology baseline.  This is a very important step to keep IT from falling into the trap of the sunk cost fallacy.  It is extremely common for anyone, this is not unique to IT in any way, to look at the time and money spent on a project and assume that continuing down the original path, no matter how foolish it is, is the way to go because so many resources have already been expended on that path.  But this makes no sense; how you got to your current state is irrelevant.  What is relevant is assessing the current needs of the department and company and taking stock of the currently available solutions, technologies and resources.  Given the current state, the best course forward can be determined.  Any consideration given to the effort expended to get to the current state is only misleading.
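
As a minimal sketch of the decision rule this implies, consider the comparison below.  All dollar figures are hypothetical; the point is that the money already spent appears nowhere in the calculation.

    # Forward-looking comparison of two paths. All figures are hypothetical.
    sunk_cost = 200_000  # already spent; deliberately unused below

    continue_cost = 150_000   # remaining cost to finish the original design
    continue_value = 180_000  # business value if it is finished

    replace_cost = 120_000    # cost of switching to a standard alternative
    replace_value = 200_000   # business value of the alternative

    options = {
        "continue original path": continue_value - continue_cost,
        "switch to alternative": replace_value - replace_cost,
    }
    best = max(options, key=options.get)
    print(f"Best forward path: {best} (net ${options[best]:,})")

Note that sunk_cost never enters the comparison; whether two hundred thousand dollars or two dollars were spent getting here, the best move forward is the same.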

A good example of the sunk cost fallacy is in the game of chess.  With each move it is important to assess all available moves, risks and strategies anew, because the moves used to reach the current state have no bearing on what moves make sense going forward.  If the world’s greatest chess player or an amazing computer chess algorithm were brought in mid-game, they would not require any knowledge of how the current state came to be – they would simply assess the current state and create a strategy based upon it.

This is how we should behave in IT.  Our current state is our current state.  It does not matter for strategic planning what unfolded to get us into that state.  We only care about those decisions and costs when doing a post mortem process, in order to determine where decision making may have failed and to learn from it.  Learning about ourselves and our processes is very important.  But that is a very different task from doing strategic planning for the current initiative.

The unfortunate thing here is that we must begin our planning processes again, though this time, we assume, with more to work with.  But this cannot be avoided.  In the worst cases, budgets are no longer available and there are no resources to fix the flawed design and achieve the necessary business goals.  Compromises are sometimes necessary.  Making do with what we have is sometimes the best that we can do.  But in the vast majority of cases, it would seem, some combination of additional budget or creative reuse of existing products can be adequate to remedy the situation.

Once we have reached a state in which we have addressed our shortfalls, whether by simply accepting that we have overspent or under-delivered, or by adjusting to meet needs, we have an opportunity to go back and investigate our decision making processes.  It is by doing this that we hope to grow as individuals and, if at all possible, on an organizational level, to learn from our mistakes, or to determine whether there even were mistakes.  Every company and every individual makes mistakes.  What separates us is the ability to learn from them and avoid those same mistakes in the future.  Growth comes primarily from experiencing pain in this way and, while often unpleasant to face, it is here that we have the best opportunity to create real, lasting value.  Do not push off or skip this opportunity for review, whether it be a harsh personal review that you do yourself, a formal organizational review run by people trained to do so, or something in between.  The sooner that the decision processes are evaluated, the fresher the memory will be and the sooner the course correction can take effect.

The final step is to begin the decision process for designing a replacement for the current implementation as soon as possible, once the review of the decision process is complete.  This does not necessarily mean that we should intend to spend money or change our designs in the near future.  Not at all.  But by being extremely proactive in design planning we can attempt to avoid the problems of the past.  We give ourselves additional time for planning, more time for requirements gathering and documentation, and better insight into changes in requirements over time, because we revisit those requirements regularly to see whether they remain stable.  We gain more opportunity to get management and peer buy-in and investment in the decision, and a better understanding of the problem domain, so that we are better equipped to alter the intended design, or to know when to scrap it and start over, before implementing it the next time.  It could also give us a better chance of codifying organizational knowledge that could be passed on to a successor, should you not be in the position of decision making or implementation when the next cycle comes around.

With good, rational processes and a good understanding of the steps that need to be taken in a case of less than ideal systems design or implementation, we can, in most cases, not only recover from missteps in the short term but also insulate the organization from the same mistakes in the future.

Better IT Hiring: Contract To Hire

Information Technology workers are bombarded with “Contract to Hire” positions, often daily.  There are reasons why this method of hiring and working is fundamentally wrong, and while workers immediately identify these positions as bad choices, few really take the time to move beyond the emotional reaction to understand why this working method is so flawed and, more importantly, few companies take the time to explore why tactics such as this undermine their staffing goals.

To begin we must understand that there are two basic types of technology workers: consultants (also called contractors) and permanent employees (commonly known as FTEs).  Nearly all IT workers desire to be in one of these two categories.  Neither is better or worse; they are simply two different approaches to employment engagements and represent differences in personality, career goals, life situations and so forth.  Workers do not always get to work the way that they desire, but essentially all IT workers seek to be in one camp or the other.

Understanding the desires and motivations of IT workers seeking to be full time employees is generally very easy.  Employees, in theory, have good salaries, stable work situations, comfort, continuity, benefits, vacations, protection and so forth.  At least this is how it seems; whether these aspects are real or just illusory can be debated elsewhere.  What is important is that most people understand why people want to be employees, but the opposite is rarely true.  Many people lack empathy for those who do not seek to be employees.

Understanding professional or intentional consultants can be difficult.  Consultants live a less settled life but generally earn higher salaries and advance in their careers faster, see more diverse environments, get a better chance to learn and grow, are pushed harder and have more flexibility.  There are many factors which can make consulting or contracting intentionally a sensible decision.  Intentional contracting is very often favored by younger professionals looking to grow quickly and gain experience that they otherwise could not obtain.

What makes this matter more confusing is that the majority of workers in IT wish to work as full time employees, but a great many end up settling for contract positions to hold them over until a desired full time position can be acquired.  This situation arises so commonly that a great many people, both inside and outside of the industry and on both sides of the interview table, may mistakenly believe that all cases are this way and that consulting is a lower or lesser form of employment.  This is completely wrong.  In many cases consulting is highly desired and contractors can benefit greatly from their choice of engagement methodology.  I, myself, spent most of my early career, around fifteen years, seeking only to work as a contractor and had little desire to land a permanent post.  I wanted rapid advancement, opportunities to learn, chances to travel and variety.

It is not uncommon at all for the desired mode of employment to change over time.  It is most common for contractors to seek to move to full employment at some point in their careers; contracting is often exhausting and harder to sustain over a long career.  But certainly full time employees sometimes choose to move into a more mobile and adventurous contracting mode as well.  And many choose to work only one style or the other for the entirety of their careers.

Understanding these two models is key.  What does not fit into this model is the concept of a Contract to Hire.  This hiring methodology starts by hiring someone willing to work a contract position and then, sometimes after a set period of time and sometimes after an indefinite one, promises a second determination as to whether said team member should be “converted” into an employee or let go.  This does not match up well against the two types of workers.  Neither type wants to start as one thing and then become another.  Possibly somewhere there is an IT worker who would like to work as a contractor for four months and then become an employee, getting benefits only after a four month delay, but I am not aware of such a person, and it is reasonable to assume that if such a person exists he is unique, has already done this process and would not want to do it again.

This leaves us with two resulting models to match to this situation.  The first, and more common, is that of an IT worker seeking permanent employment who is offered a Contract to Hire position.  For this worker the situation is far from ideal: the first four months represent a jarring, complex and scary situation that lacks the benefits and stability that are needed, and the later decision point as to whether the conversion will be offered is frightening.  The worker must behave and plan as if there will be no conversion and must actively seek other opportunities during the contract period, opportunities that are pure employment from the beginning.  If there were any certainty of the position becoming a full employment one, there would be no contract period at all.  The risk to the worker that no conversion will be offered is exceptionally high; in fact, conversions are almost unheard of in the industry.

It must be noted that, for most IT professionals, the idea that a Contract to Hire will truly offer a conversion at the end of the contract duration is so unlikely that it is generally assumed that the enticement of the conversion process is purely a fake one and that there is no possibility of it happening at all.  And for reasons we will discover here it is obvious why companies would not honestly expect to attempt this process.  The term Contract to Hire spells almost certain unemployment for IT workers going down that path.  The “to Hire” portion is almost universally nothing more than a marketing ploy and a very dishonest one.

The other model that we must consider is the model of the contract-desiring employee accepting a Contract to Hire position.  In this model we have the better outcome for both parties.  The worker is happy with the contract arrangement and the company is able to employ someone who is happy to be there and not seeking something that they likely will be unable to get.  In cases where the company was less than forthcoming about the fact that the “to Hire” conversion would never be considered this might actually even work out well, but is far less likely to do so long term and in repeating engagements than if both parties were up front and honest about their intentions on a regular basis.  Even for professional contractors seeing the “to Hire” addendum is a red flag that something is amiss.

For a company, however, obtaining an intentional contractor via a Contract to Hire posting is risky.  For one, contractors are highly mobile and are skilled and practiced at finding other positions.  They are generally well prepared to leave a position the moment that the original contract is done.

One reason that the term Contract to Hire is used is that companies can easily “string along” someone desiring a conversion to a full time position by dangling the conversion like a carrot and prolonging the contract indefinitely.  Intentional contractors will see no carrot in this arrangement and will normally be prepared to leave immediately upon completion of their contract term; they can leave without any notice, as they simply need not renew their contract, leaving the company in a lurch of its own making.

Even in scenarios where an intentional contractor is offered a conversion at the end of a contract period there is the very real possibility that they will simply turn down the conversion.  Just as the company maintains the right to not offer the conversion, the IT worker maintains an equal right to not agree to offered terms.  The conversion process is completely optional by both parties.  This, too, can leave the company in a tight position if they were banking on the assumption that all IT workers were highly desirous of permanent employment positions.

This may be the better situation, however.  Potentially even worse is an intentional contractor accepting a permanent employment position when they were not actually desiring an arrangement of that type.  They are likely to find the position to be something that they do not enjoy, or else they would have been seeking such an arrangement already, and will be easily tempted to leave for greener pastures very soon – defeating the purpose of having hired them in the first place.

The idea behind the Contract to Hire movement is the mistaken belief that companies hold all of the cards and that IT workers are all desperate for work and thankful to find any job that they can.  This, combined with the incorrect assumption that nearly all IT workers truly want stable, traditional employment as a full time employee, makes for a very bad hiring situation.

Based on this, a great many companies attempt to leverage the Contract to Hire term in order to lure more and better IT workers to apply based on false promises or poor matching of employment values.  It is seen as a means of lowering cost, testing out potential employees, hedging bets against future head count needs, etc.

In a market where there is a massive oversupply of IT workers a tactic such as this might actually pay off.  In the real world, however, IT workers are in very short supply and everyone is aware of the game that companies play and what this term truly means.

It might be assumed that IT workers would still consider taking Contract to Hire positions because they are willing to take on some risk and hope to convince the employer that conversion, in their case, would be worthwhile.  And certainly some companies do run this process, and for some people it has worked out well.  It should be noted, however, that any contract position offers the potential of a conversion offer, and in positions where the term “Contract to Hire” is not used, conversions, or at least offers of conversion, are actually quite common.  It is specifically when a potential future conversion is dangled like a carrot that conversions become exceptionally rare.  An honest company and a quality workplace have no need to mention “to Hire” when bringing on contractors.

What happens, however, is more complex and requires study.  In general the best workers in any field are those who are already employed.  It goes without saying that the better you are, the more likely you are to be employed.  This does not mean that great people never change jobs or find themselves unemployed, but the better you are, the less time you will spend job hunting from a position of unemployment, and the worse you are, the more likely you are to be unemployed involuntarily.  That may seem obvious, but when you combine it with other information that we have, something is amiss.  A Contract to Hire position can never, effectively, entice currently working people in any way.  A great offer of true, full time employment with better pay and benefits might entice someone to give up an existing position for a better one; that happens every day.  But good people generally have good jobs and are not going to give up the positions that they have, with their safety and stability, to join an unknown situation that only offers a short term contract with a conversion carrot that almost certainly will never be offered.  It just is not going to happen.

Likewise, when good IT workers are unemployed they are not very likely to be in a position of desperation, and even then they are very unlikely to even look at a position listed as Contract to Hire (or contract at all), as most people want full time employment and good IT people will generally be far too busy turning down offers to waste time on Contract to Hire positions.  Good IT workers are flooded with employment opportunities, and being able to quickly filter out those that are not serious is a necessity.  The words “Contract to Hire” are one of the best low hanging fruits of this filtering process.  You do not need to see what company it is, what region it is in, what the position is or what experience they expect.  The position is not what you are looking for; move along, nothing to see here.

The idea that employers seem to have is the belief that everyone, employed and unemployed IT workers alike, is desperate and thankful for any possible job opening.  This is completely flawed.  Most of the industry is doing very well and there is no way to fill all of the existing job openings that we have today; IT workers are in demand.  Certainly there is always a certain segment of the IT worker population that is desperate for work for one reason or another: personal situations, geographic ties, an overstaffed technology specialization or, most commonly, not being very competitive.

What Contract to Hire positions do is filter out the best people.  They effectively filter out every currently employed IT worker completely.  Workers with in demand skills (like Linux, storage, cloud and virtualization) will be filtered out too; they are too able to find work anywhere to consider poor offerings.  Highly skilled individuals, even when out of work, will self filter as they are looking for something good, not just anything that comes along.

At the end of the day, the only people seriously considering Contract to Hire positions in any number, often even the only ones willing to respond to postings, are the truly desperate: those who either have so little experience that they do not realize how foolish the concept is or, far more commonly, those who are long out of work, have few prospects and feel that the incredible risks and low quality of work associated with Contract to Hire are acceptable.

This hiring problem begins a vicious loop of low quality, if one did not already exist; most likely, quality issues already existed before the company considered a Contract to Hire tactic.  Once good people begin to avoid a company, and this will happen even if only some positions are Contract to Hire because the quality of the hiring process is exposed, the quality of those who can be hired will begin to decline.  The worse it gets, the harder it is to turn the ship around.  Good people attract good people.  Good IT workers want to work with great IT workers to mentor them, to train them and to provide places where they can advance by doing a good job.  Good people do not seek to work in a shop staffed by the desperate, both because working only with desperate people is depressing and the quality of work is very poor, and because once a shop gains a poor reputation it is very hard to shake; good people will be very wary of having their own reputations tarnished by having worked in such a place.

Contract to Hire tactics signal desperation and a willingness to admit defeat on the part of an employer.  Once a company sinks to this level with its hiring it is no longer focusing on building great teams, acquiring amazing talent or providing a wonderful work environment.  Contract to Hire is not always something that every IT professional can avoid all of the time; all of us have times when we have to accept something less than ideal.  But it is important for all parties involved to understand their options and just what it means when a company moves into this mode.  Contract to Hire is not a tactic for vetting potential hires; it simply does not work that way.  Contract to Hire causes companies to be vetted and filtered out of consideration by the bulk of potential candidates, without those metrics ever being made available to hiring firms.  Potential candidates simply ignore them and write them off, sometimes noting who hires this way and avoiding them even when other options come along in the future.

As a company, if you desire to have a great IT department and hire good people, do not allow Contract to Hire to ever be associated with your firm.  Hire full time employees and hire intentional contractors, but do not play games with dangling false carrots, hoping that contractors will change their personalities or that full time employees will take huge personal risks for no reason; that is simply not how the real world works.