
You Aren’t Gonna Need It

I’m lucky that I work in IT but come from a software engineering background.  This gives me a somewhat different perspective on the world of IT, both in understanding much of what happens behind the scenes with release cycles and features and in applying knowledge gained from that industry to this one.

In recent years the concept of “You Aren’t Gonna Need It,” or YAGNI, has become a popular one in the software engineering community.  YAGNI arose from the Extreme Programming (XP) group of Agile developers and is stated as this rule: “Always implement things when you actually need them, never when you just foresee that you need them.”

I like to rephrase YAGNI in development as “Don’t invest in something until you know that you need it.”  The concept is the same: if you spend time and money building pieces that you are not sure you will ever need, you take on risks such as not getting value as early as possible (by focusing on the things that don’t matter yet while neglecting the things that do) and investing in technology that will never be used (because requirements change, the project gets canceled, and so on).

This concept ports over to IT extremely well.  Design and purchasing are both heavily influenced, or should be, by YAGNI.  Storage is a great example.  Don’t invest in storage today that you think you will use tomorrow.  We can list many reasons why early storage investment is bad: the business has little to no ability to accurately predict its own growth, IT is poor at predicting storage growth based on business growth, and the time-value of money means that buying storage today is more costly than buying the same storage tomorrow.  Any time that we buy based on predictions we take on risk.  Predictions rarely come true.

If we overbuy storage today we are paying a premium for that storage, because storage costs drop dramatically over time.  If we buy with 100% headroom and it takes three years or more before we use that headroom, we are paying too much for the storage and getting older technology.  Buying later would give us better insight into what we actually need at that time (not just capacity but speed, reliability, features, etc.), lower cost and more options.
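
To make the math concrete, here is a minimal sketch in Python using purely hypothetical figures for price decline, discount rate and capacity (none of these numbers come from a real purchase), comparing buying the headroom today with deferring the purchase until it is actually needed:

    # Hypothetical comparison: buy 100% storage headroom today versus buying
    # the same capacity three years from now.  Every figure is an assumption
    # chosen for illustration only.

    cost_per_tb_today = 500.0      # assumed price per usable TB today (USD)
    annual_price_decline = 0.25    # assumed 25% yearly drop in $/TB
    discount_rate = 0.05           # assumed time-value of money per year
    years_until_needed = 3         # the headroom sits unused for three years
    headroom_tb = 20               # the extra capacity being considered

    # Option A: buy the headroom now, at today's price.
    buy_now = headroom_tb * cost_per_tb_today

    # Option B: buy the same capacity when it is actually needed, at the
    # lower future price, then discount that spend back to today's dollars.
    future_price_per_tb = cost_per_tb_today * (1 - annual_price_decline) ** years_until_needed
    buy_later = (headroom_tb * future_price_per_tb) / (1 + discount_rate) ** years_until_needed

    print(f"Buy headroom today:                 ${buy_now:,.0f}")
    print(f"Buy when needed, in today's money:  ${buy_later:,.0f}")

Under these assumptions the deferred purchase costs roughly a third as much, and that is before counting the chance that the capacity is never needed at all or that a better technology is available by then.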

Overbuying is one risk; underbuying is another.  Underbuying is, obviously, less of a risk but still a concern.  If you buy today for needs three years out and at two years suddenly have a burst of need, you may have overinvested in a platform or technology that cannot meet your needs.

Storage is one example, but this can apply anywhere: software licenses, CPU capacity, memory, high availability technologies, even desktops.  Few shops would overbuy desktops by a hundred percent just to be ready for a predicted head count increase in three years, but strangely they won’t hesitate to do it elsewhere.

By buying what is required for the immediate need and holding purchasing decisions until later, there is a significant opportunity for cost savings and technology improvements.  In some cases the future need never arises, whether because of bad predictions, changes in market or strategy, or a change in technology direction either internally or externally.

Beyond purchasing, YAGNI can apply to network design.  It is not uncommon for large, complex designs to be proposed and implemented based on anticipated growth that is often years away and, to be honest, seldom very likely in a realistic world.  Building, for example, a complex high availability environment with expensive licensing, complex networking and lots of storage for expected future company growth, when just two servers and a nice backup plan is all that is cost justified today, is dangerous.  Not only must the necessary growth happen to justify the IT spend, but it must happen so quickly that the time-value of the money is justified and the cost of the technology does not drop so much as to have made implementing two systems more cost effective.  It is surprising how often putting in a smaller, stop-gap system and then implementing a larger scale system when needed turns out to be far cheaper, simply because the cost of building the larger, more complex system has dropped so much since the first system was put in place, and that is before taking into account the risk of bad predictions.

Spending early has an additional risk: it ties up corporate finances in unused architecture.  That money could be invested in other parts of the business in order to grow the business.  In extreme cases, overinvestment in infrastructure could contribute to a company failing completely, a self-fulfilling situation where not using YAGNI in and of itself created the situation where YAGNI most applied.  The architected solution was never needed because the company failed.

YAGNI is a risk mitigation process: work with the needs that you know rather than the needs that you only anticipate.

Maybe IT shops overbuy today because they are given specific budgets.  It is understandable that IT ends up in a technology grab, attempting to implement whatever it can when the whims of the business smile upon it.  This, however, is an extremely poor business practice.  Businesses need to realize that large sums of money are being wasted on IT because IT is forced to implement systems with an assumption of clairvoyance, based on arbitrary budgets from the business with no basis in the real world.  IT is stuck buying what it can “sell” to the business based on often very unclear factors, and the business often funds IT quite capriciously.  This creates a very unhealthy business and IT relationship in which IT wastes money because it has little choice and the business sees IT as a waste because it is not allowed to operate efficiently.

To fix this situation the business and IT need to work together.  IT needs to act more like a business-savvy unit, and the business needs to lean on IT for guidance rather than use prediction-based budgeting or become entangled in picking technological approaches without the technical understanding of the ramifications of those choices.  IT needs to be able to trust the business to make logical financial decisions, and the business needs to be able to trust IT to make logical technological decisions for the business.  The business drives IT; IT enables the business.  It is a symbiotic relationship.  If the business insists on making IT predict and operate on fixed budgets, IT will continue to be forced to overspend and overarchitect whenever possible in the hopes of being prepared for tomorrow when the budget may not be approved.  If IT were trusted to request what is needed and the business were trusted to fund technology needs at the appropriate time, both could operate more effectively for the common good.

Takeaway: Don’t invest early; you don’t know what technology or the business will do tomorrow.

Nearly As Good Is Not Better

As IT professionals we often have to evaluate several different approaches, products or techniques.  The IT field is vast and we are faced with so many options that it can become difficult to filter out the noise and find just the options that truly make sense in our environment.

One thing that I have found repeatedly creating a stumbling block for IT professionals is that they come from a stance of traditional, legacy knowledge (a natural situation, since all of our knowledge has to have come from sometime in the past) and attempt to justify new techniques or technologies in relation to the existing, established assumptions of “normal.”  This is to be expected.

IT is a field of change, however, and it is critical that IT professionals accept change as normal and not react to it as an undermining of traditional values.  It is not uncommon for people to feel that decisions that they have made in the past will be judged by the standards of today.  They feel that because there is a better option now, their old decision is somehow invalid or inadequate.  This is not the case.  The feeling is exacerbated in IT because decisions that have since been dramatically overturned in favour of new knowledge might only be a few years old, with the people who made them still doing the same job.  Change in IT is much more rapid than in most fields and we can often feel betrayed by good decisions that we made not long ago.

This reaction puts us into a natural, defensive position that we must rationally overcome in order to make objective decisions about our systems.

One trick that I have found is to reverse questions involving assumed norms.  That is to say, if you believe that you must justify a new technique against an old one and find that, while convincing, you are not totally swayed, perhaps you should try the opposite: justify the old, accepted approach against the new one.  I will give some examples that I see in the real world regularly.

Example one, in which we consider virtualization where none existed before.  Typically someone looking to do this will look for virtualization to provide some benefit that they consider to be significant.  Generally this results either in the feeling that virtualization doesn’t offer adequate benefits, or in incorporating other changes and going dramatically overboard on what should have been a smaller decision.  Instead, attempt to justify not using virtualization.  Treat virtualization as the accepted pattern (actually, it long has been, just not in the SMB space) and try to justify going with physical servers instead.

What we find is that, normally, our minds accepted that the physical machine only had to be “nearly as good” or “acceptable” in order to be chosen, even though virtualization was, in nearly all cases, “better”.  Why would we decide to use something that is not “better”?  Because we approached one as change and one as not change.  Our minds play tricks on us.

Example two, in which the traditional server storage design of two arrays, with the operating system on a RAID 1 array and the data partition on a second RAID 5 array, is compared with the newer standard of a single RAID 10 array holding both operating system and data.  If we argue from the aspect of the traditional approach we can make decent arguments, at times, that the old system is adequate for our needs.  Adequate seems good enough to not change our approach.  But argue from the other direction.  If we assume RAID 10 is the established, accepted norm (again, today it is), it clearly comes out as dramatically superior in nearly all scenarios.  If we try to justify why we would choose a split array with RAID 1 and RAID 5, we quickly see that it never provides a compelling value.  So sticking with RAID 10 is a clear win.
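
For a rough sense of the trade-off, here is a back-of-the-envelope sketch assuming six identical drives in both layouts and the commonly cited write penalties (2 for RAID 1 and RAID 10, 4 for RAID 5); the drive size and per-spindle IOPS figures are illustrative assumptions only:

    # Back-of-the-envelope comparison: 2-disk RAID 1 (OS) plus 4-disk RAID 5
    # (data) versus a single 6-disk RAID 10 holding everything.
    # Drive capacity and per-spindle IOPS are assumed, not measured.

    drive_tb = 2          # assumed capacity per drive (TB)
    drive_iops = 150      # assumed random write IOPS per spindle

    def raid1_usable(n):   return n * drive_tb / 2     # mirrored pair
    def raid5_usable(n):   return (n - 1) * drive_tb   # one drive of parity
    def raid10_usable(n):  return n * drive_tb / 2     # mirrored stripes

    def raid5_write_iops(n):  return n * drive_iops / 4   # write penalty of 4
    def raid10_write_iops(n): return n * drive_iops / 2   # write penalty of 2

    split_capacity = raid1_usable(2) + raid5_usable(4)
    print(f"RAID 1 + RAID 5: {split_capacity:.0f} TB usable, "
          f"~{raid5_write_iops(4):.0f} write IOPS on the data array")
    print(f"RAID 10:         {raid10_usable(6):.0f} TB usable, "
          f"~{raid10_write_iops(6):.0f} write IOPS across the whole array")

Under these assumed numbers the split design wins a little raw capacity, but RAID 10 roughly triples random write throughput, avoids stranding spindles on a lightly used OS volume and removes the RAID 5 rebuild exposure, which is exactly the kind of comparison the split array struggles with once it has to justify itself.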

This reversal of thinking can have a dramatic, eye-opening effect on decision making.  Making assumptions about starting points and forcing new ideas to significantly “unseat” incumbent thinking is dangerous.  It keeps us from moving forward.  In reality, most approaches should start from equal ground and the “best” option should win.  Far too often a solution is considered “adequate” when it is not the best.  Yes, a solution may very well work in a given situation, but why would we ever intentionally choose a less than superior solution (assuming that cost is factored into the definition of “best”)?

As IT professionals attempting to solve problems for a business we should be striving to recommend and implement the best possible solutions, yet we end up making do with less than ideal ones simply because we forget to consider the reasonable options equally against one another.  And it is important to remember that cost is included in deciding whether a solution is best or merely adequate.  The best solution is not a perfect solution but the best for the company, for the money.  Very often, though, solutions are chosen that cost more and do less simply because they are considered the de facto starting point and the alternatives are expected to dramatically outperform them rather than simply being “better”.

Taking a fresh look at decision making can help us become better professionals.

The True Cost of Printing

Of all of the things that are handled by your technology support department, printing is likely the one that you think about the least.  Printing isn’t fancy or exciting or a competitive advantage.  It is a lingering item from an age without portable reading devices, from an era before monitors.  Printers are going to be around for a long time to come (I do not wish to imply that they are not), but there is a lot to be considered when it comes to printers and much of that consideration can be easily overlooked.

When considering the cost of printing we often calculate the cost of the printer itself along with the consumables: paper and ink.  These things alone rack up a pretty serious per-page cost for an average business.  Planning for an appropriate lifespan and duty cycle of a printer is critical to keeping printing cost effective.  And do not forget the cost of parts replacement as well as stockpiled ink and paper.  These may seem minor, but printers often cause an investment in inventory that is never recovered.  When the printer dies, supplies for that printer are often useless.

The big, hidden cost of printing is none of these things.  The big cost is in supporting the printers, both upfront with the initial deployment and even more so in continuing support.  This is especially true in a smaller shop where the trend is to use many small printers rather than fewer large ones.  The cost of deploying and supporting a five thousand dollar central office printer is no higher than, and possibly lower than, the cost of deploying a two hundred dollar desktop inkjet.  The bigger the printer, the better the driver quality and vendor support that can usually be expected, making normal support tasks easier and more reliable.

At a minimum, rolling out a new desktop printer is going to take half an hour.  Realistically it is far more likely to take closer to an hour.  Go ahead, count up the time: time to deliver the printer to the station, time to unpack it, time to physically set it up, time to plug it in, time to install drivers and software, time to configure it and time to print a test page.  If it were a one time race, you could probably do these steps pretty quickly.  But printer support is not a production line and rarely, if ever, do you have someone performing these exact steps in a rapidly repeatable manner.  More likely, installing a printer is a “one off” activity that requires learning the new printer, tracking down the current driver and troubleshooting potential issues.

An hour to deploy a two hundred dollar printer can easily add fifty percent to the cost of the printer.  There are many factors that can cause this number to skyrocket, from a long travel distance between the receiving location and the desk, to missing cables, to incompatible drivers.  Any given printer could take the better part of a day to deploy when things go wrong.  And we are not even considering “disruption time” – the time in which the person receiving the printer is unable to work because someone is setting up a printer at their workstation.
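
To put rough numbers on it, here is a small sketch with purely illustrative assumptions for the technician rate, deployment time and print volume (none of these are measured figures):

    # Illustrative first-year cost of a small desktop printer.
    # Every figure below is an assumption chosen for demonstration only.

    printer_price = 200.0     # assumed purchase price (USD)
    tech_rate = 100.0         # assumed loaded cost of a technician hour
    deploy_hours = 1.0        # assumed time to deliver, unpack, cable, install and test
    pages_per_year = 5000     # assumed annual print volume
    ink_per_page = 0.05       # assumed ink/toner cost per page
    paper_per_page = 0.01     # assumed paper cost per page

    deployment = tech_rate * deploy_hours
    consumables = pages_per_year * (ink_per_page + paper_per_page)
    first_year = printer_price + deployment + consumables

    print(f"Purchase price:        ${printer_price:,.2f}")
    print(f"Deployment labor:      ${deployment:,.2f} ({deployment / printer_price:.0%} of purchase price)")
    print(f"Consumables, year one: ${consumables:,.2f}")
    print(f"First-year total:      ${first_year:,.2f}")

Under these assumptions the single hour of labor alone adds fifty percent to the purchase price, before any ongoing support, disruption time or troubleshooting is counted.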

Now that the printer has been set up and is, presumably, working just fine, we need to consider the ongoing cost of printer support.  It is not uncommon for a printer to sit, undisturbed, for years chugging along just fine.  But printers have a surprisingly high breakage rate caused by the nature of ink, the nature of paper, a propensity for printers to be reassigned to different physical locations, and changes or updates to the machines to which they are attached that introduce driver breakage.  Add these things together and the ongoing support cost of a printer can be surprisingly high.

I recently witnessed the support of a company with a handful of high profile printers.  In a run of documentation, physical cabling and driver issues, the printers were averaging between four and eight hours of technician time, per printer, to set up correctly.  Calculate the per-hour cost of that support and those printers, likely already costly, become outrageously expensive.

I regularly hear of shops that decide to re-purpose printers and spend many times the cost of the printers in labor hours as older printers are massaged into working with newer computer setups or vice versa. Driver incompatibility or unavailability is far more common than people realize.

Printers have the additional complication of being used in many different modes: directly attached to a workstation, directly attached and shared, directly attached to a print server, attached directly to the network, or attached to a print server over the network.  While this complexity rarely creates outright roadblocks, it does significantly slow work done on printers in a majority of businesses.

Printers, by their nature, are very difficult to support remotely.  Getting a print driver installed remotely is easy.  Knowing that something has printed successfully is something else entirely.  Considering that printer support should be one of the lower cost support tasks, the need for a physical on-site presence for nearly every printer task dramatically increases the cost of support, if only because it increases the time to perform a task and receive appropriate feedback.

When we take these costs and combine them with the volume of printing normally performed by a printer, we can start to form a picture of what printing is really costing.  The value of centralized printing suddenly takes on a new level of significance when seen through the eyes of support rather than through the eyes of purchasing.  And beyond centralizing printing where possible, it is important to eliminate unnecessary printing.

Good planning, strategic purchasing and a holistic approach can mitigate the potential for surprise costs in printing.


Just Because You Can…

I see this concept appear in discussions surrounding virtualization all of the time.  It is a broader, more general concept, but virtualization is the “hot, new technology” facing many IT organizations and seems to be the space where the “just because you can, doesn’t mean you should” problem currently rears its ugly head most prevalently.  As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do and do not blindly make decisions based on popular deployment methodologies or, worse, myths.

Virtualization itself, I should point out, should be a default decision today for those working in the x64 computing space, with systems deployed sans virtualization only when a clear and obvious necessity exists such as specific hardware needs, latency sensitive applications, etc.  Barring any specific need, virtualization is free to implement from many vendors and offers many benefits both today and in future-proofing the environment.

That being said, what I often see today is companies deploying virtualization not as a best practice but as a panacea for all perceived IT problems.  This it certainly is not.  Virtualization is a very important tool to have in the IT toolbox and one that we will reach for very often, but it does not solve every problem and should be treated like every other tool that we possess and used only when appropriate.

I see several things recur when virtualization discussions come up.  Many companies today are moving towards virtualization not because they have identified a business need but because it is the currently trending topic, and people feel that if they do not implement virtualization they will somehow be left behind or miss out on some mythical functionality.  This is good in that it increases virtualization adoption, but bad because good IT and business decision making processes are being bypassed.  What often happens is that in the wave of virtualization hype IT departments feel that not only do they have to implement virtualization itself but do so in ways that may not be appropriate for their business.

There are four things that I often see tied to virtualization, often accepted as virtualization requirements, whether or not they make sense in a given business environment.  These are server consolidation, blade servers, SAN storage and high availability or live failover.

Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget that there are other important reasons for implementing it.  Clearly consolidation is a great benefit for nearly all deployments (mileage may vary, of course) and is nearly always achievable simply through better utilization of existing resources.  It is a pretty rare company running more than a single physical server that cannot shave some amount of cost through limited consolidation, and it is not uncommon to see datacenter footprints decimated in larger organizations.
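
As a simple sketch of the idea (with assumed utilization and cost figures, and ignoring memory and I/O, which often constrain consolidation more than CPU does):

    # Rough consolidation estimate: how many lightly used physical servers
    # could share one virtualization host.  All inputs are assumptions.

    physical_servers = 10
    avg_utilization = 0.10          # assumed average CPU utilization per server
    host_ceiling = 0.60             # assumed comfortable ceiling on each host
    cost_per_server_year = 1500.0   # assumed yearly power, space and maintenance

    ratio = round(host_ceiling / avg_utilization)    # servers per host
    hosts_needed = -(-physical_servers // ratio)     # ceiling division
    retired = physical_servers - hosts_needed

    print(f"Consolidation ratio: {ratio}:1")
    print(f"Hosts needed: {hosts_needed}, physical servers retired: {retired}")
    print(f"Estimated yearly savings: ${retired * cost_per_server_year:,.0f}")

Even at half that assumed ratio the savings are usually noticeable, which is why some consolidation benefit shows up in almost any multi-server shop.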

Even in extreme cases where consolidation proves to be out of the question, though, it is not necessary to abandon virtualization projects.  These cases exist for companies with highly utilized systems and little budget for a preemptive consolidation investment.  But these shops can still virtualize systems “in place” on a one to one basis to gain the other benefits of virtualization today, and look to consolidate when hardware needs to be replaced tomorrow or when larger, more powerful servers become more cost effective in the future.  It is important not to rule out virtualization just because its most heralded benefit may not apply at the current time in your environment.

Blade servers are often seen as the natural choice for virtualization environments.  Blades may play better in a standard virtualization environment than they do with more traditional computational workloads, but this is both highly disputable and not necessarily the applicable question.  Being a good scenario for blades themselves does not make them a good choice for the business.  Just because blades perform better than normal when used in this way does not imply that they perform better than traditional servers – only that they have potentially closed the gap.

Blades need to be evaluated using the same harsh criteria when virtualizing as when not and, very often, they will continue to fail to provide the long term business value needed to choose them over more flexible alternatives.  Blades remain far from a necessity for virtualization and are often, in my opinion, a very poor choice indeed.

One of the most common misconceptions is that by moving to virtualization one must also move to shared storage such as SAN.  This mindset is the natural reaction to the desire to also achieve other benefits of virtualization which, if they do not strictly require SAN, benefit greatly from it.  The ability to load balance or fail over between systems is heavily facilitated by having a shared storage backend.  It is a myth that this is a hard requirement, but replicated local storage brings its own complexities and limitations.

But shared storage is far from a necessity of virtualization itself and, like everything, needs to be evaluated on its own.  If virtualization makes sense for your environment but you need no features that require SAN, then virtualize without shared storage.  There are many cases where local storage backed virtualization is an ideal deployment scenario.  There is no need to dismiss this approach without first giving it serious consideration.

The last feature commonly assumed to be necessary for virtualization is system level high availability, or instant failover, for your operating system.  Without a doubt, high availability at the system layer is a phenomenal benefit that virtualization brings us.  However, few companies needed high availability at this level prior to implementing virtualization, and the price tag of the necessary infrastructure and software to do it with virtualization is often so high as to be too expensive to justify.

High availability systems are complex and often overkill.  It is a very rare business system that requires transparent failover, even among the most critical systems, and those companies with that requirement would almost certainly already have failover processes in place.  I regularly see companies moving towards high availability when looking at virtualization simply because a vendor saw an opportunity to dramatically oversell the original requirements.  The cost of high availability is seldom justified by the potential loss of revenue from the associated reduction in downtime.  With non-highly-available virtualization, downtime for a failed hardware device might be measured in minutes if backups are handled well.  This means that high availability has to justify its cost by eliminating just a few minutes of unplanned downtime per year, minus any additional risk assumed through the added system complexity.  Even in the biggest organizations this is seldom justified on any large scale, and in a more moderately sized company it is uncommon altogether.  Yet today we find many small businesses implementing high availability systems at extreme cost on systems that could easily suffer multi-day outages with minimal financial loss, simply because the marketing literature promoted the concept.
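
A rough justification test makes the point concrete; the cost, downtime and revenue figures below are stand-in assumptions, not data from any real environment:

    # Simplified annual test: does the high availability layer pay for itself?
    # All inputs are illustrative assumptions.

    ha_cost_per_year = 20000.0       # assumed added licensing, shared storage and support cost
    downtime_minutes_avoided = 30    # assumed unplanned downtime eliminated per year
    loss_per_minute = 100.0          # assumed revenue lost per minute of downtime

    downtime_cost_avoided = downtime_minutes_avoided * loss_per_minute

    print(f"Annual cost of the HA layer:  ${ha_cost_per_year:,.0f}")
    print(f"Annual downtime cost avoided: ${downtime_cost_avoided:,.0f}")
    if downtime_cost_avoided < ha_cost_per_year:
        print("Under these assumptions, high availability does not pay for itself.")
    else:
        print("Under these assumptions, high availability is justified.")

Unless the downtime avoided is measured in many hours or the per-minute loss is very large, this arithmetic rarely favors the added cost and complexity.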

Like anything, virtualization and all of the associated possibilities that it brings to the table need to be evaluated individually in the context of the organization considering them.  If an individual feature does not make sense for your business, do not assume that you have to purchase or implement that feature.  Many organizations virtualize but use only a few, if any, of these “assumed” features.  Don’t look at virtualization as a black box; look at the parts and consider them like you would consider any other technology project.

What often happens is a snowball effect where one feature, likely high availability, is assumed to be necessary without the proper business assessment being performed.  Then a shared storage system, often assumed to be required for high availability, is added as another assumed cost.  Even if high availability features are never purchased, the decision to use SAN might already have been made and never revisited after the plan changes.  It is very common, in my experience, to find projects of this nature where more than fifty percent of the total expenditure goes to products that the purchaser cannot even explain the reason for having purchased.

This concept does not stop at virtualization.  Extend it to everything that you do.  Keep IT in the perspective of the business and don’t assume that choosing one technology automatically means that you must adopt the other technologies popularly associated with it.