You Aren’t Gonna Need It

I’m lucky that I work in IT but come from a software engineering background.  This gives me a bit of a different perspective on the world of IT, both in understanding much of what is happening behind the scenes with release cycles and features and in applying knowledge gained from that industry to this one.

In the software engineering community in recent years, the concept of “You Aren’t Gonna Need It,” or YAGNI, has become a popular one.  YAGNI arose from the Extreme Programming (XP) group of Agile developers and is stated as this rule: “Always implement things when you actually need them, never when you just foresee that you need them.”

I like to rephrase YAGNI for development as “Don’t invest in something until you know that you need it.”  The concept is the same: if you spend time and money building pieces that you are not sure you will ever need, you take on risks such as not getting value as early as possible (by focusing on things that don’t matter yet while neglecting things that do) and investing in technology that will never be used (because requirements change, the project gets canceled, etc.)

This concept ports over to IT extremely well.  Design and purchasing are both heavily influenced, or should be, by YAGNI.  Storage is a great example.  Don’t invest in storage today that you think you will use tomorrow.  We can list many reasons why early storage investment is bad: the business has little to no ability to accurately predict its own growth, IT is poor at predicting storage growth based on business growth, and the time-value of money means that buying storage today is more costly than buying the same storage tomorrow.  Any time we buy based on predictions we take on risk.  Predictions rarely come true.

If we overbuy storage today we pay a premium for it, because storage costs drop dramatically over time.  If we buy with 100% headroom and it takes three years or more before we use that headroom, we have paid too much for the storage and acquired older technology.  Buying later would give us better insight into what we actually need at that time (not just capacity but speed, reliability, features, etc.), lower cost and more options.
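A quick back-of-the-envelope sketch makes the premium concrete.  The numbers here are purely hypothetical (a fixed price per terabyte and a steady annual price decline); real declines vary, but the direction of the effect is the same.

```python
# Hypothetical illustration of the overbuying premium.  Assumes the
# price per TB falls at a steady rate each year; real declines vary,
# but the direction of the effect is the same.

price_per_tb_today = 100.0   # dollars per TB today (assumed)
annual_price_decline = 0.25  # 25% price drop per year (assumed)
years_until_needed = 3       # when the extra capacity is actually used

# Option A: buy 20 TB today (10 TB needed now plus 100% headroom).
buy_all_now = 20 * price_per_tb_today

# Option B: buy 10 TB today and the second 10 TB when actually needed.
future_price = price_per_tb_today * (1 - annual_price_decline) ** years_until_needed
buy_as_needed = 10 * price_per_tb_today + 10 * future_price

print(f"Buy all up front:   ${buy_all_now:,.2f}")
print(f"Buy when needed:    ${buy_as_needed:,.2f}")
print(f"Premium paid early: ${buy_all_now - buy_as_needed:,.2f}")
```

And that premium is before counting the time-value of the money itself or the risk that the predicted growth never materializes at all.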

Overbuying is one risk; underbuying is another.  Underbuying is, obviously, less of a risk but still a concern.  If you buy today for needs three years out and at two years suddenly have a burst of need, you may find that you have invested in a platform or technology that cannot meet your actual needs.

Storage is one example, but this applies anywhere from software licenses to CPU capacity, memory, high availability technologies, even desktops.  Few shops would overbuy desktops by a hundred percent just to be ready for a predicted head count increase in three years, but strangely they won’t hesitate to do it elsewhere.

By buying what is required for the immediate need and holding purchasing decisions until later, there is a significant opportunity for cost savings and technology improvements.  In some cases the future need may never arise, whether because of bad predictions, changes in market or strategy, or a change in technology direction, either internally or externally.

Beyond purchasing, YAGNI can apply to network design.  It is not uncommon for large, complex designs to be proposed and implemented based on anticipated growth that is often years away and, to be honest, seldom very likely in a realistic world.  Building, for example, a complex high availability environment with expensive licensing, complex networking and lots of storage for expected company growth, when just two servers and a solid backup plan are all that is cost justified today, is dangerous.  Not only must the necessary growth happen to justify the IT spend, it must happen so quickly that the time-value of the money is justified and the cost of the technology does not drop so much that implementing two systems would have been more cost effective.  It is surprising how often putting in a smaller, stop-gap system and then implementing a larger scale system when needed turns out to be far cheaper, simply because the cost of building the larger, more complex system has dropped so much since the first system was put in place, and that is before taking into account the risk of bad predictions.

Spending early has an additional risk: it ties up corporate finances in unused architecture.  That money could be invested in other parts of the business in order to grow the business.  In extreme cases, overinvestment in infrastructure could contribute to a company failing completely, a self-fulfilling situation where not using YAGNI in and of itself created the situation where YAGNI most applied.  The architected solution was never needed because the company failed.

YAGNI is a risk mitigation process: working with the needs that you know rather than the needs that you anticipate.

Perhaps IT shops overbuy today because they are given specific budgets.  It is understandable that IT ends up in a technology grab, attempting to implement whatever it can when the whims of the business smile upon it.  This, however, is an extremely poor business practice.  Businesses need to realize that large sums of money are being wasted on IT because IT is forced to implement systems with an assumption of clairvoyance, based on arbitrary budgets from the business with no basis in the real world.  IT is stuck buying what it can “sell” to the business based on often very unclear factors, and the business often funds IT quite capriciously.  This creates a very unhealthy business and IT relationship where IT wastes money because it has little choice and the business sees IT as a waste because it is not allowed to operate efficiently.

To fix this situation the business and IT need to work together.  IT needs to act more like a business-savvy unit, and the business needs to lean on IT for guidance rather than use prediction-based budgeting or become entangled in picking technological approaches without the technical understanding of the ramifications of those choices.  IT needs to be able to trust the business to make logical business financial decisions, and the business needs to be able to trust IT to make logical technological decisions for the business.  The business drives IT; IT enables the business.  It is a symbiotic relationship.  If the business insists on making IT predict and operate on fixed budgets, IT will continue to be forced to overspend and overarchitect whenever possible in the hopes of being prepared for tomorrow, when the budget may not be approved.  If IT were trusted to request what is needed and the business were trusted to fund technology needs at the appropriate time, both could operate more effectively for the common good.

Takeaway: Don’t invest early; you don’t know what technology or the business will do tomorrow.

Where Windows 8 Fails

There is a lot of talk about why people love or hate Windows 8, but much of it comes from the perspective of the IT department, and the big picture often seems to be dropped completely.  Overall, Windows 8 is a great operating system delivering a lot of cool new features and building on the legacy of Windows Vista and, in turn, Windows 7.  We have a stable, mature kernel and a lot of new efficiency in the system itself.

To really look at Windows 8 we need to look at the historic value of Windows as a desktop operating system.  For many years, even as far back as the late 1990s, Windows has competed against other desktop options on two fundamental premises.  The first is that moving to the next version of Windows is less disruptive and requires less retraining of staff than moving to a competing platform, allowing the majority of end users to remain comfortable and efficient even when going through major desktop upgrades.  The second is that the majority of business applications are written for Windows, so moving to another platform severely limits application options.

Windows provides ancillary benefits, of course, such as a tight security model, well known support processes, a massive user community, IT departments well versed in supporting it, excellent training and certification programs and good change processes.  But to a business selecting its next computing platform, continuity of usability and application support are the features that have traditionally driven the almost blind adoption of subsequent Windows versions year after year.

What makes Windows 8 unique in the long history of Windows desktop operating environments is that, for the very first time since the Windows 3.1 days, there is a major change in the look, feel and usability of the desktop environment, leaving many users stranded and confused in extreme cases and, in most cases, at least inefficient and frustrated.  Windows has never before departed from the basic need to make sure that users felt as little pain as possible moving from version to version; the need for retraining was basically out of the question beyond quick highlights showing where something had moved or showing off new features.  Windows 95 was the most extreme change of the past twenty-odd years of Windows desktops, and compared to Windows 8 it was relatively trivial.

With Windows 8 the move to the latest Windows edition is so dramatic that it begs comparison to the competition.  It is not that Windows 8 is bad (it is quite good), but that it doesn’t deliver the traditional Windows desktop upgrade value proposition, either in user experience or in being a unique application platform, as the majority of modern business applications are desktop platform agnostic, running in the web browser.  This leaves Windows in a very precarious position.  There are Linux desktops, for example, that offer user experiences far closer to Windows 7 than Windows 8 does.  This, combined with the widespread use of enterprise web-based applications, means that, in theory, Windows 8 is no longer the simple upgrade path for desktops but is, in fact, the more difficult option, requiring more training, more time and more pain for users and, from what we have seen, more long term loss of productivity, as Windows 8 simply lacks the end-user efficiencies of most non-Windows platforms (Linux, Mac OSX and BSD.)

I’ve heard many people attempt to defend Windows 8, but the defense seems, to me, to be universally centered around mitigating Windows 8’s flaws rather than finding places where it excels.  That users should not avoid it because they “haven’t taken time to learn how to deal with it”, that users should learn “keyboard shortcuts to make up for GUI deficiencies”, that they should “work hard to customize the Metro interface to make it less painful” or that “users should remove and/or replace troublesome Windows applications with more functional third party components” all, to me, sound like failures of the platform rather than reasons why Windows 8 is a good choice.  Yes, Windows 8 can certainly be made to be functional.  But Mac OSX or Linux Mint, as examples, solve all of these issues out of the box.  Users can hit the ground running and remain productive in the future.

From an IT support perspective there is a lot of pressure to maintain the status quo.  While Windows 8 is a departure for users, it does not represent any significant change from supporting past Windows versions.  The tools and techniques are the same.  The experience and skills acquired over many years can be leveraged against Windows 8, and everyone comes to Windows 8 fresh, so if there are new skills to be learned, existing Windows desktop administrators and supporters are in the best position to learn them first.  Windows 8 continues to be the best job retention gamble and best leverages the in-place support teams.  Moving to any new platform means that completely new skills and approaches need to be learned, new vendors need to be engaged and the risk of large portions of the department being replaced with outsiders already possessing those skills looms large.

For end users, though, pressures might be the opposite.  IT needs to keep perspective that IT is not the end user of technology but the supplier of it.  The business and the business users are the end users of technology and it is the role of the IT department to support those needs.  If Windows 8 fails to deliver business value in comparison to competing options then it is IT’s job to deliver alternatives even if that means retraining for IT in order to make the business run more smoothly and more cost effectively.

When we step back and do a business-by-business analysis, Windows 8 is going to continue to dominate; there is no question.  But a shift is clear: Windows desktops are no longer the clear and obvious choice for end user ease of use and continued efficiency.  Microsoft is playing a dangerous game of alienating those whom it has courted the longest.  Users looking for an easy transition will have to think twice about Windows 8 and the future of the Windows desktop.  Windows is already suffering from having lost the mobile and tablet space to the iOS and Android camps and has seen heavy market attrition in netbooks to Linux and in the traditional desktop and laptop space to Mac OSX.  Windows’ areas of market dominance are growing fewer and those remaining are shrinking.  Ten years ago running a company without Windows on the desktop was unthinkable.  Today it is a very real consideration, and both Mac OSX and many Linux distros have an opportunity to go through one or even several iterations before Windows 8’s replacement OS arrives from Microsoft, giving them time to polish, advance and attract users who will be considering the Windows 8 move over the next few years.

Windows 8 fails to continue providing the Windows desktop’s traditional value.  Windows 8 fails to deliver new benefits to justify it in its own right.  Windows 8 fails to convince users and businesses of Microsoft’s long term vision.


Nearly As Good Is Not Better

As IT professionals we often have to evaluate several different approaches, products or techniques.  The IT field is vast and we are faced with so many options that it can become difficult to filter out the noise and find just the options that truly make sense in our environment.

One thing that I have found repeatedly creating a stumbling block for IT professionals is that they come from a stance of traditional, legacy knowledge (a natural situation, since all of our knowledge has to have come from sometime in the past) and attempt to justify new techniques or technologies in relation to the existing, established assumptions of “normal.”  This is to be expected.

IT is a field of change, however, and it is critical that IT professionals accept change as normal and not react to it as an undermining of traditional values.  It is not uncommon for people to feel that decisions they made in the past will be judged by the standards of today.  They feel that because there is a better option now, their old decision is somehow invalid or inadequate.  This is not the case.  The feeling is exacerbated in IT because decisions made in the past that have been dramatically overturned in favour of new knowledge might be only a few years old, with the people who made them still doing the same job.  Change in IT is much more rapid than in most fields, and we can often feel betrayed by good decisions that we made not long ago.

This reaction puts us into a natural, defensive position that we must rationally overcome in order to make objective decisions about our systems.

One trick that I have found is to reverse questions involving assumed norms.  That is to say, if you believe that you must justify a new technique against an old one and find that, while convincing, you are not totally swayed, perhaps you should try the opposite: justify the old, accepted approach against the new one.  I will give some examples that I see in the real world regularly.

Example one, in which we consider virtualization where none existed before.  Typically someone looking to do this will look for virtualization to provide some benefit that they consider to be significant.  Generally this results in someone feeling either that virtualization doesn’t offer adequate benefits or that they must incorporate other changes, ending up going dramatically overboard for what should have been a smaller decision.  Instead, attempt to justify not using virtualization.  Treat virtualization as the accepted pattern (actually, it long has been, just not in the SMB space) and try to justify going with physical servers instead.

What we find is that, normally, our minds accepted that the physical machine only had to be “nearly as good” or “acceptable” in order to be chosen, even though virtualization was, in nearly all cases, “better”.  Why would we decide to use something that is not “better”?  Because we approached one as change and one as not change.  Our minds play tricks on us.

Example two, in which traditional server storage is two arrays, with the operating system on one RAID 1 array and the data partition on a second RAID 5 array, versus the new standard of a single RAID 10 array holding both operating system and data.  If we argue from the aspect of the traditional approach we can make decent arguments, at times, that the old system is adequate for our needs.  Adequate seems good enough to not change our approach.  But argue from the other direction.  If we assume RAID 10 is the established, accepted norm (again, it is today) then it is clear that it comes out as dramatically superior in nearly all scenarios.  If we try to justify why we would choose a split array with RAID 1 and RAID 5, we quickly see that it never provides a compelling value.  So sticking with RAID 10 is a clear win.
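A rough, back-of-the-envelope comparison shows why.  The figures below are assumptions for illustration only (six 1 TB drives at roughly 150 random IOPS each, and the textbook write penalties of 2 for RAID 1/10 and 4 for RAID 5); real numbers vary by drive and workload.

```python
# Back-of-the-envelope comparison of the two layouts, using assumed
# values: six 1 TB drives at ~150 random IOPS each.  Write penalties
# are the textbook values: ~2 for RAID 1/10, ~4 for RAID 5.

tb_per_drive = 1
iops_per_drive = 150

# Traditional split: 2-drive RAID 1 for the OS, 4-drive RAID 5 for data.
r1_capacity = 1 * tb_per_drive        # RAID 1: half of 2 drives usable
r5_capacity = (4 - 1) * tb_per_drive  # RAID 5: n - 1 drives usable
split_capacity = r1_capacity + r5_capacity

# Each RAID 5 random write costs ~4 disk operations (read data, read
# parity, write data, write parity); each RAID 1/10 write costs ~2.
r5_write_iops = 4 * iops_per_drive / 4
r10_write_iops = 6 * iops_per_drive / 2

r10_capacity = 6 * tb_per_drive / 2   # RAID 10: half of 6 drives usable

print(f"RAID 1 + RAID 5 split: {split_capacity} TB usable, "
      f"~{r5_write_iops:.0f} random write IOPS on the data array")
print(f"Single RAID 10:        {r10_capacity:.0f} TB usable, "
      f"~{r10_write_iops:.0f} random write IOPS shared by OS and data")
```

The split layout yields a bit more raw capacity, but the single RAID 10 array delivers roughly three times the random write performance across the whole system, with simpler management on top, which is exactly the “dramatically superior in nearly all scenarios” result described above.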

This reversal of thinking can have a dramatic, eye-opening effect on decision making.  Making assumptions about starting points and forcing new ideas to significantly “unseat” incumbent thinking is dangerous.  It keeps us from moving forward.  In reality, most approaches should start from equal ground and the “best” option should win.  Far too often a solution is considered “adequate” when it is not the best.  Yes, a solution may very well work in a given situation, but why would we ever intentionally choose a less than superior solution (assuming that cost is factored into the definition of best)?

As IT professionals attempting to solve problems for a business we should be striving to recommend and implement the best possible solutions, but we often make do with less than ideal ones simply because we forget to consider the reasonable options equally against one another.  And it is important to remember that cost is included in deciding whether a solution is best or merely adequate.  The best solution is not a perfect solution but the best for the company, for the money.  Yet very often solutions are chosen that cost more and do less simply because they are considered the de facto starting point and the alternatives are expected to dramatically outperform them rather than simply being “better”.

Taking a fresh look at decision making can help us become better professionals.

Choosing a Storage Type

While technicalities defining which type of storage is which can become problematic, the underlying concepts are pretty well understood.  There are four key types of storage that we use in everyday server computing: local disks, DAS, NAS and SAN.  Choosing which we want to use, in most cases, can be broken down into a relatively easy formula.

The quick rule of thumb for storage should be: Local before DAS, DAS before NAS, NAS before SAN.  Or as I like to write it:

Local Disks -> DAS -> NAS -> SAN

To use this rule you simply start with your storage requirements in hand and begin on the left-hand side.  If local disks meet your requirements, then almost certainly they are your best choice.  If they don’t, move to the right and check whether DAS will meet your requirements.  If so, great; if not, continue the process.
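As a trivial way to picture the rule, here is a minimal sketch.  The requirement checks are placeholders (in practice each one is a judgment call about capacity, sharing, block vs. file access, etc.), and the example workload is purely hypothetical.

```python
# A minimal sketch of the left-to-right rule of thumb.  The
# requirement checks are placeholders; in practice each one is a
# judgment call about capacity, sharing, block vs. file access, etc.

STORAGE_OPTIONS = ["Local Disks", "DAS", "NAS", "SAN"]

def choose_storage(meets_requirements):
    """Walk left to right and return the first option that fits.

    meets_requirements: a function that takes an option name and
    returns True if that option satisfies the workload's needs.
    """
    for option in STORAGE_OPTIONS:
        if meets_requirements(option):
            return option
    return None  # nothing fits; the requirements need a rethink

# Hypothetical workload: needs shared block storage, so local disks
# (not sharable) and NAS (not block) are ruled out.
fits = {"Local Disks": False, "DAS": True, "NAS": False, "SAN": True}
print(choose_storage(lambda option: fits[option]))  # -> DAS
```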

That’s the rule of thumb, so if that is all you need, there you go.  But we will dive into the “why” of the rule below.  The quick overview is that on the left we get speed and reliability at the lowest cost.  As we move to the right, complexity increases, and typically price does as well.  The last two, while very different, are actually the most alike in many ways due to their networked nature.

Local Disks:  Local drives inside your server chassis are your best bet for most tasks.  Being inside the chassis means the least money spent on extra containers to hold and power the drives, the least physical risk, the most solid connection technologies, the shortest distance and the fewest potential bottlenecks.  Being raw disks, local disks are block devices.

Direct Attached Storage:  DAS is, more or less, local drives housed outside of the server chassis.  The server itself will see them exactly like any other local drives, making them very easy to use.  DAS is simple but still involves extra external containers and extra cables, which adds cost and some complexity.  DAS makes it easier to attach multiple servers to the same set of drives, something that is almost impossible, and always cumbersome, with local disks.  So DAS is effectively our first type of physically sharable storage.  Being identical to local disks, DAS is a form of block device.

Network Attached Storage: NAS is unique in that it is the only non-block device from which we have to choose.  A NAS, or a traditional file server (they are truly one and the same), is the first of our technologies designed to run over a network.  This adds a lot of complication.  NAS shares storage out at the filesystem level.  A NAS is an intelligent device that allows users over the network to easily and safely share storage, because the NAS has the necessary logic on board to handle multiple users at one time.  NAS is very easy for anyone to use and is commonly used even by people at home.

Storage Area Network: SAN is an adaptation of DAS with the addition of a network infrastructure, allowing the SAN to behave as a remote hard drive (block device) that an operating system sees as no different from any other hard drive attached to it.  SANs require advanced networking knowledge, are surrounded by a large amount of myth and rumor, are poorly understood by the average IT professional and are generally complex to use and understand.  Because they lack the logic of a NAS, they effectively expose a hard drive directly to the network, making it trivially easy to corrupt and destroy data.  It is, in fact, so easy to lose data on a SAN through misconfiguration that the most commonly expected use of a SAN (having multiple servers safely share a single volume, which requires a clustered filesystem, not just a SAN) is a use case for which a SAN alone cannot be used.

Of course there is much grey area.  What is normally considered a DAS can be turned into a SAN.  A SAN can be direct connected.  A NAS can be direct connected.  Local storage can act as either NAS or SAN depending on configuration, such as with a VSA (Virtual Storage Appliance.)  Many devices are simultaneously NAS and SAN, with the determination made by configuration, not by the physical device itself.  But in generally accepted use, the terms are mostly straightforward.

The point being that as we move from left to right in our list we move from simple and easy to difficult and complex.  SAN itself is a rock solid technology; it is the introduction of humans and their tendency to do dangerous things easily with SAN that makes it a dangerous storage technique for the average user.  As with everything in IT, keeping our technologies and processes simple brings stability and security and, often, cost savings as well.

There are many times when movement to “the right” is necessary.  Local disks do not scale well and can become too expensive to maintain for certain types of larger deployments.  DAS, likewise, doesn’t scale well in many cases.  NAS scales well but, being a non-block protocol, is a bit unique and doesn’t always work for our purposes, a good example being Hyper-V, which requires a block device for storage.  SAN is the final catchall of storage.  If nothing else works, SAN is always there to fall back on, or, as I like to say, SAN is the storage of last resort.

This is a very high level look at the basics of choosing a storage approach, a common IT task that must be done with great regularity.  I did not intend this post, in any way, to explain deep storage knowledge but simply to provide a handy guide to where to start looking at storage options.  Exceptions and special cases abound, but it is extremely common to skip the best option, go straight to considering something big, expensive and complex, and rapidly forget that something much simpler might do the same job in a far superior manner.  The underlying concept is that the simplest solution that meets the need is usually the best.
