All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves.) Began transitioning to IT in 1994 with my first mixed role in system administration.

Nearly As Good Is Not Better

As IT professionals we often have to evaluate several different approaches, products or techniques.  The IT field is vast and we are faced with so many options that it can become difficult to filter out the noise and find just the options that truly make sense in our environment.

One thing that I have found repeatedly creating a stumbling block for IT professionals is that they come from a stance of traditional, legacy knowledge (a natural situation since all of our knowledge has to have come from sometime in the past) and attempt to justify new techniques or technologies in relation to the existing, established assumptions of “normal.”  This is to be expected.

IT is a field of change, however, and it is critical that IT professionals accept change as normal and not react to it as an undermining of traditional values.  It is not uncommon for people to feel that decisions that they have made in the past will be judged by the standards of today.  They feel that because there is a better option now that their old decision is somehow invalid or inadequate.  This is not the case.  This is exacerbated in IT because decisions made in the past that have been dramatically overturned in favour of new knowledge might only be a few years old and the people who made them are still doing the same job.  Change in IT is much more rapid than in most fields and we can often feel betrayed by good decisions that we have made not long ago.

This reaction puts us into a natural, defensive position that we must rationally overcome in order to make objective decisions about our systems.

One trick that I have found is to reverse questions involving assumed norms.  That is to say, if you believe that you must justify a new technique against an old one and find that, while convincing, you are not totally swayed, perhaps you should try the opposite – justify the old, accepted approach against the new one.  I will give some examples that I see in the real world regularly.

Example one, in which we consider virtualization where none existed before.  Typically someone looking to do this will look for virtualization to provide some benefit that they consider to be significant.  Generally this results in someone feeling that virtualization either doesn’t offer adequate benefits or that they must incorporate other changes and end up going dramatically overboard for what should have been a smaller decision.  Instead, attempt to justify not using virtualization.  Treat virtualization as the accepted pattern (actually, it long has been, just not in the SMB space) and try to justify going with physical servers instead.

What we find is that, normally, our minds accepted that the physical machine only had to be “nearly as good” or “acceptable” in order to be chosen even though virtualization was, in nearly all cases, “better”.  Why would we decide to use something that is not “better”?  Because we approached one as change and one as not change.  Our minds play tricks on us.

Example two, in which traditional server storage is two arrays with the operating system on one RAID 1 array and the data partition on a second RAID 5 array versus the new standard of a single RAID 10 array holding both operating system and data.  If we argue from the aspect of the traditional approach we can make decent arguments, at times, that we can make the old system adequate for our needs.  Adequate seems good enough to not change our approach.  But argue from the other direction.  If we assume RAID 10 is the established, accepted norm (again, it is today) then it is clear that it comes out as dramatically superior in nearly all scenarios.  If we try to justify why we would choose a split array with RAID 1 and RAID 5 we quickly see that it never provides a compelling value.  So sticking with RAID 10 is a clear win.
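For a concrete sense of the trade-off, here is a rough back-of-the-envelope sketch.  It assumes six identical 1 TB drives (a hypothetical configuration chosen purely for illustration) and tallies nothing more than usable capacity, spindle counts and nominal write penalties; it is not a benchmark or a sizing tool.

# Rough comparison of a split RAID 1 + RAID 5 layout versus a single
# RAID 10 array, assuming six identical 1 TB drives.  Illustrative only.

DRIVE_TB = 1

def raid1(drives):
    # RAID 1 mirror: half the raw capacity, nominal write penalty of 2
    return {"usable_tb": drives * DRIVE_TB / 2, "spindles": drives, "write_penalty": 2}

def raid5(drives):
    # RAID 5: one drive's worth of parity, nominal write penalty of 4
    return {"usable_tb": (drives - 1) * DRIVE_TB, "spindles": drives, "write_penalty": 4}

def raid10(drives):
    # RAID 10: striped mirrors, half the raw capacity, nominal write penalty of 2
    return {"usable_tb": drives * DRIVE_TB / 2, "spindles": drives, "write_penalty": 2}

# Traditional split: OS on a 2-drive RAID 1, data on a 4-drive RAID 5.
split = {"os": raid1(2), "data": raid5(4)}

# Modern approach: OS and data together on a single 6-drive RAID 10.
single = raid10(6)

print("Split RAID 1 + RAID 5:", split)
print("Single RAID 10:       ", single)

The split buys a little extra usable capacity (4 TB versus 3 TB) but confines the operating system to two spindles and the data to four, and saddles the data array with RAID 5’s higher write penalty and riskier rebuilds, while the single RAID 10 array lets every workload share all six spindles.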

This reversal of thinking can have a dramatic, eye-opening effect on decision making.  Making assumptions about starting points and forcing new ideas to significantly “unseat” incumbent thinking is dangerous.  It keeps us from moving forward.  In reality, most approaches should start from equal ground and the “best” option should win.  Far too often a solution is considered “adequate” when it is not the best.  Yes, a solution may very well work in a given situation, but why would we ever intentionally choose a less than superior solution (assuming that cost is factored into the definition of best)?

As IT professionals attempting to solve problems for a business we should be striving to recommend and implement the best possible solutions, not making do with less than ideal ones simply because we forgot to consider the reasonable options equally against one another.  And it is important to remember that cost is inclusive in deciding whether a solution is best or merely adequate.  The best solution is not a perfect solution but the best for the company, for the money.  But very often solutions are chosen that cost more and do less simply because they are considered the de facto starting point and the alternatives are expected to dramatically outperform them rather than simply being “better”.

Taking a fresh look at decision making can help us become better professionals.

Choosing a Storage Type

While technicalities defining which type of storage is which can become problematic, the underlying concepts are pretty well understood.  There are four key types of storage that we use in everyday server computing: local disks, DAS, NAS and SAN.  Choosing which we want to use, in most cases, can be broken down into a relatively easy formula.

The quick rule of thumb for storage should be: Local before DAS, DAS before NAS, NAS before SAN.  Or as I like to write it:

Local Disks -> DAS -> NAS -> SAN

To use this rule you simply start with your storage requirements in hand and begin on the left hand side.  If local disks meet your requirements, then almost certainly they are your best choice.  If they don’t meet your requirements, move to the right and check whether DAS will meet them.  If so, great; if not, continue the process.

That’s the rule of thumb, so if that is all you need, there you go.  But we will dive into the “why” of the rule below.  The quick overview is that on the left we get speed and reliability at the lowest cost.  As we move to the right, complexity increases and so, typically, does price.  The last two, while very different, are actually the most alike in many ways due to their networked nature.

Local Disks:  Local drives inside your server chassis are your best bet for most tasks.  Being inside the chassis means the least amount of money spent on extra containers to hold and power the drives, least physical risk, most solid connection technologies, shortest distance and least amount of potential bottlenecks. Being raw disks, local disks are block devices.

Direct Attached Storage:  DAS is, more or less, local drives housed outside of the server chassis.  The server itself will see them exactly like any other local drives, making them very easy to use.  DAS is simple but still has extra external containers and extra cables.  This adds cost and some complexity.  DAS makes it easier to attach multiple servers to the same set of drives, something that is almost impossible, and at best cumbersome, with local disks.  So DAS is effectively our first type of physically sharable storage.  Being identical to local disks, DAS is a form of block device.

Network Attached Storage: NAS is unique in that it is the only non-block device from which we have to choose.  A NAS, or a traditional file server (they are truly one and the same), is the first of our technologies designed to run over a network.  This adds a lot of complication.  NAS shares storage out at the filesystem level.  A NAS is an intelligent device that allows users over the network to easily and safely share storage because the NAS has the necessary logic on board to handle multiple users at one time.  NAS is very easy for anyone to use and is even commonly used by people at home.

Storage Area Network: SAN is an adaptation of DAS with the addition of a network infrastructure allowing the SAN to behave as a remote hard drive (block device) that an operating system sees as no different from any other hard drive attached to it.  SANs require advanced networking knowledge, are surrounded by a large amount of myth and rumor, are poorly understood by the average IT professional and are generally complex to use and understand.  Because they lack the logic of a NAS, they effectively expose a hard drive directly to the network, making it trivially easy to corrupt and destroy data.  It is, in fact, so easy to lose data on a SAN due to misconfiguration that the most commonly expected use of a SAN is a use case for which a SAN cannot be used.

Of course there is much grey area.  What is normally considered a DAS can be turned into a SAN.  A SAN can be direct connected.  NAS can be direct connected.  Local storage can act as either NAS or SAN depending on configuration such as with a VSA (Virtual Storage Appliance.)  Many devices are simultaneously NAS and SAN and the determination is by configuration, not by the physical device itself.  But in generally accepted use, the terms are mostly straightforward.

The point being that as we move from left to right in our list we move from simple and easy to difficult and complex.  SAN itself is a rock solid technology; it is the introduction of humans and their tendency to do dangerous things easily with SAN that makes it a dangerous storage technique for the average user.  As with everything in IT, keeping our technologies and processes simple brings stability and security and, often, cost savings as well.

There are many times when movement to “the right” is necessary.  Local disks do not scale well and can become too expensive to maintain for certain types of larger deployments.  DAS, likewise, doesn’t scale well in many cases.  NAS scales well but, being a non-block protocol, is a bit unique and doesn’t always work for our purposes, a good example being Hyper-V, which requires a block device for storage.  SAN is the final catchall of storage.  If nothing else works, SAN is always there to fall back on – or, as I like to say, SAN is the storage of last resort.
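As a minimal sketch of that left-to-right cascade, consider the following; the requirement flags (needs_external_capacity, needs_shared_disks, needs_network_attachment and needs_block_device) are hypothetical placeholders standing in for a real requirements analysis, not an actual sizing tool.

# A minimal sketch of the left-to-right storage selection cascade.
# The requirement flags are hypothetical placeholders used only to
# illustrate stopping at the first option that meets the need.

def choose_storage(req):
    # Local disks: simplest and cheapest; fine when the server's internal
    # bays hold enough capacity and no other server needs the same disks.
    if not req["needs_external_capacity"] and not req["needs_shared_disks"]:
        return "Local Disks"
    # DAS: local-style block storage in an external enclosure, and the
    # first option that can physically be shared among a few servers.
    if not req["needs_network_attachment"]:
        return "DAS"
    # NAS: file-level storage over the network; works unless the workload
    # demands a block device (for example, Hyper-V storage).
    if not req["needs_block_device"]:
        return "NAS"
    # SAN: the storage of last resort when nothing simpler fits.
    return "SAN"

example = {
    "needs_external_capacity": True,
    "needs_shared_disks": False,
    "needs_network_attachment": False,
    "needs_block_device": True,
}
print(choose_storage(example))  # -> DAS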

This is a very high level look at the basics of choosing a storage approach.  This is a common IT task that must be done with great regularity.  I did not intend this post, in any way, to convey deep knowledge of storage but simply to provide a handy guide to understanding where to start looking at storage options.  Exceptions and special cases abound, but it is extremely common to simply skip the best option, go straight to considering something big, expensive and complex and rapidly forget that something much simpler might do the same job in a far superior manner.  The underlying concept is that the simplest solution that meets the need is usually the best.

How I Learned To Stop Worrying and Love BYOD

Bring Your Own Devices (or BYOD) is one of those hot topics this year that seems to have every IT department worried.  What does BYOD mean for the future of IT?  People have already begun to call it the consumerization of IT and IT professionals everywhere are terrified that the traditional role of IT is ending and that BYOD is shifting all control into the hands of the end users.

Is this really the case?  In a world where security and control of data are becoming increasingly regulated and exposed and as the public takes a growing interest in how companies are securing their data it is safe to assume that the movement of the IT field is not going to be towards a loss of control.  And, in my experience, BYOD means exactly the opposite.

There is no ignoring the fact that BYOD signals many changes and demands IT departments rethink traditional approaches.  But is that such a bad thing?  The old model was one of a network castle.  The firewalls were the moat and all of our devices from servers to desktops sat huddled together inside the castle courtyard talking freely one to another.  One of the greatest fears was that one of those desktops would become “compromised” and unleash a fifth column attack from within the castle, where there were practically no defenses of which to speak.

The old model created a quagmire of issues and required complicated workarounds in order to accommodate modern changes in computing environments.  When businesses existed in only a single location or when businesses would regularly purchase leased lines connecting all of their offices the model worked rather well.  Once workers began to need to work remotely, whether at home or when on the road, the model became difficult to support and the concept of the VPN was introduced in order to extend the castle wherever it was needed.  VPNs changed how companies could physically exist but did so without addressing some fundamental issues with the architecture of a traditional IT infrastructure.

The solution to this infrastructure reinvention has been coming for a long time now: the movement towards web applications, “cloud services”, hosted applications, Software as a Service and the other terms for the new ways in which people were thinking about applications.  Slowly we started exposing applications to the “outside”.  We started simply with email, then basic web applications, and slowly more and more components of business infrastructure started to be exposed externally without requiring the use of a VPN.

The advent of smartphones accelerated this process as certain applications, email and calendaring being the biggest drivers, absolutely demanded extension to these mobile devices.  For the most part, IT departments did not even see a significant shift occurring.  Instead it was little pinholes, small changes as more and more of the tools used in the business were available without connecting to the VPN, without sitting inside the office.

Today a new business might legitimately ask its CIO: “Why do we even need a LAN?  What benefit do we get from everyone sitting on a single, physical network?”  There are still plenty of good reasons why a LAN might be needed.  But it is a valuable question to ask and the answer might surprise you.  I was asked this myself and the answer was that we didn’t need a LAN, every app was available through its own, secure channel, without a need for VPNs or a local network.

Where LANs continue to shine brightest is in desktop management.  If you need to lock down and control the actual end user equipment then LANs work their best here – currently.  This too will change in time.  But this is where BYOD becomes the secret weapon of the IT department.

BYOD, while creating its own raft of obvious complications, especially around the end user support expected after decades of total IT control of end user devices, offers the opportunity to eliminate the LAN, pull back the walls of the castle to surround only the core infrastructure where no end user ever need venture and drop the support of end user devices solidly into the lap of the end users themselves.  With modern LAN-less application publishing strategies (this includes web apps, remote desktop technologies and others) end user devices are effectively thin clients often providing no more processing capacity than is necessary to display the application.  They are a window into the infrastructure, not a gateway.  They look at the servers; they aren’t sitting inside the castle with them.

Thinking of end user devices as view panels or windows rather than computing devices is the key to making BYOD an advantage to the IT department rather than its bane.  Of course, this plays into the usual ebb and flow of fat and thin clients over the history of computing.  The tide will change again, but for now, this is our current opportunity.  End users want the illusion of control and the reality of picking the device that is best suited to their needs – which are almost strictly physical needs, whether of fashion or function.  IT departments want the reality of control and should be happy to allow end users to pick their own devices.  Everyone can win.

The key, of course, is eliminating legacy applications or finding workarounds.  Technological approaches such as VDI, terminal servers or even racks of datacenter-housed desktops potentially provide fallback strategies that can be accessed from nearly any device while “view” layer technologies like HTML 5 look to provide elegant, modern options for exposing applications, shifting display-related processing to the end user device and standardizing on a protocol that is likely to exist ubiquitously in the very near future.  The technologies are there today.

With the corporate network shrunk down to only the infrastructure servers and associated networking gear, IT departments suddenly have the potential for greater control and more flexibility while giving up little.  End users are happy, IT is happy.  BYOD is an opportunity for IT to exert greater control and tighter security, all while giving the impression of being approachable and flexible.

The Windows Desktop Cycle

Microsoft has been bringing out desktop operating environments for decades now and those of us who have been in the industry long enough are aware of a pattern that they use, perhaps unofficially, in bringing new technologies to market, one that those who have not had enough exposure to their releases over the years may have missed.  The release cycle for new Windows products is a very slow one with many years between each release, which makes it very difficult to see the pattern emerge if you have not been directly exposed to it for decades.  Researching the products in retrospect, especially with the public’s reaction to them in juxtaposition, is very difficult.

What is important is that Windows comes out in a flip-flop fashion with every other release being a “long term support, heavily stable” release and the alternate releases being the “new technology preview” releases.  This is not to say that any particular release is good or bad, but that one release is based around introducing a new system to the public and the next is a more polished release with fewer changes than its predecessor focused on long term adoption.

The goal of this release pattern should be obvious.  Whenever major changes come to such a widely used platform, the average user, even the average IT professional, tends to resist the change and be unhappy with it.  But after a while the new look, feel and features start to feel natural.  Then a slightly updated, slightly more polished version of the same features can be released and the general public feels like Microsoft has “learned its lesson” and they appreciate the same features that they disliked a few years before.  This approach works wonders in Microsoft’s mixed consumer and business world where they get home users to adopt the latest and greatest at home with OEM licenses bundled with the computers that they buy, while businesses can, and usually do, wait for the “every other” cycle, allowing them to deliver only the more mature of the two releases to their users, who have already lived through the pain of the changes at home.

Outside of the Windows world you can witness the same sort of adoption with the much maligned MS Office 2007 and MS Office 2010.  The former was universally hated because of the then new Ribbon interface.  The latter was much loved, mostly because people had already adapted to the Ribbon interface and now appreciated it, but also because Microsoft had time to learn from the 2007 release and tweak the Ribbon to be improved by 2010.

This pattern started long ago and can be seen happening, to some degree, even in the DOS-based Windows era (the Windows family starting from the very beginning and running up through Windows ME.)  Of the more recent family members Windows 3 was the preview, Windows 3.1 was the long term release, Windows 95 was the preview, Windows 98 the long term release and Windows ME was the preview.  Each one of the previews had poor reception, comparatively, due to the introduction of new ideas and interfaces.  Each of the long term releases outlived its counterpart preview release on the market and was widely loved.  It is a successful pattern.

In the modern era of Windows NT, starting with Windows NT 3.1 in 1993, the overarching pattern continued with NT 3.1 itself being the “preview” member of the new Windows NT family.  Just one year later Windows NT 3.5 released and was popular for its time.  Windows NT 3.51 came out and provided the first support for the new world of interoperability with Windows 95 from the DOS family which released just a few months after NT 3.51 itself did.  Then the stable, long term Windows NT 4 released in 1996 and dominated the Windows world for the next half decade.  Windows NT 4 leveraged both the cycle from the Windows NT family as well as the cycle from the DOS/Windows family to great effect.

In 2000 when Windows 2000 released it was a dramatic shift for the Windows NT family and was poorly received.  The changes, both to the desktop and to the coinciding Server product with the introduction of Active Directory, were massive and disruptive.  Windows 2000 was the quintessential preview release.  It took just one year before Windows XP replaced it on the desktop.  Windows XP, per its place in the cycle, turned out to be the quintessential long term release, making even Windows NT 4 look short lived.  Windows XP expanded very little on Windows 2000 Professional but it brought additional polish and no significant changes, making it exactly what businesses, and most home users, were looking for as their main operating system for a very long time.

When Microsoft was ready to disrupt the desktop again with new changes, like the additional security of UAC, they did so in Windows Vista.  Vista, like Windows 2000, was not well received and was possibly the most hated Windows release of all time.  But Vista did its job perfectly.  Shortly after the release of Windows Vista came the nominally different Windows 7, with some minor UAC changes and some improved polish, and it was very well received.  Vista paved the way so that Windows 7 could be loved and used for many years.

Now we stand on the verge of the Windows 8 release.  Like Vista, 2000, Office 2007 and Windows 95, Windows 8 represents a dramatic departure for the platform and already, before even being released, has generated massive amounts of bad press and animosity.  If we study the history of the platform, though, we would expect this of the Windows 8 release regardless of what changes were going to be announced.  Windows 8 is the “preview” release.  We know that a new operating system, perhaps called Windows 9, is at most two years away and will bring a slightly tweaked, more polished version of Windows 8 that end users will love, and the issues with Windows 8, like those of its predecessors, will soon be forgotten.  The cycle is well established and very successful.  There is very little chance that it will be changing anytime soon.