All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves.) Began transitioning to IT in 1994 with my first mixed role in system administration.

The Smallest IT Department

Working with small businesses means working with small IT shops.  It is very common to find the “one man” shows and I am often in discussions about how to handle environments so small.  There is no easy answer.  Unlike most company departments or job roles, IT is almost always an “around the clock” job that services the fundamental “plumbing” of the business – the infrastructure on which everything else depends.  Normal departments like finance, human resources, legal, management or marketing tend to knock off at the end of the day, leave an hour early on Fridays, go completely offline during the weekend, take normal vacations with little or no office contact, require little ongoing education or training once they are established and almost never have to worry about being expected to spend their nights or weekends doing their work to avoid interrupting others while they work, but this exactly how IT departments need to function.  IT staffs don’t reminisce about that “one time” that things were so bad at work that they had to work through the whole weekend or a full overnight and still work the next day or had to give up their family vacation because the company made no allowance for it operationally – that is simply day to day life for many people in IT.  What other departments often feel is completely unacceptable in IT is just normal practice.  But that doesn’t mean that it works well, IT departments are often driven into the ground and little consideration is given for their long term viability or success.

With rare exception, IT departments have needs that are different from normal departments – based primarily on what business demand from them: high reliability, continuous availability, deep business knowledge of all departments, ability to train others, knowledge of broad and disparate technologies, business skills, financial skills, procurement skills, travel, experience across technologies and industries, efficiency and experience on the latest technologies, trends, architectures, techniques and knowledge of the latest threats and products arriving daily – and to not only use all of that skill and experience to provide support roles but to also be a productive engineer, customer service representative and to present and defend recommendations to management that often pushes back or provides erratic or emotional support of infrastructural needs.  Quite literally, no single person can possibly fill those shoes and one that could would demand a salary higher than the revenue of most small businesses.

How do larger businesses handle this daunting task?  They do so with large IT departments filled with people who specialize in specific tasks, generalists who glue specialists together, dedicated support people who don’t need to do engineering, engineers who don’t get support interruptions, tiered support roles to filter tasks by difficulty, mentors to train newcomers, career pipelines, on call schedules or follow the sun support desks and internal education systems.  The number of challenges presented to a lone IT professional or very small IT department are nearly insurmountable forcing corners to be cut nearly everywhere, often dangerously.  There is no time or resources for tiny IT departments to handle the scope of the job thrown at them.  Even if the job role is whittled down to a very specific job role, SMB IT professionals are often faced with decision making for which they cannot be prepared.  For example, a simple server failure might be seen as just another “hardware upgrade” task because the overworked, under-scoped IT professional isn’t being given the necessary latitude to be able to flag management as to an arising opportunity for some strategic roadmap execution – maybe a complete departure from previous plans due to a late breaking technology change, or a chance to consolidate systems for cost savings or a tactical upgrade or change of platform might deliver unrealized features.

Having worked both in the trenches and in management I believe that there are two thresholds that need to be considered.  One is the minimum functional IT department size.  That is, the minimal size that an internal IT department can be to be able to complete basic job functions using internal staff.  To clarify, “internal staff” can be a rather ambiguous term.  Internal here I use to mean dedicated or effectively dedicated staff.  These people can be employees or contractors.  But at a minimum, with the exception of very rare companies that don’t operate during full business hours or other niche scenario, it takes at least three IT professionals on an IT team to functionally operate as an IT department.

With three people there is an opportunity for peer review, very critical in a technical field that is complex at the best of times and a swirling quagmire of unknown requirements, continuous change and insurmountable complexity at the worst of times.  Like any technical field, IT professionals need peers to talk to, to oversee their work, to check their ideas against and to keep them from entering the SMB IT Bubble.  Three is an important number.  Two people will have a natural tendency to become adversarial with one carrying the weight of recommendation to management and one living in their shadow – typically with the one with the greater soft skills or business skills gaining the ear of management while the one with the greater technical acumen losing their voice if management isn’t careful to intentionally include them.  As with maritime chronometers, it is critical that you have three because you can have a quorum.  Two simply have an argument.

IT is an “around the clock” endeavor.  During the day there are continuous needs from IT end users and the continuous potential for an outage or other disaster plus meetings, design sessions, planning and documentation.  In the evenings and on weekends there is all of the system maintenance that cannot, or at least should not, be done while the business is up and running.  This is often an extensive level of work, not an occasional bit of missing happy hour but regular workload eliminating dinner and family time.  Then comes the emergency calls and outages that happen any time day or night.  And there is the watching of email – even if nothing is wrong it is commonplace for IT to be involved in company business twelve to sixteen hours a day and weekends too, even in very small companies.  Even the most dedicated IT professional will face rapid burnout in an environment such as this without the ability to have a service rotation to facilitate necessary rest and work/life balance.

This comes before the considerations for the unforeseeable sick days, emergency leave or even just holidays or vacation.  If there are not enough people left behind to cover the business as usual tasks plus the unforeseeables, then vacations or even sick days become nearly, if not totally, impossible.  Skipping vacations for a year or two is theoretically possible but it is not healthy and doesn’t provide for a sustainable department.

Then there is training and education.  IT is a demanding field.  Running your own IT department suggests a desire to control the level of skill and availability granted to the company.  To maintain truly useful IT staff time and resources for continuous education is critical.  IT pros at any stage in their career need to have time to engage in discussions and forums, attend classes and training, participate in user groups, go to conferences and even just sit down and read books and web sites on the latest products, techniques and technologies.  If an IT professional is not given the chance to not just maintain, but grow their skills they will stagnate and gradually become useless technically and likely to fall into depression.  A one or two man shop, with even the smallest of organizations, cannot support the necessary free time for serious educational opportunities.

Lastly, and far more critical than it seems at first, is the need to handle request queues.  If issues arise within a business at a continuous, average rate of just enough per day to require eight hours per day to service them it may seem like only one person would be necessary to handle the queue that this work load would generate.  In an ideal world, perhaps that is true.  In the real world, requests come in at varying degrees of priority and often at very inopportune moments so that even a business that has taken on the expense of having dedicated, internal IT cannot have the “instant response time” that they often hope for because their IT professional is busy on an existing task.  The idea of instant response is based on the assumption that the IT resource is sitting idle and watching the ticket queue or waiting by the phone at all times.  That is not realistic.

In large enterprises, to handle the response time concerns of critical environments, surplus IT resources are maintained so that only in the direst of emergencies would all of them be called upon at one time to deal with high criticality issues at the same time.  There is always someone left behind to deal with another pressing issue should one arise.  This not only allows for low latency response to any important customer need but also provides spare time for projects, learning and the necessary mental downtime needed for abstract processing of troubleshooting without which IT professionals in a support role will lose efficiency even if other work does not force them to multitask.

In small shops there is little to be done.  There is a lack of scale to allow for the excess IT resource capacity to be sitting n the wings just waiting for issues to arise.  Having three people is, in my opinion, an absolute minimum to allow for the handling of most cases of this nature if the business is small enough.  By having three people there is, we hope, some chance of avoiding continuous re-prioritization of requests, inefficient multi-tasking and context switching.

In larger organizations there is also a separation of duties between administration or support job roles and engineering job roles.  One job is event driven, sitting “idle” waiting for a customer request and then reacting as quickly as possible. The other focused on projects and working towards overall efficiency.  Two very different aspects of IT that are nearly impossible for a single person to tackle simultaneously.  With a three person shop these roles can exist in many cases even if the roles are temporarily assigned as needed and not permanent aspects of title or function.

With only three people an IT department still lacks the size and scale necessary to provide a healthy, professional growth and training environment internally.  There are not enough rungs on the ladder for IT employees to move up and only turnover, unlikely to happen in the top slot, allows for any upward mobility forcing good candidates to leave rapidly for the sake of their careers leaving good shops with continuous turnover and training and lesser shops with dramatically inferior staff.  There is no simple solution for small organizations.  IT is a broad field with a great many steps on the ladder from helpdesk to CIO.  Top IT organizations have thousands or, in the most extreme cases, hundreds of thousands of IT professionals in a single organization.  These environments naturally have a great degree or both upward and lateral mobility, peer interaction and review, vendor resources, mentoring, lead oversight, career guidance and development and opportunities to explore new ideas and paths often that don’t exist in SMBs of any size.

To maintain a truly healthy IT department takes a much larger pool of resources.  Likely one hundred, or more, IT professionals would be required to provide adequate internal peerage, growth and opportunity to begin to provide for career needs, rather than “job needs.”  Realistically, the SMB market cannot bear this at an individual business scale and must accept that the nature of SMB IT is to have high turnover of the best resources and to work with other businesses, typically ones that are not directly competitive, to share or exchange resources.  In the enterprise space, even in the largest businesses, this is often very common – friendly exchanges of IT staff to allow for career advancement often with no penalties for returning later in their career for different positions at the original company.

Given this bleak picture of SMB IT staff scaling needs, what is the answer?  The reality is is that there is no easy one.  SMB IT sits at a serious disadvantage to its enterprise counterparts and at some scale, especially falling below three dedicated IT staff members, the scale becomes too low to allow for a sustainable work environment in all but the most extreme cases.

In smaller organizations, one answer is turning to consulting, outsourcing and/or managed service providers who are willing and able to work either in the role of internal staff or as a hybrid with existing internal staff to provide for an effectively larger IT organization shared between many businesses.   Another is simply investing more heavily in IT resources or using other departments as part time IT to handle helpdesk or other high demand roles, but this tends to be very ineffective as IT duties tend to overwhelm any other job role.  A more theoretical approach is to form a partnership with another one or two businesses to share in house IT in a closed environment.  This last approach is very difficult and problematic and generally works only when technology is heavily shared as is geographic location between the businesses in question.

More important than providing a simple answer is the realization that IT professionals need a team on which to work in order to thrive and will perform far better on a healthy team than they will alone.  How this is accomplished depends on the unique needs of any given business.  But the efficacy and viability of the one or two “man” IT shop, for even the smallest businesses, is questionable.  Some businesses are lucky enough to find themselves in a situation where this can work for a few years but often live day to day at a high degree of risk and almost always face high turnover with their entire IT department, a key underpinning of the workings of their entire business, leaving at once with the benefits of staggered turnover that a three person and larger shop at least have an opportunity to provide.  With a single person shop there is no handover of knowledge from predecessors, no training and often no opportunity to seek an adequate replacement before the original IT professional is gone leaving at best an abrupt handover and at worst a long period of time with no IT support at all and no in house skills necessary to interview and locate a successor.

Comparing SAN and NAS

One of the greatest confusions that I have seen in recent years is that between NAS and SAN.  Understanding what each is will go a long way towards understanding where they are useful and appropriate.

Our first task is to strip away the marketing terms and move on to technical ones.  NAS stands for Network Attached Storage but doesn’t mean exactly that and SAN stands for Storage Area Network but is generally used to refer to a SAN device, not the network itself.  In its most proper form, a SAN is any network dedicated to storage traffic, but in the real world, that’s not how it is normally used.  In this case we are hear to talk about NAS and SAN devices and how they compare so we will not use the definition that includes the network rather than the device.  In reality, both NAS and SAN are marketing terms and are a bit soft around the edges because of it.  They are precise enough to use in a normal technical conversation, as along as all parties know what they mean, but when discussing their meaning we should strip away the cool-sounding names and stick to the most technical descriptions.  Both terms, when used via marketing, are used to imply that they are a certain technology that has been “appliancized” which makes the use of the terms unnecessarily complicated but no more useful.

So our first task is to define what these two names mean in a device context.  Both devices are storage servers, plain and simple, just two different ways of exposing that storage to the outside world.

The simpler of the two is the SAN which is properly a block storage device.  Any device that exposes its storage externally as a block device falls into this category and can be used interchangeably based on how it is used.  The block storage devices are external hard drives, DAS (Direct Attach Storage) and SAN.  All of these are actually the same thing.  We call it an external hard drive when we attach it to a desktop.  We call it a DAS when we attach it to a server.  We call it a SAN when we add some form of networking, generally a switch, between the device and the final device that is consuming the storage.  There is no technological difference between these devices.  A traditional SAN can be directly attached to a desktop and used like an external hard drive.  An external hard drive can be hooked to a switch and used by multiple devices on a network.  The interface between the storage device and the system using it is the block.  Common protocols for block storage include iSCSI, Fibre Channel, SAS, eSATA, USB, Thunderbolt, IEEE1394 (aka Firewire), Fibre Channel over Ethernet (FCoE) and ATA over Ethernet (AoE.)  A device attaching to a block storage device will always see the storage presented as a disk drive, nothing more.

A NAS, also known as a “filer”, is a file storage device.  This means that it exposes its storage as a network filesystem.  So any device attaching to this storage does not see a disk drive but instead sees a mountable filesystem.  When a NAS is not packaged as an appliance, we simply call it a file server and nearly all computing devices from desktops to servers have some degree of this functionality included in them.  Common protocols for file storage devices include NFS, SMB / CIFS and AFP.  There are many others, however, and technically there are special case file storage protocols such as FTP and HTTP that should qualify as well.  As an extreme example, a traditional web server is a very specialized form of file storage device.

What separates block storage and file storage devices is the type of interface that they present to the outside world, or to think of it another way, where the division between server device and client device happens within the storage stack.

It has become extremely common today for storage devices to include both block storage and file storage from the same device.  Systems that do this are called unified storage.  With unified storage, whether you can say that it is behaving as block storage or file storage device (SAN or NAS in the common parlance) or both is based upon the behavior that you configure for the device not based on what you purchase.  This is important as it drives home the point that this is purely a protocol or interface distinction, not one of size, capability, reliability, performance, features, etc.

Both types of devices have the option, but not the requirement, of providing extended features beneath the “demarcation point” at which they hand off the storage to the outside.  Both may, or may not, provide RAID, logical volume management, monitoring, etc.  File storage (NAS) may also provide file system features such as Windows NTFS ACLs.

The key advantage to block storage is that the systems that attach to it are given an opportunity to manipulate the storage system as if it were a traditional disk drive.  This means that RAID and logical volume management, which may already have been doing in the “black box” of the storage device can now be done again, if desired, at a higher level.  The client devices are not aware what kind of device they are seeing, only that it appears as a disk drive.  So you can choose to trust it (assume that it has RAID of an adequate level, for example) or you can combine multiple block storage devices together into RAID just as if they were regular, local disks.  This is extremely uncommon but is an interesting option and there are products that are designed to be used in this way.

More commonly, logical volume management such as Linux LVM, Solaris ZFS or Windows Dynamic Disks is applied on top of the exposed block storage from the device and then, on top of that, a filesystem would be employed.  This is important to remember, with block storage devices the filesystem is created and managed by the client device, not by the storage device.  The storage device is blissfully unaware of how the block storage that it is presenting is used and allows the end user to use it however they see fit with total control.  This extends even to the point that you can chain block storage devices together with one providing the storage to the next being combines, perhaps, into RAID groups – block storage devices can be layered, more or less, indefinitely.

Alternatively, a file storage device contains all of the block portion of the storage so any opportunity for RAID, logical volume management and monitoring must be handled by the file storage device.  Then, on top of the block storage, a filesystem is applied.  Commonly this would be Linux’ EXT4, FreeBSD and Solaris’ ZFS, Windows NTFS but other filesystems such as WAFL, XFS, JFS, BtrFS, UFS and more are certainly possible.   On this filesystem, data will be stored.  To them share this data with the outside world a network file system (also known as a distributed file system) is used which provides a file system interface that is network enabled – NFS, SMB and AFP being the most common but, like in any protocol family, there are numerous special case and exotic possibilities.

A remote device wanting to use storage on the file storage device would see it over the network the same as it would see a local filesystem and is able to mount it in an identical manner.  This makes file storage especially easy and obvious for end consumer to use as it is very natural in every aspect.  We use network file systems every day for normal desktop computing.  When we “map a drive” in Windows, for example, we are using a network file system.

One critical differentiation between block storage and file storage that must be differentiated between is that, while both potentially can sit on a network and allow multiple client machines to attach to them, only file storage devices have the ability arbitrate that access.  This is very important and cannot be glossed over.

Block storage appears as a disk drive.  If you simply attach a disk drive to two or more computers at once, you can imagine what will happen – each will know nothing of the other and will be unaware of new files being created, others changing and they systems will rapidly begin to overwrite each other.  If your file system is read only on all nodes, this is not a problem.  But if any system is writing or changing the data, the others will have problems.  This generally results in data corruption very quickly, typically on the order of minutes.  To see this in extreme action, imagine having two or three client system all believe that they have exclusive access to a disk drive and have them all defragment it at the same time.  All data on the drive will be scrambled in seconds.

A file storage device, on the other hand, has natural arbitration as the network file system handles the communications for access to the real file system and filesystems, by their nature, are multi-user naturally.  So if one system attached to a file storage device makes a change, all systems are immediately aware of the change and will not “step on each others toes.”  Even if they attempt to do the the file storage device’s filesystem arbitrates access and has the final say and does not let this happen.  This makes sharing data easy and transparent to end users.  (I use the term “end users” here to include system administrators.)

This does not mean that there is no means of sharing storage from a block device, but the arbitration of it cannot be handled by the block storage device itself.  Block storage devices are be made “shareable” by using what is known as a clustered file system.  These types of file systems originated back when server clusters shared storage resources by having two servers attached with a SCSI controller on either end of a single SCSI cable and having the shared drives attached in the middle of the cable.  The only means by which the servers could communicate was through the file system itself and so special clustered file systems were developed that allowed there to be communications between the devices, alerting each to changes made by the other, through the file system itself.  This actually works surprisingly well but clustered file systems are relatively uncommon with Red Hat’s GFS and Oracle’s OCFS being some of the best well known in the traditional server world and VMWare’s much newer VMFS having become extremely well known through its use for virtualization storage.  Normal users, including system administrators, may not have access to clustered file systems or may have needs that do not allow their use.  Of important note is also that the arbitration is handled through trust, not through enforcement, like with a file storage device.  With a file storage device, the device itself handles the access arbitration and there is no way around it.  With block storage devices using a clustered file system, any device that attaches to the storage can ignore the clustered file system and simply bypass the passive arbitration – this is so simple that it would normally happen accidentally.  It can happen when mounting the filesystem and specifying the wrong file system type or through a drive misbehaving or any malicious action.  So access security is critical at the network level to protect block level storage.

The underlying concept being exposed here is that block storage devices are dumb devices (think glorified disk drive) and file storage devices are smart devices (think traditional server.)  File storage devices must contain a full working “computer” with CPU, memory, storage, filesystem and networking.  Block storage devices may contain these things but need not.  At their simplest, block storage devices can be nothing more than a disk drive with a USB or Ethernet adapter attached to them.  It is actually not uncommon for them to be nothing more than a RAID controller plus Ethernet or Fiber Channel adapters to be attached.

In both cases, block storage device and file storage devices, we can scale down to trivially simple devices or can scale up to massive “mainframe class” ultra-high-availability systems.  Both can be either fast or slow.  One is not better or worse, one is not higher or lower, one is not more or less enterprise – they are different and serve generally different purposes.  And there are advanced features that either may or may not contain.  The challenge comes in knowing which is right for which job.

I like to think of block storage protocols as being a “standard out” stream, much like on a command line.  So the base level of any storage “pipeline” is always a block device and numerous block devices or transformations can exist with each being piped one to another as long as the output remains a block storage protocol.  We only terminate the chain when we apply a file system.   In this way hardware RAID, network RAID, logical volume management, etc. can be applied in multiple combinations as needed.  Block storage is truly not just blocks of data but building blocks of storage systems.

One point that is very interesting is that since block storage devices can be chained and since network storage devices must accept block storage as their “input” it is actually quite common for a block storage device (SAN) to be used as the backing storage for a file storage device (NAS), especially in high end systems.  They can coexist within a single chassis or they can work cooperatively on the network.

The Ripple Effect of Windows 8

Windows 8, with its new, dramatic Metro interface, is a huge gamble for Microsoft.  A huge gamble not only because they risk slowing update cycles and attrition of their desktop installation base but also because the Windows desktop is an underpinning of the Microsoft ecosystem – one that can easily unravel if Microsoft fails to maintain a strong foundation.

As a technologist I have been watching Windows 8 for some time having been using it, in some capacity, since the earliest public betas.  I’ve long struggled to come to terms with how Microsoft envisioned Windows 8 fitting into their existing customer base but have been, more or less, hopeful that the final release would fix many of my concerns.  When Windows 8 did finally release I was, sadly, left still wondering why it was so different from past Windows interfaces, what the ultimate intention was and how users were going to react to it.

It didn’t take long before I got a very thorough introduction to user reaction.  As a technology consultancy we tend to move quickly on new technologies and trends.  We may not deploy beta products into production but when new products release our update cycles are generally almost instantaneous.  We need to run the latest and the greatest all of the time so that we are ready for problems before anyone else allowing us to stay ahead of our customers.  So Windows 8 started getting prepped for rollout pretty much on the day that it was released to manufacturing.  This is when management got their first chance to try it out before the actual deployments started – the IT department had been playing with it since early betas.

Management came back to IT to ask critical questions concerning efficiency, usability and training.  Their reaction was that Windows 8’s interface was confusing and highly inefficient requiring a disruptive “jolt” of leaping to and from full screen menus that caused mental context shifting and loss of focus.  Many tasks require power user levels of knowledge to be usable while the interface seemed to be designed around low end “consumer” use and not very appropriate for people with the level of knowledge necessary to make the system functional.

It wasn’t that Windows 8 was unusable but failed at delivering the value traditionally associated with Windows, the value that causes us to traditionally move from version to version more or less without thinking and that is that sticking with Windows on the desktop delivers a predictable user experience requiring little to no retraining and an overall efficient experience.  Windows 8 requires extensive retraining, makes workers less efficient even after adapting to it and expects traditionally casual users to need to be power users to be effective.  While sticking with Windows is the obvious choice for IT departments with deep investments in Windows knowledge and skills (and tools), the value proposition for end users does not have the same continuity that it has in the past.

We read many reviews and consistently the answer to whether Windows 8 would deliver value to other organizations seemed to be focused on it being “good enough” and that with extensive training and all end users learning to “deal with” the interface issues and to learn totally new skills like jumping back and forth between mouse and keyboard, memorizing shortcut keys, etc. that the system could be made to be functional.  But never good, never ideal.  All concerns around Windows 8 aren’t about showing why it is better, just making it acceptable.  Hardly a position that we want to be in as an IT department.  We want to deliver solutions and value.  We want to make our businesses more efficient, not less.  We want to avoid disruption, not create it.

We even went so far as to visit Microsoft at a trade show putting on a display of Windows 8.  Even Microsoft’s own staff were unable to clarify the value proposition of Windows 8 or, even in their demonstration environment, get it to work “as intended”.  It is clear that even Microsoft is not confident in the product or sure how its customers are expected to react to it.

The decision was made quickly: management wanted a demonstration of a Linux desktop immediately.  The first test was Linux Mint, which ended up being the final choice as well.  The non-IT users were genuinely impressed with how easy Linux Mint was to use for people with a Windows background and nothing else.  It required no training: users literally sat down and started working, unlike on Windows 8, where users were confused and needed help even with the simplest tasks such as opening an application or shutting down the computer.  And there was essentially no pushback; people were universally excited about the opportunities that the new platform could provide, whereas they had been actively concerned about how painful working with Windows 8 would be, both up front and down the road.

That Windows 8 blundered so dramatically as to cause a competing product to be auditioned was not that surprising to me.  These things happen.  That the reaction of the non-IT staff was so dramatically in favor of a Linux distro was quite surprising, however.  Staff with no Linux exposure didn’t just see Linux as a low cost alternative or the lesser of two evils but were downright excited to use it.  Windows 8 caused Microsoft’s worst fears to come true: using Windows is no longer something that users choose because it is familiar and comfortable.  If they feel the need or desire to test alternatives, Windows will no longer compete on a “devil we know” basis as it traditionally has, but will need to compete on usability, as Linux Mint, in this case, actually felt far more familiar and comfortable than Windows 8.

What did truly surprise me, however, was the ripple effect that changing the operating system had on the computing infrastructure.  Because Windows was being replaced, a series of questions arose around other technology choices.  The first, perhaps obviously, was what to do about Windows-based applications that had no Linux versions.

We were lucky that the shop ran very standard applications, most of them modern, browser-based ones, so the bulk of our systems worked on Linux transparently.  The only major application to require an alternative was Microsoft Office.  Fortunately the fix was easy: LibreOffice had everything that we needed and is built into the operating system.  Moving from MS Office to LibreOffice can be simple or intimidating depending on outside dependencies, the complexity of use scenarios, heavy use of macros and so on.  In our case, the move was trivial across the board.

Dropping Microsoft Office left us without an effective email client for our Exchange email system.  So again, management asked: what compelling value is there for us in Exchange?  Shoulder shrugs followed.  Almost immediately a migration effort from a hosted Exchange service to Rackspace Email began.  This resulted in one of the largest cost savings, overall, in this entire process.

Next to be questioned was SharePoint.  Without desktop Active Directory integration, Microsoft Office integration and Exchange integration, was the overhead of running a heavy SharePoint installation of appreciable value to our organization?  SharePoint put up the biggest fight, as it truly is a nearly irreplaceable system with numerous aspects and features that cannot be trivially compared to other systems.  In the end, however, without the slew of integrated Microsoft components, SharePoint was deemed too costly and complex to warrant using on its own in our environment.

One by one, Microsoft products whose value was established through their tight integration with each other began to be eliminated in favor of lower cost, more flexible alternatives.  As each was removed, the value that they had cumulatively created diminished, making each remaining product less and less valuable without the others.

Before the move to a Linux desktop we had been preparing to install Lync as a replacement both for our instant messaging platform and for our telephony platform.  Needless to say, that project was cancelled and our current systems, which integrate very well with Linux and cost far less, were kept.

As we got to the end of eliminating Microsoft-based applications, it became apparent that using Active Directory for centralized authentication was no longer cost effective.  This last piece will take quite some time to phase out completely, as creating a new centralized authentication mechanism requires considerable planning and implementation time, but the process of moving to a completely different platform has begun.
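The article does not name the replacement platform, but one common approach for centralizing authentication for Linux desktops is a directory server (such as OpenLDAP) with SSSD on each client.  A minimal sketch of an `/etc/sssd/sssd.conf`, assuming a hypothetical directory server at `ldap.example.com` with a base DN of `dc=example,dc=com`:

```ini
[sssd]
config_file_version = 2
services = nss, pam
domains = example

[domain/example]
; Both identity lookups and authentication come from the LDAP directory
id_provider = ldap
auth_provider = ldap
ldap_uri = ldaps://ldap.example.com
ldap_search_base = dc=example,dc=com
; Cache credentials so users can still log in if the directory is unreachable
cache_credentials = true
```

The file must be owned by root with 0600 permissions, and `/etc/nsswitch.conf` must list `sss` as a source for `passwd` and `group` for the clients to resolve directory users.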

Even applications that we thought were sacred and untouchable, for which plans were in place to keep them running on dedicated Windows instances for special purposes such as accounting, ended up being less sacred than we had anticipated.  New applications were found and systems were migrated.

Of course, support infrastructure followed as well: System Center and Windows-focused backup systems were no longer needed, and Windows-based file servers stopped making sense without Windows clients to support.

At the end of the day, what was so shocking was that the littlest thing, a concern over the efficiency and usability of Windows 8’s new interface, triggered a series of discoveries that completely unraveled our Microsoft-centered ecosystem.  No single product was unloved or disliked.  We were a team of dedicated Windows 7 desktop users on a wholly Microsoft infrastructure, happy with that decision and happy to be moving more and more to the Microsoft “way”.  But simply questioning the assumption that we wanted or needed to be using a Windows desktop ended up bringing down an infrastructural house of cards.

From an end user perspective, the move to Linux was effortless.  There has been quite a bit of retraining and rethinking on the support side, of course.  There is a lot to learn there, but that is IT’s job: support the business and do what needs to be done to enable people to work most efficiently.

Does this portend a dark future for Windows?  Unlikely, but it does highlight that a significant misstep on the desktop platform could easily put Microsoft’s market position on a downward spiral.  Microsoft depends on tight integration between their systems to create their value proposition.  Losing the desktop component of that integration can quickly undermine the remaining pieces.  To be sure, ours is a special case: a small firm with extensive UNIX skills already in house, an ambitious and forward thinking management team, the agility to make broad changes and more than a decade of seeking platform independence in application choices.  But just because we lie on the extreme edge does not mean that our story is not an important one.  For some, Windows 8 might represent not only the tipping point in the Windows desktop value proposition but the tipping point in the Microsoft ecosystem itself.

Keeping IT in Context

Information Technology doesn’t exist in a bubble; it exists to serve a business or organization (for profit, non-profit, government, etc.)  The entity that we, as IT professionals, serve provides the context for IT.  Without this context, IT changes: it becomes just “technology.”

One of the biggest mistakes that I see when dealing with companies of all sizes is the proclivity of IT professionals to forget the context in which they are working and to start behaving in one of two ways.  The first is forgetting context completely, leaving IT for “hobbyist land” and looking at the technologies and equipment that we use purely as toys for the enjoyment and fulfillment of the IT department itself, without consideration for the business.  The second is treating the business as generic instead of respecting that every business has unique needs and that IT must adapt to the environment it is in.

The first problem, the hobbyist problem, is the natural extension of the route through which most IT professionals arrive in IT: they love working on computers and would do so on their own, at home, whether they were paid to or not.  This often brings a lifetime of “tech for the sake of tech” feeling to an IT Pro and is nearly universal in the field.  Few other professionals are so universally drawn to what they do that they would do it paid or not.  But this shared experience creates a culture that often forgets that the IT department exists within the context of a specific corporate entity or business unit and that its mandate exists only within that context.

The second problem stems, most likely, from broad IT and business training that focuses heavily on rules of thumb and best practices, which generally assume “common scenarios,” as these are easy to teach by rote and leave out the difficult pieces of problem analysis and system design.  Custom tailoring not only solutions but IT thinking to the context of a specific business with specific needs is difficult; it requires learning a lot about the business itself and a lot of thought about how IT fits into that business specifically.

The fault does not lie with IT alone.  Businesses often treat their IT departments as nothing but hobbyists, focus far too heavily on technical rather than business skills and keep IT at arm’s length, forgetting that IT has some of the most important business insight because it tends to cross all company boundaries.  IT needs deep access to business processes, workflows, planning and goals to be able to provide good advice to the business, but is often treated as if this information is not needed.  Businesses, especially smaller ones, tend to think of IT as a magic box with a set budget: money goes in and network plumbing comes out.  Print and radio ads promote this thinking.  Treating IT as a product is poor business thinking.

In the business’ defense, IT operates in a way that few businesses are really prepared to handle.  IT is a cost center in the sense that there is a base cost needed to keep any company functioning.  But beyond this, IT can be an opportunity center in most businesses, though this requires IT and the business to work together to create those opportunities and, even more so, to leverage them.

IT is often put in the inappropriate position of being forced to justify its own existence.  This is nonsensical: human resources, accounting, legal, management, janitorial, sales, marketing and production departments are never asked to demonstrate their financial viability.  Needing to do so puts an unfair strain on the IT department, requiring non-business people to present business cases, wasting resources and hampering thinking in a vain attempt at producing pointless metrics.  This is a flaw in business thinking, often caused by a rift between management and the people they have hired to support them.  The relationship is often cold, cursory or even adversarial when it should be close and involved.  IT should be sitting at the decision table; it brings insight and it needs insight.

One of the biggest challenges that IT faces is that it is often in the position of needing to convince the business to do what is in the business’ own best interest.  This is, for the most part, a flaw in business thinking.  The business should not stand in a position of doing the wrong thing, willing to do the right thing only if it can be “sold” to them.  It should be a process of good decision making, not a default of bad decision making unless convinced otherwise.  Other departments are not presented with a similar challenge: what other department regularly has to mount a campaign to request necessary resources?

Because of this constant fight for management attention and resources, IT needs to develop internal business skills in order to cope.  This is a reality of most IT departments today.  It is critical not only to keep the business that they support in context and to make IT decisions based on that context, but also to act as marketing and sales people, taking those decisions and delivering them to the business much as outside vendors and salespeople would.  Outside vendors send skilled sales people and negotiators to the business in an attempt to do an end run around IT; IT needs the same skills (with the advantage of insider knowledge and of genuinely having the business’ best interest at heart) in order to demonstrate why its solutions, opportunities and needs deserve consideration.

Having good interpersonal, writing and presentation skills is not enough, of course.  Knowing business context and leveraging it effectively includes understanding factors such as risk, opportunity, loss and profit, and being able to apply them to the relationship between the business’ IT investments and the bottom line.  IT Pros are often frustrated when the business is unwilling to invest in a solution that they present, forgetting that the business is considering (we hope) the total cost of ownership and the impact on the company’s bottom line.  When asked how a solution will save money or generate revenue, even indirectly, the answers are often, at best, vague and lacking metrics.  Before going to the business with solutions, IT departments need to vet recommendations internally and ask tough questions such as:

How does this solution save money today?  Or how does it make us more money?
How much money is it expected to save or earn?
What business problem are we attempting to solve? (What itch are we trying to scratch?)
What risks do we take on or reduce?

Or similar lines of thinking.  Instead of bringing technology to the business, bring solutions.  Identify problems or opportunities and present a case.  Role play: imagine yourself as a business owner uninterested in the solution.  Would you feel that the requested investment is a good one?  Too often we in IT like a solution because it is advanced, complex, “the right way to do it,” because another company is doing it or because it is the hot trend in IT.  We often have very good reasons for wanting to bring these techniques or technologies into our workplace but forget that they may not apply, or apply well, to the business as it is, its financial capabilities or its roadmap.

When I speak to IT professionals looking for advice on a system design or approach, my first question is almost universally: “What business need are you attempting to solve?”  Often this question is met with silence.  The business had not been considered in the selection of the solution being presented.  Regularly bringing the business requests or solutions that do not take the IT department’s context within the business into consideration will rapidly train business decision makers to distrust the advice coming from IT.  Not that they will feel the advice is intentionally skewed, but they will suspect, often rightfully, that decisions are being made on a technical basis alone, isolated from the concerns of the business.  Once this distrust is in place, it is difficult to return to a healthier relationship.

Making the IT department continuously act within the context of the business it serves, encouraging IT to pursue business skills and to approach the business for information and insight, and making the business see IT as a partner and supporter with whom information must be shared can be a tall order.  The business is not likely to take the first step in improving the interaction.  It is often up to IT to demonstrate that it is considering the needs of the business, often more so than the business itself, and weighing the potential financial impact or benefit of its decisions and recommendations.  There is much to be gained from this process, but it is not an easy one.

It is important to remember that keeping business context is crucial, to some degree, for all members of the IT team, especially those making recommendations; but the ability to judge business need, understand high level workflows, understand financial ramifications and seek opportunity rests on a combination of IT management (CIO, Director of IT, etc.) and the IT department as a whole.  Non-managerial technical staff need not panic that a lack of holistic business vision and acumen will keep them from adequately performing their role within the business context, but it does limit their ability to provide meaningful guidance to the business outside of narrow scopes.  Even common job roles, such as deskside support, need some understanding of the fiscal responsibilities of the IT department, such as recognizing when the cost of fixing a low cost component may far exceed the cost of replacing it with one that is new and, potentially, better.