All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves.) Began transitioning to IT in 1994 with my first mixed role in system administration.

When to Consider a Private Cloud?

The idea of running a private cloud, hosted or on premises, for a single company is rapidly becoming a commonplace one.  More and more businesses are learning of cloud computing and seeing that running their own cloud platform is both feasible and potentially valuable to the business.  But due to a general lack of cloud knowledge it is becoming more and more common that clouds are recommended when they do not suit the needs of the business at all, instead being mistaken for traditional virtualization management systems.

A cloud is a special type of virtualization platform and fills a unique niche.  Cloud computing takes traditional virtualization and layers it with automated scaling and provisioning that allows for rapid, horizontal scaling of applications.  This is not a normal business need.  Cloud also lends itself to, and is often tied to, self-service resource provisioning, but this alone neither makes something a cloud nor justifies the move to a cloud platform; it can, however, be an added incentive.  What makes cloud interesting is the ability to provide self-service portals to end users and the ability for applications to provision new instances of themselves.  These are the critical aspects that set a cloud platform apart from traditional virtualization.

A cloud does not imply features such as simplified whole-domain system management from a single pane of glass, large scale consolidation, easy migration between hardware systems, rapid provisioning of new systems, virtualization, high availability, resource over-commitment, etc.  These features are all available in other ways, primarily through or on top of standard platform virtualization (VMware vSphere, Microsoft’s Hyper-V, Xen, et al.)  It is not that these features cannot be made available in a private cloud, but they are not aspects of the cloud itself; they belong to the underlying virtualization platform.  The cloud layer sits above these and simply passes through the benefits of the underlying layers.

Often cloud is approached because of a mistaken belief that many of the features commonly associated with private clouds are not available in some other, simpler form.  This is rarely the case.  Normal virtualization platforms, most commonly VMware’s vSphere and Microsoft’s Hyper-V, offer all of these options.  They can be used to make robust clusters of physical servers, managed from a single interface, with incredibly high reliability and rapid provisioning of new systems, while requiring minimal specialty knowledge from the IT department and maintaining traditional business workflows.  Most times, when I am speaking with businesses that believe they may be interested in pursuing ownership of their own cloud, the features that they really want are not cloud features at all.

The term “cloud” has simply become so popular recently that people begin to assume that features important to nearly everyone must be attributed to it in order to explain the sudden surge in importance, but this is simply not the case.  Cloud remains, and will remain, a predominantly niche solution appropriate for only a very small number of companies to own themselves.  The use of public clouds or of hosted services delivered from cloud platforms will become, and indeed has already become, nearly ubiquitous.  But ownership of a private cloud for the use of a single company is a long way from being a business need for most businesses or business units and in many cases, I suspect, never will become one.

Private clouds shine in two key areas.  The first is a business that needs a large number of temporary or ad hoc systems “spun up” on a regular basis.  This often occurs with large development teams and application testing groups, especially if these groups target multiple operating systems.  The ability to rapidly provision temporary testing or lab systems can be very advantageous, and cloud computing’s knack for exposing provisioning tools that allow business customers to create, manage and destroy their own system instances, with, we would expect, built-in chargeback mechanisms, can be very beneficial to corporate efficiency, as the interaction between the IT department and the end users becomes nearly frictionless for this transaction.  Responsibility for maintaining the cloud as a whole can easily be segregated from the responsibility for maintaining individual systems.  Though seldom used in this manner for production workloads, this model allows the self-service approach that many business units desperately seek today.  It is impractical on a small scale due to the overhead of creating and maintaining the cloud platform itself, but on a large scale it can be hugely productive.  In addition to the technical advantages, this aspect of cloud computing can serve as a model for thinking of IT as an internal service provider and of departments as customers.  We have long discussed IT and other business units in these terms but we rarely truly think of them in this way.
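To make the chargeback idea a bit more concrete, here is a minimal sketch of the kind of bookkeeping a self-service portal might do behind the scenes.  The class, field names and hourly rate are illustrative assumptions, not any particular platform’s interface.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical flat rate; a real portal would price by instance size.
HOURLY_RATE = 0.12  # dollars per instance-hour


@dataclass
class LabInstanceRequest:
    """A self-service request for a temporary test or lab system."""
    requester: str                        # end user who spun the system up
    cost_center: str                      # chargeback target, e.g. "QA-4410"
    os_image: str                         # e.g. "windows-server-2012"
    created: datetime
    destroyed: Optional[datetime] = None  # set when the user tears it down

    def chargeback(self) -> float:
        """Dollars billed back to the requesting department."""
        end = self.destroyed or datetime.now()
        hours = (end - self.created).total_seconds() / 3600
        return round(hours * HOURLY_RATE, 2)


# Example: a tester spun up a lab box two days ago and it is still running.
request = LabInstanceRequest(
    requester="jsmith",
    cost_center="QA-4410",
    os_image="windows-server-2012",
    created=datetime.now() - timedelta(days=2),
)
print(request.chargeback())  # roughly 48 hours * $0.12, about $5.76
```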

The second area where cloud computing really comes into its own, and the one for which the concept was originally developed, is auto-provisioning for horizontally scaling applications, that is, application workloads that are able to increase their capacity by spawning new instances of themselves.  On a small scale, many web applications, due to their stateless nature, already do this within a single system by spawning new worker threads to handle additional connections.  An Apache web server might start with eight listeners ready to service requests, but as those threads become exhausted it automatically starts new threads to handle additional incoming connections so that it is able to scale within the confines of a single server.  Applied to cloud computing, that same application, sensing that thread exhaustion is approaching on a system-wide level (or reacting to other metrics such as a lack of free memory or a loss of performance), would use an API exposed by the cloud computing platform to signal the cloud management system to provision a new copy of the system that was calling it – essentially cloning itself on the fly.  In a matter of seconds, a new virtual server, identical to the first, would be up and running and joining its parent in servicing incoming requests.  This child or clone system would likewise spawn new threads internally as needed and, if it too sensed exhaustion, would call the cloud platform to create yet another new system to handle even more threads.  In this way the application can grow itself almost infinitely (within the hardware limits of the entire cloud platform) as needed, on the fly, automatically.  Then, as workloads die down and individual systems become idle, each system can, one at a time, signal to the cloud management system that it is no longer needed; it is powered off and destroyed, as it was simply a stateless clone, freeing capacity for other applications and workloads that may need to take advantage of it.
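To make the mechanics above concrete, here is a minimal sketch of such a self-provisioning loop, assuming a hypothetical HTTP management API on the cloud platform; the endpoint, payload and thresholds are illustrative assumptions, not any specific vendor’s interface.

```python
import time

import requests  # any HTTP client would do; assumed available here

# Hypothetical cloud management endpoint and this node's identity.
CLOUD_API = "https://cloud.example.internal/api/v1"
INSTANCE_ID = "web-node-42"

BUSY_THRESHOLD = 0.85   # request a clone above 85% worker utilization
IDLE_THRESHOLD = 0.10   # retire this clone below 10% utilization
CHECK_INTERVAL = 30     # seconds between checks


def worker_utilization() -> float:
    """Fraction of worker threads currently busy.

    Stubbed with a placeholder; a real node would read its web server's
    status interface (Apache's scoreboard, for example) here.
    """
    return 0.5


def scale_check() -> None:
    load = worker_utilization()
    if load > BUSY_THRESHOLD:
        # Ask the platform to provision an identical clone of this instance.
        requests.post(f"{CLOUD_API}/instances", json={"clone_of": INSTANCE_ID})
    elif load < IDLE_THRESHOLD:
        # Signal that this stateless clone can be powered off and destroyed.
        requests.delete(f"{CLOUD_API}/instances/{INSTANCE_ID}")


if __name__ == "__main__":
    while True:
        scale_check()
        time.sleep(CHECK_INTERVAL)
```

In practice the platform, not the instance, would enforce limits such as a maximum instance count, but the shape of the interaction is the same: the application senses exhaustion and asks the cloud layer for another copy of itself.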

As we can see, cloud computing is massively powerful, especially with the bulk of today’s public and enterprise applications being written in a stateless manner in order to take advantage of web protocols and end user interfaces.  Web applications are especially adept at leveraging cloud computing’s scalability model and most large scale web applications leverage this elastic expanding and contracting of capacity today.  Many new NoSQL models are beginning to emerge that signal that databases, in addition to application front end processing nodes, may soon benefit from similar models on a large scale.  This can certainly be leveraged for internal applications as well as publicly facing ones; however, internal applications rarely need to scale beyond a single system and so it is quite rare to find private clouds being leveraged in quite this way.

The dangers of cloud computing come in the form of additional complexity above and beyond normal virtualization.  There is the potential for complex storage needed to support the platform and there are more layers to learn and maintain.  Cloud computing’s ability to rapidly create and destroy systems can make it tempting for users to treat cloud resources as if they were persistent systems (which they can be made to be), which can result in data loss when users receive behavior very different from what is traditional and expected.  Possibly the biggest cloud concern is a human one: the increased likelihood of uncontrolled system sprawl as end users wildly spin up more and more new systems which, having been created by end users and not IT, are probably not tightly controlled and monitored, leaving systems in a rogue and oft-forgotten state.  This can lead to a maintenance and security nightmare as systems go unpatched and uncared for, increasing risk and draining resources.  Most worrisome is the possibility that systems will be created and forgotten and potentially exist without proper licensing.  Tracking and reporting on auto-provisioned systems carries process risk caused by the huge shift in how systems are created.  IT departments are accustomed to the heavy licensing processes necessary to maintain compliance, but with cloud computing there is a potential for this process to be exposed to business units in a way they are not at all equipped to handle.  There are accommodations for the licensing needs of cloud computing, but this is extra complexity and management that must be addressed.  Allowing systems to exist without direct IT department oversight clearly carries risk of a potentially unforeseen nature.

Private cloud ownership brings many exciting possibilities, but it is clear that these benefits and opportunities are not for everyone.  They cater to larger businesses, to those with good process control, to companies running specially adapted applications that are capable of taking advantage of the system-level elasticity of the resources and to those needing large scale, ad hoc system creation and destruction provided, as a service, for end users to self-provision.  Most large enterprises will find limited use for cloud computing in house.  Smaller organizations will rarely find cloud computing to be advantageous in the near future, if ever.

Stick to IT, Don’t Become Another Department

I see this very regularly: there seems to be a huge temptation for IT departments to overstep IT bounds and take on the roles and responsibilities of other company departments. In the SMB this may be even more true because there isn’t a clear demarcation of IT versus other departments, job roles are often shared, there aren’t good policies and procedures, there aren’t people doing those other jobs, etc. And there is always the possibility that these cross-domain responsibilities are truly assigned to IT. But nine times out of ten, this is not the case.

I believe that this behaviour stems from a few things:

  1. People tend to work in IT because they are “smarter” or at least “more interested” in most things than average people, so we tend to carry a lot of general knowledge that allows us to act as a competent member of any department (IT can do HR’s job in a pinch; is the reverse commonly true?)
  2. IT tends to get thrown whatever work other departments don’t want to do and can get away with handing off (can you print this for us? can you fix my microwave? the fuse has blown!  have you any experience with sprinklers?) So we get into this mindset from other departments’ behaviors towards us.
  3. We have a broad view into the organization as a whole, more so than almost any other department.
  4. We tend to be passionate about doing things the “right way” – which is often based on technical excellence or industry common practice but may not account for specific business needs or unique factors.

Put together, these and other factors make us want to get involved in anything and everything in and around the businesses that we serve. Questions around involvement in other departments’ activities come up regularly. To see just how skewed our thinking about this behavior tends to be, consider that IT people ask other IT people what their responsibility is rather than talking to their own business’ management, who are the ones actually making that decision. This isn’t about best practice, it is about following your own company’s rules.

Some examples of places where IT people like to jump in and try to be other departments:

  • “People are surfing Facebook at work, I have to stop them.” – Do you? Is this a business decision or is IT just making HR or security decisions for those departments? IT bringing this up as a topic is great, but the decision to police personal work habits should probably be left to the business owner, manager or a designated department like HR, legal or security.
  • Spying on end users, capturing passwords, etc. – Did the legal department ask you to do this? If not, don’t take on legal and security responsibilities, especially ones that might carry fines or even jail time in your local jurisdiction!  We risk turning the tables from suspecting someone else to being the culprits ourselves.
  • Pressuring the business about fire hazards, safety issues (that are not your own), etc. – See something, say something. Awesome. Don’t be the cause of bad behaviour yourself. But if the business isn’t concerned about these things once reported, unless it is a legal issue that you need to turn over to the police, don’t feel that this is IT’s job. The janitor doesn’t feel this way, HR doesn’t feel this way, IT shouldn’t either. If the business decides to not care, you shouldn’t either. (Example was AJ talking about stringing surge protectors together.)
  • The business can’t be down! – IT loves this one. This might be us pushing for high availability clusters or just overbuilt servers or who knows what. The reality is, this is 100% a financial decision that the accounting, finance and CFO teams should be making. IT has no idea how much downtime the business can or cannot tolerate – we just know how much it costs to mitigate how much risk. We feed data to the financial people, who come back with the risk/reward evaluation. IT shouldn’t be making financial decisions on any scale.

I could go on and on. HR, finance, security, facilities management, legal – we want to get involved in all of these job roles. But is it our responsibility to do so? Maybe it is in your case, but normally, it is not. We take on personal and professional risk in order to push our ideas and opinions on businesses that often aren’t interested in our input (in those areas.)

Step back and look at your relationship to the business. Are you making suggestions and decisions that line up with your role within the business and with the business’ unique needs? Keep perspective. It is so easy to get caught up in IT doing things the “right” way that we forget that the business might not share our opinions of what is right and wrong for them – and we aren’t in IT just for the sake of being in IT, but for the purpose of supporting the business.

[Reprinted from a post in Spiceworks, January 8, 2013]

Is it Time to Move to Windows 8?

Microsoft’s latest desktop reboot is out in the wild and lots of people are getting their hands on it and using it today.  Is it time to consider moving to Windows 8?  Absolutely.

That doesn’t mean that Windows 8 should be your main desktop later this afternoon, but considering a move to Windows 8 is important to do early.  It is a popular approach to hold off on new software updates until systems have been in production use for months or years and there is value to this concept – allowing others to vet, test and uncover issues while you sit back and remain stable on existing, well known software.  But there is a reason why so many businesses forge ahead and that is because using software early delivers the latest features and advantages as early as possible.

Unlike software coming from a small company with limited support and testing resources, Microsoft’s software is incredibly well tested, both internally and by the community, before it is available to end users.  Little software is more heavily vetted prior to release.  That doesn’t mean that release day rollouts are wise, but beginning to evaluate new products early has major advantages: the newest features are available as soon as possible to those who decide to use the new product, and there is the most time to find an alternative for those who decide to migrate away.  Early decision making is important to success.

The reality is that, while many businesses should take the time to evaluate Windows 8 against alternative solutions (a practice that should be carried out regularly, regardless of new features or changes to environments, to ensure that traditional choices remain the best current choices), nearly all businesses today will be migrating to Windows 8 and remaining in the Microsoft ecosystem for quite some time to come.

This means that many companies should be looking to make the jump to Windows 8 sooner rather than later.  Windows 8, while seemingly shockingly new and innovative, is based on the same Windows NT 6 family kernel that began with Windows Vista and Windows Server 2008, continued through the Windows 7 and Windows Server 2008 R2 era and is shared with Windows Server 2012.  This kernel is mature and robust and the vast majority of the code and features in Windows 8, user interface aside, are well tested and extremely stable.  Windows 8 uses fewer resources, on the same hardware, than Windows 7 which, in turn, was lighter and faster than Windows Vista.  The sooner that you move to Windows 8, the sooner you get more performance out of your existing hardware and the longer you have to leverage that advantage.

Windows 8 brings some big changes that will impact the end users, without a doubt.  These changes can be, in some cases, quite disruptive but with proper training and preparation users should return to regular productivity levels in a reasonable amount of time and often will be more productive once they are comfortable with the new environment and features.  Those that do not fall into one of these two categories are the smaller, niche user group that are prime candidates for moving to a completely different ecosystem where their needs can be more easily met.

If you are an organization destined to be running Windows 8, or its successors, “someday” then most likely you should be running Windows 8 today so that you can start leveraging its advantages as soon as possible and use them as long as possible.  If Windows truly is the platform that is best for you, you should embrace it and accept the “hit” of transitioning to Windows 8 now, swallow that bitter pill and be done with it.  Then, for the next several years, while your competitors are whining about having to move to Windows 8 “someday,” you will be happily leveraging your older hardware, your more efficient workflows and your more modern systems day after day, reaping the benefits of an early migration to a stable platform.

It is common for IT departments to take a “wait and see” approach to new system migrations.  I am convinced that this is created by a culture of hoping that IT staff will leave their current positions before a migration occurs and that they will land a new position elsewhere where they have already migrated.  Or perhaps they hope to avoid the migration completely awaiting a later version of Windows.  This second argument does carry some weight as many shops skip operating system revisions but doing so often brings extra overhead in security issues, application compatibility effort and other issues.

Windows 8 is unique in that it is the third release of the Windows NT 6 kernel series, so it comes as a rare, very stable late release member of its family (the NT 6 family is sometimes called the “Vista family.”)  Windows 8’s NT designation is 6.2.  The only other Microsoft NT release line to reach x.2 status was the NT 5.2 kernel of Windows Server 2003, Server 2003 R2 and Windows XP Professional x64 Edition – a part of the Windows 2000 family.  Late release kernels are important because they tend to deliver the utmost in reliability and represent an excellent point at which to invest in a very long term deployment strategy that can last for nearly a decade.

Whether or not you agree with Microsoft’s unified platform vision or the radical approach to user interface included in Windows 8, you need to decide if you are continuing down the path of the Microsoft platform and, if so, embrace it rather than fight it and begin evaluating whether a move to Windows 8 and, by extension, Windows Server 2012 is right for you.  Don’t avoid Windows 8; it isn’t going to go away.  For most shops, making the decision to move today will sow the seeds of long term benefits that can be reaped for years to come.


Hello, 1998 Calling….

Something magic seems to have happened in the Information Technology profession somewhere around 1998.  I know, from my own memory, that the late 90s were a special time to be working in IT.  Much of the architecture and technology that we have today stems from this era.  Microsoft moved from their old DOS-based products to Windows NT based, modern operating systems.  Linux became mature enough to begin appearing in business.  Hardware RAID became common, riding on the coattails of Intel’s IA32 processors as they finally became powerful enough for many businesses to use seriously in servers.  The LAN became the business standard and all other models effectively faded away.  The Windows desktop became the one and only standard for regular computing and Windows servers were rapidly overtaking Novell as the principal player in LAN-based computing.

What I have come to realize over the last few years is that a large chunk of the communal wisdom of the industry was adopted during these formative and influential years of the IT profession and has since passed into myth, much like the teachings of Aristotle, who was for millennia considered the greatest thinker of all time and not to be questioned, stymieing scientific thought and providing a cornerstone for the dark ages.  A foundation of “rules of thumb” used in IT has been passed from mentor to intern, from professor to student, from author to reader over the past fifteen or twenty years, many of them learned by rote and treated as infallible truths of computing without any thought given to the reasoning and logic behind the initial decisions.  In many cases so much time has come and gone that the factors behind the original decisions are lost or misunderstood, as those hoping to understand them today lack firsthand knowledge of computing from that era.

The codification of IT in the late nineties happened on an unprecedented scale, driven primarily by Microsoft’s sudden lurch from lowly desktop maker to server and LAN ecosystem powerhouse.  When Microsoft made this leap with Windows NT 4 they reinvented the industry, a changing of the guard, with an entirely new generation of SMB IT Pros being born and coming into the industry right as this shift occurred.  These were the years leading up to the Y2K bubble, with the IT industry swelling its ranks as rapidly as it could find moderately skilled, computer-interested bodies.  This meant that everything had to be scripted (steps written on paper, that is) and best practices had to be codified to allow those with less technical background and training to work.  It was a perfect environment for Microsoft and the “never before seen” friendliness of their NT server product.  All at once the industry was full of newcomers without historical perspective, without training and experience, and with easy to use servers whose graphical interfaces made them accessible to anyone.

Microsoft leapt at the opportunity and created a tidal wave of documentation, best practices and procedures to allow anyone to get basic systems up and running quickly, easily and, more or less, reliably.  To do this they needed broad guidelines that were applicable in nearly all common scenarios, they needed them written in clear, published form and they needed to guarantee that the knowledge was being assimilated.  Microsoft Press stepped in with the official publications of the Microsoft guidelines and, right on its heels, Microsoft’s MCSE program came into the spotlight, totally changing the next decade of the profession.  There had been other industry certifications before the MCSE, but the Windows NT 4 era and the MCP / MCSE certification systems were the game changing events of the era.  Soon everyone was getting boot camped through certification, quickly memorizing Microsoft best practices and recommendations, learning them by rote and getting certified.

In the short term, the move did wonders for providing Microsoft with an army of minimally skilled, but skilled nonetheless, supporters who had their own academic interests aligned with Microsoft’s corporate interest, forming a symbiotic relationship that completely defined the era.  Microsoft was popular because nearly every IT professional was trained on it, and nearly every IT professional encouraged the adoption of Microsoft technologies because they had been trained and certified on it.

The rote guidelines of the era touched many aspects of computing; many are probably still unidentified to this day, so strong was the pressure that Microsoft (and others) put on the industry at the time.  Most of today’s concepts of storage and disk arrays, filesystems, system security, networking, system architecture, application design, memory, swap space tuning and countless others all arose during this era and passed, rather quickly, into lore.  At the time we were aware that these were simply rules of thumb, subject to change just as they always had been based on changes in the industry.  Microsoft, and others, tried hard to make it clear what underlying principles created the rules of thumb.  It was not their intention to create a generation having learned by rote, but it happened.

That generation went on to become the effective founding fathers of modern LAN management.  In the small and medium business space the late 1990s represented the end of the central computer and remote terminal design, the Internet became ubiquitous (providing the underpinnings for the extensive propagation of the guidelines of the day), Microsoft washed away the memory of Novell and LANtastic, Ethernet over twisted pair completely abolished all competing technologies in LAN networking, TCP/IP beat out all layer three networking competitors and more.  Intel’s IA32 processor architecture began to steal the thunder from the big RISC processors of the previous era and from the obscure sixteen and thirty two bit processors that had been attempting to unseat Intel for generations.  The era was defining to a degree that few who have come since will ever understand.  Dial up networking gave way to always-on connections.  Disparate networks that could not communicate with each other lost to the Internet and a single, global networking standard.  Vampire taps and hermaphrodite connectors gave in as RJ45 connectors took to the field.  The LAN of 1992 looked nothing like the LAN of 1995.  But today, what we use, while faster and better polished, is effectively identical to the computing landscape as it was by around 1996.

All of this momentum, whether intentional or accidental, created an unstoppable force of myth driving the industry.  Careers were built on this industry wisdom, taught around the campfire at night.  One generation clings to its established beliefs, no longer knowing why it trusted those guidelines or whether they still apply, while another is taught them with little way of knowing that it is being handed distilled rules of thumb that were meant to be taught alongside the coinciding background knowledge and understanding, and that were designed not only for a very specific era, roughly the band from 1996 to 1999, but also, in a great many cases, for very specific implementations or products, generally Windows 95 and Windows NT 4 desktops and Windows NT 4 servers.

Today this knowledge is everywhere.  Ask enough questions and even young professionals still at university or doing a first internship are likely to have heard at least a few of the more common nuggets of conventional IT industry wisdom.  Sometimes the recommendations, applied today, are nearly benign, representing little more than inefficiency or wasted performance.  In other cases they may represent pretty extreme degrees of bad practice today, carrying significant risk.

It will be interesting to see just how long the late 1990s continue to so vastly influence our industry.  Will the next generation of IT professionals finally issue a broad call for deep understanding and question the rote learning of past eras?  Will misunderstood recommendations still be commonplace in the 2020s?  At the current pace, it seems unlikely that the thinking of the industry will change much prior to 2030.  IT has been attempting to move from its wild west, with everyone distilling raw knowledge into practical terms on their own, to large scale codification like other, similar fields such as civil or electrical engineering, but the rate of change, while tremendously slowed since the rampant pace of the 70s and 80s, still remains so high that the knowledge of one generation is nearly useless to the next, and only broad patterns, approaches and thought processes have great value to be taught mentor to student.  We may easily face another twenty years of the wild west before things begin to really settle down.