All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves).  Began transitioning to IT in 1994 with my first mixed role in system administration.

IT’s Most Needed Skills

IT does not exist in a bubble.  IT is a business enabler, a way of taking an existing business and making it more efficient, cost effective, nimble and capable.  Except for the home hobbyist, and even there this isn’t quite true, IT is subject to the business that it supports.  It has a goal, an objective and a budget.  The business provides the context in which IT exists.

I speak with a wide array of IT professionals every day.  I work with both the enterprise and small business markets and I work with a review board overseeing IT and software development programs at a university.  On that board we were asked, “What is the single, most critical skill lacking in college graduates seeking jobs in IT today?”

The answer to that question was overwhelmingly “the ability to write and communicate effectively.”  No, it was not a technology skill.  It was not even a skill taught by the computer or technology department.  What we needed was for the English department to push these students harder and for the university to require more, and harder, English classes for non-majors – and to demand that those skills be applied to classes taken in all disciplines rather than relegated purely to English-focused classes.

The ability to communicate effectively is critical in any profession but IT especially is a field where we need to be able to communicate large amounts of data, both technical and esoteric, rapidly, accurately and with extreme precision.  Most fields don’t penalize you to the same degree as IT for not knowing the correct use of white space or capitalization, spelling or context.  IT demands a level of attention to detail rare even in professional fields.

As a prime example, I have seen the misuse of “Xen server” to mean “XenServer” no less than twenty times in an attempt to get technical assistance – which inevitably led to useless advice, since these are the proper names of two different products with unique configurations, vendors and troubleshooting procedures.  How many lost hours of productivity for each of those companies have happened just because someone could not properly identify or communicate the software product with which they were seeking assistance?  Worse yet, I’ve seen this same product referred to as ZenServer or ZEN server – both of which are the correct names for other software products.  Yes, four different products that are all homophones and require proper spelling, spacing and capitalization to reliably differentiate one from another.  The worst scenario is when someone writes “Xenserver” or “Xen Server”, neither being the exact name of any product, where the ambiguity means that there are at least two products equally far from matching what is given.  The person speaking often feels that the need for precision is “annoying” but fails to understand why the advice that they receive doesn’t seem to apply to their situation.

I’ve seen confusion come from many written inaccuracies – mistaking versions of Windows or confusing “VMware Server” for “VMware ESXi” because someone refers to both, or either, simply by the name of the vendor rather than the product, forgetting that that one vendor makes at least five different virtualization products.  These are basic written language skills necessary for successful work in IT.  Not only does lacking this skill create technical challenges in communicating with peers but it also implies an inability to document or search for information reliably – some of the most common and critical IT skills.  This, of course, also means that an IT professional in this position may be responsible for purchasing the wrong product from the wrong vendor simply because they did not take the time to be accurate in repeating product or vendor names, or may cause system damage by following inappropriate advice or documentation.

Good communication skills go far beyond technical documentation and peer interactions – being able to communicate with the business or other support groups within the organization, with vendors or with customers is extremely important as well.  IT, more than nearly any other field, acquires, processes and disseminates information.  If IT professionals are unable to do so accurately, their value diminishes rapidly.

The IT professional seeking to advance their career beyond pure technical pursuits needs the ability to interact with other departments, most notably operations and business management.  These are areas within most companies where the written word, as well as presentation, is highly valued, and the IT team member able to present recommendations to management will have better visibility within the organization.  Technology departments need people with these skills in order to successfully present their needs to the business.  Without this skill within the ranks, IT departments often fail to push critical projects, secure funding or obtain the visibility necessary to work effectively within the organization.

The second big skill needed in IT departments today is an understanding of business – both business in general and the specific business of their own organization.  As I said at the beginning of this article, IT is a business enabler.  If IT professionals do not understand how IT relates to their business they will be poorly positioned to evaluate IT needs and make recommendations in the context of the business.  Everything that IT does, it does for the business, not for technology and not for its own purposes.

Within the IT ranks it is easy to become excited about new products and approaches – we love these things and likely this was a factor in our wanting to work in IT.  But finding the latest software release to be exciting or the latest, fastest hardware to be “neat” are not sentiments that will pass muster with a business professional who needs to understand the ramifications of a technology investment.  IT professionals wishing to move beyond being purely technology implementers into being technology recommenders and advisers need to be able to speak fluently to business people in their own language and to frame IT decisions within the context of the business and its needs.

Using Certifications to Drive Education

There is little denying that Information Technology is a field driven by certification.  What other field is filled so completely with certifications for every approach, niche, vendor, product, revision and career path?

Some industries, but relatively few, have broad certifications.  Automotive repair is a key one to which we are often exposed, where it is possible, and probably likely, that repair specialists will be certified by some combination of an industry certifying body as well as individual vendor or even product certification authorities.  But even in industries such as automotive repair it would be uncommon to find an individual technician with more than a handful of certifications.  In IT we often expect that even relatively junior IT staff may have a dozen or more certifications in addition to their experience and formal education from a collegiate level body.

Certifications are simply a way of life in IT and probably for good reason.  Few fields require the style of knowledge that IT does, blending vendor specifics and general best practices, and very few are as broad an industry; IT is exceptional in what it covers and is arguably not a single field at all.  Certificates are one of the few ways of easily displaying areas of focus, level of skill, devotion and passion.

In my own career I have found another great use for certificates: as a driver for education.  We all know that in IT we need to learn new things all of the time.  Continuous self-education is just part of what we do.  One of the challenges to this process, however, is remaining focused and setting goals.  It is easy to find a broad, sweeping goal, like “learning Windows Administration”, but be left with little idea where to begin, what avenues to head down, what knowledge is deemed to be foundational and where to end up when all is said and done.  And when finished, how do we convey this newfound knowledge and experience to current or potential employers?

There are, of course, many ways to tackle this problem.  Finding good books on Windows Administration would be an excellent start.  Many great books exist and I think that most people would find themselves coming to this conclusion if asked to consider how best to go about learning a large-scope topic such as this.  And for those who prefer, there are many free and paid video and computer-based training options.  These can be wonderful as well.  Much of this depends on the type of learner that you are.

One of the things that I like most about certifications is that they define a very clear goal and nearly always have a natural selection of training and educational materials associated with them either officially or unofficially.  Windows Administration is an excellent example, again.

If I am pursuing an education around Windows Administration, what better starting point could I have than the recommended educational path from the vendor in question?  This does not mean that vendor education is the be-all and end-all of this process, but it is almost always the best starting point for developing a firm foundation in the basics, establishing a good, rounded knowledge base from which to grow and having a common base of knowledge shared with other professionals in a similar arena.  In this example, Microsoft itself provides clear starting points into Windows Administration, guidance on the common knowledge expected before beginning the process, ideas on how to do hands-on experiential learning, paths for focusing on specific areas of interest and straightforward advancement from entry level certs to mid level certs to advanced certs.

Microsoft continues to be an excellent example because they publish widely available, high quality educational materials, primarily in book and online video form, and they even provide online labs and downloadable products for trial and education.  Using the official curriculum as a framework for learning is a great way to stay on target and drive forward.  Spending time digging into hands-on experimentation with concepts as they are learned, treating the certification as mandatory so that time is spent even on less interesting areas and concepts, and learning the background and reasoning behind designs and practices are just some of the benefits.

Certifications are helpful from a purely career advancement standpoint, but I think looking at them in this manner does a disservice to the certification process.  Certifications are so much more valuable to you as an individual, and to any employer, current or future, when they are not about the certification at the end, but rather about the education along the way.

Much of the best learning that I did in IT was done during my serious certification days.  Because I used books heavily, did many hands-on labs and worked through one certification after another, taking the time to practice and learn and not pushing to get the certificate as quickly as possible and just move on, I gained a firm foundation in the basics, learned many product specifics, became aware of published best practices and learned how specific vendors saw, and sought to have me see, the IT world.

Certificates themselves are not enough, of course; they require great dedication and a desire to learn.  The value of the certifications, beyond getting to show them off once completed, is in being a guiding hand keeping the ship on course as you gain the fundamental knowledge of a topic – on course within the narrow topic of a single certification, but also, in many cases, in charting a more complex route through a series of certifications and providing a logical progression over time.

Certifications also provide insight into the skills and areas of study that vendors and certification authorities deem to be most important.  Once upon a time Microsoft offered foci in Exchange, Proxy Server, SQL Server, IIS and even Internet Explorer.  Today foci still include Exchange and SQL Server but other concepts like application platforms have moved into “trivial, basic skillsets” and new areas of note include virtualization, cloud platforms and hosted software-as-a-service management.

Certifications also tend to force a coordinated effort on learning the latest features and trends within products, not just new product areas themselves.  This is especially important as vendors are likely to use certifications to begin to guide their expert base towards mindsets that will be useful in the future.  If a vendor knows what is coming down the pike in a revision or two they will sometimes begin to set up foundational knowledge or start to encourage different types of designs via the certification process early so that their supporters in the field are as prepared as possible when new products or features are released.

Remember that certifications, like any education, are a means and not an end.  By changing our perspective on certifications from one of proving what we know to one of a tool for learning, we can make the ends that much greater and obtain far better value from the process.


Rethinking Long Term Support Releases

Traditionally, Long Term Support operating system releases have been the bulwark of enterprise deployments.  This is the model used by IBM, Oracle, Microsoft, SUSE and Red Hat and it has been the conventional thinking around operating systems since the beginning of support offerings many decades ago.

It has been common in the past for both server and desktop operating system releases to follow this model, but in the Linux space specifically we began to see this get shaken up as less formal products were free to experiment with more rapid, unsupported or simply unstructured releases.  In the primary product space, openSUSE, Fedora and Ubuntu all provided short term support or rapid release offerings.  Instead of release cycles measured in years and support cycles closing in on a decade, they shortened release cycles to months and support to just months or a few years at most.

In the desktop space, getting new features and applications sooner, instead of focusing primarily on stability as was common on servers, often made sense and brought the added benefit that new technologies or approaches could be tested on faster release cycle products before being integrated into long term support server products.  Fedora, for example, is a proving ground for technologies that will, after proving themselves, make their way into Red Hat Enterprise Linux releases.  By using Fedora, end users get features sooner, get to learn about RHEL technologies earlier and Red Hat gets to test the products on a large scale before deploying to critical servers.

Over time the stability of short term releases has improved dramatically and increasingly these systems are seen as viable options for server systems.  These systems get newer enhancements, features and upgrades sooner which is often seen as beneficial.

A major benefit of any operating system is its support ecosystem, including the packages and libraries that are supported and provided as part of the base operating system.  With long term releases, we often see critical packages aging dramatically throughout the life of the release, which can cause problems with performance, compatibility and even security in extreme cases.  This forces users of long term release operating systems to choose between continuing to live with the limitations of the older components or integrating new components themselves, which often breaks the fundamental value of the long term release product.

Because the goal of a long term release is stability and integration testing, replacing components within the product to “work around” the limitations of an LTS means that those components are not being treated in an LTS manner and that integration testing from the vendor is most likely no longer happening or, if it is, not to the same degree.  In effect, what happens is that this becomes a self-built short term release product, but with legacy core components and less oversight.
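As a purely illustrative sketch of this pattern (the repository and package names below are examples of my own choosing, not a recommendation), pulling a newer component onto an LTS system from a third-party source looks something like this:

    # Illustrative only: adding a well known third-party PPA to an Ubuntu LTS
    # system to get a newer PHP than the LTS repositories ship.  Everything
    # installed this way sits outside the vendor's LTS integration testing
    # and has to be tracked and supported by the admin from here on.
    sudo add-apt-repository ppa:ondrej/php
    sudo apt-get update
    sudo apt-get install php8.3

From that point forward the PHP stack follows the third party’s release cadence rather than the LTS vendor’s – exactly the self-built, partially short term release described above.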

In reality, in most respects, doing this is worse than going directly to a short term release product.  Using a short term or rapid release product allows the vendor to maintain the assumed testing and integration, just with a faster release and support cycle, so that the general value of the long term release concept is maintained and with all components of the operating system, rather than just a few, being updated.  This allows for more standardization, industry testing and shared knowledge and integration than with a partial LTS model.

Maybe the time has come to rethink the value of long term support for operating systems.  For too long, it seems, the value of this approach was simply assumed and followed, and certainly it had and has merits; but the operating system world has changed since this approach was first introduced.  The need for updates has increased while the change rates of things like kernels and libraries have slowed dramatically.  More powerful servers have moved compatibility higher up the stack, and instead of software being written to an OS it is often written for a specific version of a language or runtime or other abstraction layer.

Shorter release cycles mean that systems get features, top to bottom, more often.  Updates between “major” releases are smaller and less impactful.  Changes from updates are more incremental, providing a more organic learning and adaptation curve.  And most importantly, the need to replace carefully tested and integrated system components with third party provided versions becomes, effectively, unheard of.

Stability for software vendors remains a value of long term releases and will keep them necessary for a long time to come.  But for the system administrator, the value of this approach seems to be decreasing and, I feel personally, has reached an inflection point in recent years.  It used to seem expected and normal to wait two or three years for packages to be updated, but today this feels unnecessarily cumbersome.  It seems increasingly common that higher level components are built with a requirement of newer underlying components – an expectation that operating systems will either be more current or that portions of the OS will be updated separately from the rest.

A heavy reliance on containerization technologies may reverse this trend in some ways, but in ways that also reduce the value of long term releases at the same time.  Containerization reduces the need for extensive capabilities in the base operating system, making it easier and more effective to update it frequently for improved kernel, filesystem, driver and container support, while leaving libraries and other dependencies in the containers; applications that need long term support dependencies can have them met inside their containers, and applications that can benefit from newer components can be addressed in the same way.
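As a minimal, hypothetical sketch of that split (assuming Docker on the host; the image tags are simply examples), the frequently updated host supplies the kernel, filesystem, drivers and container runtime while each application carries whatever userland it needs:

    # Illustrative only: one application keeps an older, long-supported
    # userland inside its image while another uses a much newer one; the
    # host OS underneath can follow a rapid release cycle without
    # disturbing either.
    docker run --rm ubuntu:20.04 cat /etc/os-release
    docker run --rm fedora:41 cat /etc/os-release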

Of course virtualization has played a role in reducing the value of long term support models by making rapid recovery and duplication of systems trivial.  Stability that we’ve needed long term support releases to address is partially addressed by the virtualization layer; hardware abstraction improves driver stability in very important ways.  In the same vein, devops style support models also reduce the need for long term support and make server ecosystems more agile and flexible.  Trends in system administration paradigms are tending to favour more modern operating systems.

Time will tell if trends continue in the direction that they are headed.  For myself, this past year has been an eye opening one that has seen me move my own workloads from a decade of staunch support for very long term support products to rapid release ones and I must say, I am very happy with the change.

Linux Distro Release Schedules

One of the aspects of the Linux world, compared to the Windows one, is the variety and challenge of different release schedules.  In the Windows world this is pretty simple: there is one product and it releases when it releases, which is roughly once every two years.  Everyone working on Windows is very aware of the upcoming releases, when they will happen, when they go into release candidate, when their end of life is and so forth.  It is very clear and very simple.

In the Linux world, this is very different.  Of course, the biggest difference is that Windows is one product, one thing coming from a single vendor.  In Linux we are talking about a “family” of related products from many vendors, some with multiple products.  This is on top of the kernel release schedule that comes from Linux itself – which we will not worry about here.

Each distro is unique and makes its own release decisions.  In fact, release schedule is often a key factor in what differentiates one distro from another.  For example, all three primary enterprise Linux vendors offer two different products and, in all three cases, the differentiation is primarily around release schedule!  So the concept of release schedule is certainly an important one in this marketplace.

There are three primary release “styles” that we find across all operating systems, not only Linux distros: long term release, short term release and rolling release.  Each release style serves a different purpose, but all generally follow a similar set of rules.

The idea of a release is that the packages within it will not change outside of security and stability patches.  Of course, this is predicated on the behaviour of enterprise vendors as they exist today; any given distro may choose to follow established norms or not.  There are no inherent rules of the universe that make this behaviour what it is, but it is a strong convention and the concept of a release is based upon this convention.

Long Term Release

This release model is the most common in the general field of enterprise operating systems and is followed outside of Linux by systems like FreeBSD, Solaris, AIX, Mac OS X and Windows.  Long Term Releases, often referred to as LTS, are designed around slow system change rates, providing years, sometimes many years, between major system releases, allowing IT teams to avoid migrations for much longer and giving software vendors targets that are stable for a long time.

In the enterprise Linux world, all vendors offer at least one Long Term Release product.  These are the most commonly deployed.

From Red Hat, the RHEL and CentOS products are long term releases with extremely long release cycles – not on a set schedule but currently releasing every three to four years.

SUSE has two LTS products: SUSE Linux Enterprise Server and openSUSE Leap.  SLES maintains a release schedule that is currently between three and five years and openSUSE Leap is based relatively closely on the SLES releases.

Ubuntu’s LTS release is conveniently named LTS and releases every two years, in April of even-numbered years, like clockwork.  Ubuntu currently has the shortest release cycle of any LTS product in this category.

All Long Term Releases have minor releases that come out between the major releases and bring small changes or adjustments that are larger than would be appropriate to ship in a patch but not large enough to justify a new operating system release.  The idea of these minor releases is that they are small enough not to be “breaking”, allowing software that is targeted at the major release to remain functional throughout the major release cycle.  Major releases are considered “breaking”, with large changes such as significant new kernel features, changes in package choices, new compiler features, different libraries and so forth.
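As a small, hypothetical illustration of that convention (the version numbers are examples only): on an enterprise Linux system the minor release is visible right in the version string, and moving from, say, 8.3 to 8.4 is a non-breaking minor release while moving from 8 to 9 is a breaking major one.

    # Illustrative only: checking which major.minor release a RHEL or CentOS
    # style system is currently running.
    cat /etc/redhat-release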

Short Term or Rapid Release

Long term release schedules obviously create problems for those seeking more modern packages and features.  To address this, all enterprise Linux vendors offer a short term release product.

Red Hat provides the Fedora distribution, which releases roughly every six months, but on a flexible schedule.  Fedora is not exactly a separate distribution from RHEL and CentOS; instead, every so often a Fedora release is picked to be the “base” for a future RHEL and CentOS release.  The basis is not direct – some packages from later Fedora releases are sometimes added in and some changes are made – but the basics closely match a Fedora release.  The chosen Fedora release is frozen and put through extensive testing before turning into a long term RHEL release.

The SUSE family does not use a short term release product and is unique in this.

Ubuntu has a somewhat different strategy from Red Hat.  Ubuntu releases a product every six months, in April and October, on a very set schedule.  Every fourth release is earmarked as the long term release; the other three are short term releases.  This makes for a far simpler and more straightforward system than how Red Hat works, with short term release users and long term release users overlapping for six months every two years.

Rolling Release

The most rapid release schedule type is the rolling release, which happens essentially continuously.  This release strategy is uncommon, but is beginning to be taken more seriously in recent times.  Only SUSE, with the openSUSE Tumbleweed distribution, provides an enterprise rolling release system today.  Updates can be as frequent as every couple of days.

Unlike other release schedules, which take large groups of packages and “freeze” them as a single release, the rolling release has updates to individual packages coming as they are ready.  So updates are small, but constant.  This allows for simplified adaptation, keeping changes to a micro scale, but makes creating a single, predictable target very difficult.
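As a brief, hypothetical example (assuming an openSUSE Tumbleweed system), a rolling release is kept current with a full distribution upgrade rather than by migrating to a new versioned release:

    # Illustrative only: there is no "next version" to install on a rolling
    # release; the system is simply moved to the current state of the
    # rolling repositories, which may touch a few packages or a great many.
    sudo zypper refresh
    sudo zypper dist-upgrade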

Those looking for the most up to date packages and cutting edge features will find rolling releases to be the best way to keep everything as up to date as possible.


An important thing to understand about release schedules is that they are not directly tied to the length of support given to a release, nor do they indicate the amount of testing that goes into each release.

Each release style plays an important role in the system ecosystem and by having different styles of release the enterprise Linux world has greater variety and flexibility for addressing a greater range of needs than would otherwise be feasible.

Currently, long term releases are the most prominent and popular in systems administration, but this trend seems unlikely to continue.  Overall stability in the enterprise Linux space has increased and the need for currency is so often the more critical concern that more rapid distros are increasingly desired.