Linux Distro Release Schedules

One of the notable aspects of the Linux world, compared to the Windows one, is the variety and challenge of its different release schedules.  In the Windows world this is pretty simple: there is one product and it releases when it releases, which is roughly once every two years or so.  Everyone working on Windows is very aware of the upcoming releases, when they will happen, when they go into release candidate, when their end of life is and so forth.  It is very clear and very simple.

In the Linux world, this is very different.  Of course, the biggest difference is that Windows is one product, one thing coming from a single vendor.  In Linux we are talking about a “family” of related products from many vendors, some with multiple products.  This is on top of the kernel release schedule that comes from Linux itself – which we will not worry about here.

Each distro is unique and makes its own release decisions.  In fact, release schedule is often a key factor in what defines one distro from another.  For example, all three primary enterprise Linux vendors offer two different products and in all three cases, the differentiation is primarily around release schedule!  So the concept of release schedule is certainly an important one in this marketplace.

There are three primary release “styles” that we find across all operating systems, not only Linux distros: long term release, short term release and rolling release.  Each release style serves a different purpose, but all generally follow a similar set of rules.

The idea of a release is that the packages within a release will not change outside of security and stability patches.  Of course, this is predicated on the behaviour of enterprise vendors as they exist today; any given distro may choose to follow established norms or not.  There are no inherent rules of the universe that require this behaviour; but it is a strong convention and the concept of a release is based upon it.

Long Term Release

This release model is the most common in the general field of enterprise operating systems and is followed outside of Linux by systems like FreeBSD, Solaris, AIX, Mac OS X and Windows.  Long Term Releases, often referred to as LTS, are designed around slow system change rates, providing years, sometimes many years, between major system releases, allowing IT teams to avoid migrations for much longer and giving software vendors targets that are stable for a long time.

In the enterprise Linux world, all vendors offer at least one Long Term Release product.  These are the most commonly deployed.

From Red Hat, the RHEL and CentOS products are long term releases with extremely long release cycles – not on a set schedule, but currently releasing every three to four years.

Suse has two LTS products: Suse Linux Enterprise Server and openSuse Leap.  SLES maintains a release schedule that is currently between three and five years and openSuse Leap is based relatively closely on the SLES releases.

Ubuntu’s LTS release is conveniently named LTS and releases every two years on the even years, in April, like clockwork.  Ubuntu currently has the shortest release cycle for any LTS product in this category.

All Long Term Releases have minor releases that come out between the major releases and bring small changes or adjustments to the operating system that are larger than would be appropriate to deliver with a patch, but not large enough to justify the release of a new operating system.  The idea of these minor releases is that they are small enough not to be “breaking,” allowing software that is targeted at the major release to remain functional throughout the major release cycle.  Major releases are considered “breaking,” with large changes such as significant new kernel features, changes in package choices, new compiler features, different libraries, and so forth.

Short Term or Rapid Release

Long term release schedules obviously create problems for those seeking more modern packages and features.  To address this, most enterprise Linux vendors offer a short term release product.

Red Hat provides the Fedora distribution, which releases roughly every six months, but on a flexible schedule.  Fedora is not exactly a separate distribution from RHEL and CentOS; instead, every so often a Fedora release is picked to be the “base” for a future RHEL and CentOS release.  The basis is not direct; some packages from later Fedora releases are sometimes added in and some changes are made, but the basics closely match a Fedora release.  The Fedora release is frozen and put through extensive testing before turning into a long term RHEL release.

The Suse family does not offer a short term release product and is unique in this.

Ubuntu has a somewhat different strategy from Red Hat.  Ubuntu releases a product every six months, on a very set schedule.  Every fourth release is earmarked as the long term release; the other three are short term releases.  This makes for a far simpler and more straightforward system than how Red Hat works, with short term release users and long term release users overlapping for six months every two years.
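
As a simple illustration of the cadence just described, the following sketch (a hypothetical helper, not part of any Ubuntu tooling) checks whether a version string such as “22.04” matches the LTS pattern of an April release in an even year, with every other release being a short term one.

    # Illustrative sketch of the Ubuntu cadence described above: releases every
    # six months, with the April release of even years being the LTS.
    # This helper is hypothetical and not taken from any official tooling.

    def is_lts(version: str) -> bool:
        """Return True if an Ubuntu-style version such as '22.04' is an LTS."""
        year, month = (int(part) for part in version.split("."))
        return month == 4 and year % 2 == 0

    if __name__ == "__main__":
        for v in ["20.04", "21.10", "22.04", "23.10", "24.04"]:
            print(v, "LTS" if is_lts(v) else "short term")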

Rolling Release

The most rapid release style is the rolling release, which happens essentially continuously.  This release strategy is uncommon, but is beginning to be taken more seriously in recent times.  Today only Suse, with the openSuse Tumbleweed distribution, provides an enterprise rolling release system.  Updates can be as frequent as every couple of days.

Unlike other release schedules, which take large groups of packages and “freeze” them as a single release, the rolling release has updates to individual packages coming as they are ready.  So updates are small, but constant.  This allows for simplified adaptation by keeping changes to a micro scale, but makes creating a single, predictable target very difficult.

Those looking for the most up to date packages and cutting edge features will find rolling releases to be the best way to keep everything as up to date as possible.


An important point about release schedules is that they are not directly tied to the length of support given to a release, nor do they indicate the amount of testing that goes into each release.

Each release style plays an important role in the system ecosystem and by having different styles of release the enterprise Linux world has greater variety and flexibility for addressing a greater range of needs than would otherwise be feasible.

Currently, long term releases are the most prominent and popular in systems administration, but this trend seems unlikely to continue.  Stability across the enterprise Linux space has increased, and the need for currency is often the more critical concern, so more rapidly released distros are increasingly desired.

 

The High Cost of On Premises Infrastructure

IT infrastructure is a challenge for any company, especially companies that are not large enough to implement their own, full scale datacenters.  Like many things in IT, major challenges come in the form of lacking specific, seldom used expertise as well as lacking the scale to utilize singular resources effectively.

This lack of scale can come in many forms.  The obvious one is manpower.  Managing a physical computing infrastructure uses unique skills that are separate from IT itself and are often desired to be available “around the clock.”  These roles can vary from security to electrical to cooling and facilities to “datacenter technician” style staff.  Of course, smaller businesses simply do without these roles, but this raises the cost incurred on a “per server” basis to maintain the infrastructure.  Large businesses and dedicated datacenters leverage an efficiency of scale to make the cost of physically housing an IT infrastructure lower – either by actually lowering the cost directly or by raising the quality and reliability of the equipment.

The cost effectiveness of delivering power, cooling and datacenter services is only one aspect of the cost of IT infrastructure in a business.  The way many businesses attack this problem, by reducing infrastructure investment and staff, may counteract some amount of the up front costs of the infrastructure, but generally does so to the detriment of availability and longevity of equipment.  Whether it is a lack of ISP redundancy, an absence of diesel electric generators or shaving a year or two off of a server’s service life, these costs generally add up, often in ways that are difficult to identify and track.

We often see the effects of low quality infrastructure come out in the behaviour and expectations of smaller businesses.  For example, in the enterprise datacenter an average server lifespan may be ten years or more, but smaller businesses often assume that a server is worn out and unreliable in seven or eight years.  This increase in failure rate also leads to more concern about system failure.  Smaller businesses often see a higher, rather than a lower, need to have redundant systems even when lower revenue would normally suggest otherwise.  Small businesses are prone to investing heavily in high availability mechanisms, often at great expense, to mitigate a perceived risk of high system failure rates that larger businesses may be less likely to see.  These factors can combine to create a high cost through more rapid system replacement and a tendency towards overbuying hardware – sometimes even doubling the otherwise necessary investment to protect against risks created by lower quality facilities management.
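
To make the scale of this effect concrete, here is a minimal arithmetic sketch.  All of the figures are hypothetical assumptions chosen only for illustration; the point is simply how a shorter lifespan combined with doubled-up redundant hardware multiplies the annualized cost.

    # Hypothetical comparison of annualized hardware cost.  The server price,
    # lifespans and redundancy choice are assumptions for illustration only.

    server_price = 8000.00  # assumed purchase price of one server

    # Enterprise datacenter scenario: a single server kept in service ten years.
    datacenter_annual = server_price / 10

    # Small business scenario: a redundant pair replaced after seven years.
    small_business_annual = (server_price * 2) / 7

    print(f"Datacenter annualized cost:     ${datacenter_annual:,.2f}")
    print(f"Small business annualized cost: ${small_business_annual:,.2f}")
    print(f"Cost multiplier: {small_business_annual / datacenter_annual:.1f}x")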

This concept is not unique to information infrastructure.  In the audiophile world, while huge investments in high quality audio equipment are common, it is a rule of thumb that fifty percent of audio quality comes from the equipment and fifty percent comes from the environment into which it is placed.  This lesson applies to information infrastructure as well: lower cost gear may run longer and more reliably in a high quality physical environment than more expensive, better engineered equipment will in a lower quality one.

Of course, the most obvious components of lower reliability come from being unable to maintain redundant generators, independent power rails, adequate fuel supplies, uninterruptible power supply units, steady temperature and humidity, air filtration and, of course, highly redundant multi-path WAN access.  These are the aspects we think of all the time, and they are almost completely out of reach of all but the largest companies.  Even simple things like restricting access to only essential server room staff can be an insurmountable challenge in a small environment.

These challenges create an opportunity for the SME, SMB and SOHO business markets to look for ways to leverage combined scale.  While many companies today turn to ideas such as hosted cloud computing, the costs associated with elastically expanding capacity often make this impractical, as this same market struggles the most to have the ability to utilize that type of functionality.  Cloud computing can be an answer in some cases, but normally only for the very smallest of companies for whom a single server is too much scale, or for those companies so large that they have a DevOps-style automation infrastructure capable of scaling elastically with load demands and workloads that make sense for this process.  But these companies are the exception, not the norm.  More often hosted cloud computing makes sense for only a specific subset of public-facing workloads and only in some cases.

For the majority of companies too small to create the scale necessary to build out their own full scale IT infrastructure, the answer is likely going to be found in colocation.  It must be noted that there are potentially overarching locational or environmental factors that can make off-premises infrastructure impossible or at least impractical.  Most businesses, however, will not be subject to these limitations.

Colocation tackles the cost challenges of the smaller business market by generating the scale necessary to make high quality, dedicated information infrastructure facilities possible.  This includes staff, WAN connectivity, environmental controls, power, and expertise.  Cost savings can often come from surprising places including lower power cost per kilowatt hour, lower cost of cooling and power conditioning and higher real estate density.

It is often believed that colocation represents a cost premium service for businesses that have needs above and beyond the average, but in reality colocation often is, and should often be, chosen because it represents an opportunity to lower costs while also improving reliability.  Colocation, in most cases, will actually bring a cost savings on a month by month basis, providing an impressive return on investment potential over time: the initial cost can be equal or similar to other investments, but the ongoing monthly cost can be lower and, perhaps more importantly, the costs can become far more predictable with fewer risks and unexpected expenditures.

Because the cost of services is potentially very granular, it is actually far easier for colocation to lower the overall expenditure than is generally believed.  For example, a small business with just one or two servers would still need certain basics such as air conditioning and UPS support plus footprint space and security, all dedicated to only a very small amount of equipment.  In a colocation facility these servers may represent less than one percent of the load on a large, high efficiency cooling system, may use just a small fraction of a large UPS and so forth.
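
A rough sketch of that proration, again using entirely hypothetical facility costs, shows how small the share carried by one or two servers can be compared to funding dedicated equipment.

    # Hypothetical proration of shared facility costs in a colocation facility.
    # The facility-wide monthly costs and the tenant's one percent share are
    # assumptions for illustration only.

    facility_monthly_costs = {
        "cooling": 40000.00,            # shared high efficiency cooling plant
        "ups": 15000.00,                # shared UPS capacity
        "space_and_security": 25000.00,
    }

    tenant_share = 0.01  # one or two servers using roughly one percent of the facility

    tenant_monthly = sum(cost * tenant_share for cost in facility_monthly_costs.values())
    print(f"Prorated monthly facility cost for the tenant: ${tenant_monthly:,.2f}")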

Colocation also frees IT staff from performing datacenter functions, at which they are generally untrained and poorly qualified, to focus on the tasks at which they are more valuable and which they are actually intended to perform.  The datacenter tasks can then be performed by experienced, dedicated datacenter staff.

Calculating exact ROI can be challenging because individual cases are unique and depend heavily on the workloads, use cases, independent needs and environmental factors of an individual business and the colocation options considered.  But it should be approached with the mindset that colocation does not present only an opportunity for improvements in the quality or reliability of IT infrastructure services, nor only a potential return on investment, but that it may, in fact, do both of these things on top of fundamentally lowering costs overall.
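
For a sense of how such a calculation might look, here is a bare-bones payback period sketch.  The migration cost and monthly figures are hypothetical assumptions; real numbers would come from the specific business and colocation options being compared.

    # Hypothetical payback period for a move to colocation.  All figures are
    # illustrative assumptions, not data from the text.

    migration_cost = 12000.00     # assumed one-time cost of moving into colocation
    on_prem_monthly = 3500.00     # assumed current monthly infrastructure cost
    colocation_monthly = 2600.00  # assumed monthly cost after the move

    monthly_savings = on_prem_monthly - colocation_monthly
    payback_months = migration_cost / monthly_savings

    print(f"Monthly savings: ${monthly_savings:,.2f}")
    print(f"Payback period:  {payback_months:.1f} months")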

All IT is External

In IT we often talk about internal and external IT, but this perspective is always that of the IT department itself rather than that of the business, and I feel that this is very misleading.  Different departments within a company are generally seen, and feel, as if they are external to one another; often every bit as much as an external company feels.  For example, an IT department will often see management, operations or human resources as “foreign” departments at best and adversaries at worst.  It is common to feel, and possibly rightfully so, that different departments fail to even share common overarching goals.  IT tends to be acutely aware of this and expresses it often.

What we need to appreciate is that to the business management or owners, the IT department generally appears like an external agency, regardless of whether the people working in it are staff or actually from a service provider.  There are exceptions to this, of course, but they are rare.  IT is generally held behind a barrier of sorts and is its own entity.  IT commonly displays this in how it talks to or about management.  IT often thinks of system resources or the network as “belonging to IT,” clearly not thinking in terms of IT being just part of the company.  Both sides are commonly guilty of thinking of IT as a separate entity from the company itself.

This happens, of course, for any number of reasons.  Many IT workers choose IT because they are passionate about IT specifically, not the company or market that they are working in; their loyalty is to their IT career, not the business in question, and they would generally switch companies to advance their IT career rather than stay to advance an internal non-IT career.  IT professionals often struggle with interpersonal skills and so have a higher than average tendency to hide away, avoiding unnecessary contact with other departments.  IT tends to be busy and overworked, making socializing problematic.  IT work demands focus and availability, again making it difficult to socialize and interface with other departments.  IT is often kept isolated for security reasons, and IT is often seen as the naysayer of the organization – commonly delivering bad news or hindering projects.  IT typically has extremely high turnover rates, and almost no IT staff, especially in smaller businesses, are expected to be around for the long haul.  IT is often a conduit to outside vendors and is seen as connected to or associated with them in many ways.  IT is often behind a “blame barrier” where the organization (other than IT) on one side often seeks to blame IT for business decisions, creating a stronger “us and them” mentality.  IT exacerbates this with attitudes towards users and decision makers that are often distancing.  It is also extremely common for IT workers to be staffed via an agency in such a way that there are contract obligations, restrictions or payroll differences between IT and normal staff.

This creates a rather difficult situation for discussions involving the advantages of internal IT versus external IT.  It is common for internal IT staff to believe that having IT in house brings many benefits to the organization through loyalty, closeness or the ties of payroll.  But is this really the case?

To the business, internal IT is already, in most cases, external to its organization.  The fears that are often stated about external IT service providers, such as that they may not work in the business’ interests, may suddenly close up shop and disappear, might be overworked and not have enough available resources, may charge for work when idle, may not have the needed expertise, may see the network and resources as their own and not act in the interests of the business, may fail to document the systems or might even hold critical access hostage for some reason – are all fears that businesses have about their own IT departments, exactly as they have them about external IT service providers.

In fact, external service providers often provide a business with more legal recourse than employees do.  For example, internal IT employees can quit with zero notice and suffer only from having acted “unprofessionally” in their lack of notice, or can give just two weeks’ notice and not even have to worry about being unprofessional.  Yet replacing internal IT staff of any caliber will easily take months, and that is just the time before someone can be hired, let alone trained, indoctrinated and brought up to useful speed.  It is not uncommon, even in the enterprise, for the job search, hiring process and internal processes for access and so forth to take up to a year from the time the decision to begin interviewing is made until someone is a useful staff member.  An external IT service provider, by contrast, may be obligated to provide resources for coverage regardless of whether its own staff come and go.  There are far more possibilities for mitigating the staff turnover risks that employed IT staff present to a business.

Due to these factors, it is very common for a business to perceive both internal and external IT resources as roughly equal, with both being very much outsiders to the core organization.  Of course, in an ideal world, both would be treated very much as insiders and worked with as critical partners for planning, decision making, triage and so forth.  IT is critical to business thinking and the business is critical to IT thinking; neither is really functional without the other.

This context of the organizational management view of IT can be important for understanding how the business will react to IT as well as how IT should behave with management.  And it offers an opportunity for both to work on coming together, whether IT is ultimately internal or external, to behave more like a singular organization with a unified goal.