{"id":1134,"date":"2017-04-20T15:06:04","date_gmt":"2017-04-20T20:06:04","guid":{"rendered":"http:\/\/www.smbitjournal.com\/?p=1134"},"modified":"2017-04-20T15:40:33","modified_gmt":"2017-04-20T20:40:33","slug":"rethinking-long-term-support-releases","status":"publish","type":"post","link":"https:\/\/smbitjournal.com\/2017\/04\/rethinking-long-term-support-releases\/","title":{"rendered":"Rethinking Long Term Support Releases"},"content":{"rendered":"
Traditionally, Long Term Support (LTS) operating system releases have been the bulwark of enterprise deployments. This is the model used by IBM, Oracle, Microsoft, SUSE and Red Hat, and it has been the conventional thinking around operating systems since the beginning of support offerings many decades ago.
In the past it was common for both server and desktop operating system releases to follow this model, but in the Linux space specifically this began to be shaken up as less formal products were free to experiment with more rapid, unsupported or simply unstructured releases. In the primary product space, openSUSE, Fedora and Ubuntu all provided short term support or rapid release offerings. Instead of release cycles measured in years and support cycles closing in on a decade, they shortened release cycles to months and support to just months, or a few years at most.
In the desktop space, getting new features and applications sooner, instead of focusing primarily on stability as was common on servers, often made sense and brought the added benefit that new technologies or approaches could be tested on faster release cycle products before being integrated into long term support server products. Fedora, for example, is a proving ground for technologies that will, after proving themselves, make their way into Red Hat Enterprise Linux releases. By using Fedora, end users get features sooner, get to learn about RHEL technologies earlier, and Red Hat gets to test the products on a large scale before deploying them to critical servers.
Over time the stability of short term releases has improved dramatically, and increasingly these systems are seen as viable options for servers. These systems get newer enhancements, features and upgrades sooner, which is often seen as beneficial.
A major benefit of any operating system is its support ecosystem, including the packages and libraries that are supported and provided as part of the base operating system. With long term releases, we often see critical packages aging dramatically throughout the life of the release, which can cause problems with performance, compatibility and even, in extreme cases, security. This forces users of long term release operating systems to choose between continuing to live with the limitations of the older components or integrating new components themselves, which often breaks the fundamental value of the long term release product.
Because the goal of a long term release is stability and integration testing, replacing components within the product to "work around" the limitations of an LTS means that those components are no longer being treated in an LTS manner and that integration testing from the vendor is most likely no longer happening, or if it is, not to the same degree. In effect, the result is a self-built short term release product, but with legacy core components and less oversight.

In reality, in most respects, doing this is worse than going directly to a short term release product. Using a short term or rapid release product allows the vendor to maintain the assumed testing and integration, just with a faster release and support cycle, so that the general value of the long term release concept is maintained and all components of the operating system, rather than just a few, are updated. This allows for more standardization, industry testing, shared knowledge and integration than with a partial LTS model.

Maybe the time has come to rethink the value of long term support for operating systems. For too long, it seems, the value of this approach was simply assumed and followed, and certainly it had, and has, merits; but the operating system world has changed since this approach was first introduced. The need for updates has increased while the rate of change of things like kernels and libraries has slowed dramatically. More powerful servers have moved compatibility higher up the stack, and instead of software being written to an OS it is often written for a specific version of a language, runtime or other abstraction layer.

Shorter release cycles mean that systems get features, top to bottom, more often. Updates between "major" releases are smaller and less impactful. Changes from updates are more incremental, providing a more organic learning and adaptation curve. And most importantly, the need to replace carefully tested and integrated system components with third party provided versions becomes, effectively, unheard of.

Stability for software vendors remains a value of long term releases and will keep long term releases in use for a long time to come. But for the system administrator, the value of this approach seems to be decreasing and, I feel personally, has reached an inflection point in recent years. It used to seem expected and normal to wait two or three years for packages to be updated, but today this feels unnecessarily cumbersome. It seems increasingly common that higher level components are built with a requirement of newer underlying components; an expectation that operating systems will either be more current or that portions of the OS will be updated separately from the rest.

A heavy reliance on containerization technologies may reverse this trend in some ways, but in ways that reduce the value of long term releases at the same time. Containerization reduces the need for extensive capabilities in the base operating system, making it easier and more effective to update the base more frequently for improved kernel, filesystem, driver and container support while leaving libraries and other dependencies in the containers. Applications that need long term support dependencies can have them met inside the container, and applications that can benefit from newer components can be addressed in the same way.
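To make that split concrete, here is a minimal sketch of what it might look like in practice. The image, package, paths and start command below are purely illustrative, not taken from any particular product: the host runs a lean, frequently updated OS providing little more than the kernel, drivers and a container runtime, while an LTS userland is pinned inside the container image for the application that depends on it.

```dockerfile
# Hypothetical sketch: the application's long term support dependencies
# live inside the image, so the host OS is free to follow a rapid
# release cycle for kernel, filesystem, driver and runtime improvements.
FROM ubuntu:16.04

# Pin the older libraries the application was built against inside the
# container rather than holding the whole host OS back for them.
# (libssl1.0.0 stands in here for any aging dependency.)
RUN apt-get update && \
    apt-get install -y --no-install-recommends libssl1.0.0 && \
    rm -rf /var/lib/apt/lists/*

# The application directory and start command are placeholders.
COPY app /opt/app
CMD ["/opt/app/start"]
```

The design point is that the host can now be upgraded or even reinstalled on a short cycle without retesting the application's dependency stack, while the image carries the long term support burden on its own schedule.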
Of course, virtualization has played a role in reducing the value of long term support models by making rapid recovery and duplication of systems trivial. Some of the stability that we have needed long term support releases to address is now provided by the virtualization layer; hardware abstraction improves driver stability in very important ways. In the same vein, devops style support models also reduce the need for long term support and make server ecosystems more agile and flexible. Trends in system administration paradigms are tending to favour more modern operating systems.

Time will tell if these trends continue in the direction that they are headed. For myself, this past year has been an eye opening one that has seen me move my own workloads, after a decade of staunch support for very long term support products, to rapid release ones, and I must say, I am very happy with the change.