{"id":687,"date":"2015-01-14T11:51:34","date_gmt":"2015-01-14T16:51:34","guid":{"rendered":"http:\/\/www.smbitjournal.com\/?p=687"},"modified":"2017-02-19T04:35:30","modified_gmt":"2017-02-19T09:35:30","slug":"on-devops-and-snowflakes","status":"publish","type":"post","link":"https:\/\/smbitjournal.com\/2015\/01\/on-devops-and-snowflakes\/","title":{"rendered":"On DevOps and Snowflakes"},"content":{"rendered":"

One can hardly swing a proverbial cat in IT these days without hearing people talking about DevOps. \u00a0DevOps is the hot new topic in the industry, picking up where the talk of cloud left off, and to hear people talk about it one might believe that traditional systems administration is already dead and buried.<\/p>\n

First we must talk about what we mean by DevOps. \u00a0This can be confusing because, like cloud, an older term is often being stolen to mean something different or, at best, related to something that already existed. \u00a0Traditional DevOps was the merging of developer and operational roles. \u00a0In the 1960s through the 1990s, this was the standard way of running systems. \u00a0In this world the people who wrote the software were generally the same ones who deployed and maintained it. \u00a0Hence the merging of “developer” and “operations”, operations being a semi-standard term for the role of system administrator. \u00a0These roles were not commonly separated until the rise of the “IT Department” in the 1990s and the 2000s. \u00a0Since then, the return to the merging of the two roles has started to rise in popularity again primarily because of the way that the two can operate together with great value in many modern, hosted, web application situations.<\/p>\n

Where DevOps is talked about today, it is often not a strict merging of the developers and the operations staff but a modification of the operations role with a much heavier focus on code: not on the application itself but on defining application infrastructures as code, a natural extension of cloud architectures. \u00a0This can be rather confusing at first. \u00a0What is important to note is that traditional DevOps is not what is commonly occurring today; rather, a new “fake” DevOps has emerged where developers remain developers and operations remains operations, but operations has evolved into a new “code heavy” role that continues to focus on managing servers running code provided by the developers.<\/p>\n

What is significant today is that the role of the system administrator has begun to diverge into two related but significantly different roles, one of which is improperly called DevOps by most of the industry today (most of the industry being too young to remember when DevOps was the norm, not the exception, and certainly not something new and novel). \u00a0I refer to these two aspects of the system administrator role here as the DevOps and the Snowflake approaches.<\/p>\n

I use the term Snowflake to refer to traditional architectures because each individual server can be seen as a “unique Snowflake.” \u00a0They are all different, at least insofar as they are not managed in such a way as to keep them identical. \u00a0This doesn’t mean that they all have to be unique, just that they retain the potential to be. \u00a0In traditional environments a system administrator will log into each server individually to work on it. \u00a0Some amount of scripting is common to ease administration tasks, but at its core the role involves a lot of time working on individual systems.<\/p>\n

Easing administration of Snowflake architectures often involves attempts to minimize differences between systems in reasonable ways. \u00a0This generally starts with choosing a single standard operating system and version (e.g. Windows Server 2012 R2 or Red Hat Enterprise Linux 7) rather than allowing every server installation to be a different OS or version. \u00a0This standardization may seem basic, but many shops lack it even today.<\/p>\n

A next step is commonly creating a standard deployment methodology or a gold master image used for building all systems, so that the base operating system and all base packages, often including system customization, monitoring packages, security packages, authentication configuration and similar modifications, are standard and deployed uniformly. \u00a0This provides a common starting point for all systems to minimize divergence. \u00a0But these measures only ensure a standard starting point; over time, divergence in configuration must be anticipated.<\/p>\n

Beyond these steps, Snowflake environments typically use custom, bespoke administration scripts or management tools to maintain some standardization between systems over time. \u00a0The more commonalities that exist between systems the easier they are to maintain and troubleshoot and the less knowledge is needed by the administration staff. \u00a0More standardization means fewer surprises, fewer unknowns and much better testing capabilities.<\/p>\n

In a single system administrator environment with good practices and tooling, Snowflake environments can take on a high degree of standardization. \u00a0But in environments with many system administrators, especially those supported around the clock from many regions, and with a large number of systems, standardization, even with very diligent practices, can become very difficult. \u00a0And that is even before we tackle the obvious issues surrounding the fact that different packages and possibly package versions are needed on systems that perform different roles.<\/p>\n

The DevOps approach grows organically out of the cloud architecture model. \u00a0Cloud architecture is designed around automatically created and automatically destroyed, broadly identical systems (at least in groups) that are controlled through a programmatic interface or API. \u00a0This model lends itself, quite obviously, to being controlled centrally through a management system rather than through the manual efforts of a system administrator. \u00a0Manual administration is effectively impossible and completely impractical under this model. \u00a0Individual systems are not unique as they are in the Snowflake model, and any divergence will create serious issues.<\/p>\n

The idea that has emerged from the cloud architecture world is one\u00a0that systems architecture should be defined centrally “in code” rather than on the servers themselves. \u00a0This sounds confusing at first but makes a lot of sense when we look at it more deeply. \u00a0In order to support this model a new type of systems management tool that has yet to take on a really standard name but is often called a systems automation tool, DevOps framework, IT automation tool or simply “infrastructure as code” tool has begun to emerge. \u00a0Common toolsets in this realm include Puppet, Chef, CFEngine and SaltStack.<\/p>\n

The idea behind these automation toolsets is that a central service is used to manage and control all systems. \u00a0This central authority manages individual servers by way of code-based descriptions of how the system should look and behave. \u00a0In the Chef world, these are called “recipes” to be cute but the analogy works well. \u00a0Each system’s code might include information such as a list of which packages and package versions should be installed, what system configurations should be modified and files to be copied to the box. \u00a0In many cases decisions about these deployments or modifications are handled through potentially complex logic and hence the need for actual code rather than something more simplistic such as markup or templates. \u00a0Systems are then grouped by role and managed as groups. \u00a0The “web server” role might tell a set of systems to install Apache and PHP and configure memory to swap very little. \u00a0The “SQL Server” role might install MS SQL Server and special backup tools only used for that application and configure memory to be tuned as desired for a pool of SQL Server machines. \u00a0These are just examples. \u00a0Typically an organization would have a great many roles, some may be generic such as “web server” and others much more specific to support very specific applications. \u00a0Roles can generally be layered, so a system might be both a “web server” and a “java server” getting the combined needs of both met.<\/p>\n
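To make the role mechanism concrete, here is a minimal, hypothetical sketch in Python of how layered roles might resolve into a single node definition. \u00a0The role names, packages and settings are illustrative assumptions, not the actual syntax of any real framework such as Puppet or Chef.<\/p>\n

```python
# Hypothetical role definitions: each role lists the packages it needs
# and the system settings it wants. All names here are illustrative.
ROLES = {
    'web_server': {
        'packages': ['apache2', 'php'],
        'sysctl': {'vm.swappiness': 10},   # swap very little
    },
    'java_server': {
        'packages': ['openjdk-8-jdk'],
        'sysctl': {},
    },
}

def resolve(role_names):
    # Layer roles: a node assigned several roles receives the union of
    # their packages and the merged system settings.
    packages, sysctl = set(), {}
    for name in role_names:
        spec = ROLES[name]
        packages.update(spec['packages'])
        sysctl.update(spec['sysctl'])
    return {'packages': sorted(packages), 'sysctl': sysctl}

# A node that is both a web server and a java server gets both sets.
node = resolve(['web_server', 'java_server'])
```

In a real tool the logic can branch on facts about the node such as its OS, region or environment, which is exactly why actual code is needed rather than simple markup or templates.<\/p>\n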

These standard definitions mean that systems, once designated as belonging to one role or another, can “build themselves” automatically. \u00a0A new system might be created by an administrator requesting a system or a capacity monitoring system might decide that additional capacity is needed for a role and spawn new server instances automatically without any human intervention whatsoever. \u00a0At the time that the system is requested, by a human or automatically, the role is designated and the system will, by way of the automation framework, transform itself into a fully configured and up to date “node.” \u00a0No human system administration intervention required. \u00a0The process is fast, simple and, most importantly, completely repeatable.<\/p>\n
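The reason the process is repeatable is that these tools converge a node toward a described state rather than replaying a script. \u00a0Here is a minimal sketch of that idea, with a plain Python set standing in for the operating system’s package manager, an assumption made purely for illustration:<\/p>\n

```python
def converge(desired, installed):
    # Compare desired state against actual state and apply only the
    # missing changes; running it again is a no-op (idempotent).
    to_install = [p for p in desired if p not in installed]
    for pkg in to_install:
        installed.add(pkg)   # a real tool would invoke the package manager
    return to_install

installed = {'openssh-server'}                     # what the node already has
first = converge(['apache2', 'php'], installed)    # installs both
second = converge(['apache2', 'php'], installed)   # nothing left to do
```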

Defining systems in code has some non-obvious consequences. \u00a0One is that backups of complete systems are no longer needed. \u00a0Why back up a system that you can recreate, with minimal effort, almost instantly? \u00a0Local data from database systems would still need to be backed up, but only the database data, not the entire system. \u00a0This can greatly reduce strain on backup infrastructures and make restore processes faster and more reliable.<\/p>\n

The amount of documentation needed for systems already defined in code is very minimal. \u00a0In Snowflake environments the system administrator needs to maintain documentation specific to every host and maintain that documentation manually, which is very time consuming and error prone. \u00a0Systems defined by way of central code need little to no documentation, and what documentation exists can be handled at the group level, not the individual node level.<\/p>\n

Testing systems that are defined in code is easy to do as well. \u00a0You can create a system via code, test it, and know that when you move that definition into production, the production system will be created exactly as it was created in testing. \u00a0In Snowflake environments it is very common to have testing practices that attempt to do this, but they rely on manual effort, are prone to being sloppy and not exactly repeatable, and very often politics will dictate that it is faster to mimic repeatability than to actually strive for it. \u00a0Code defined systems bypass these problems, making testing far more valuable.<\/p>\n

Outside of defining the number of nodes that should exist within each role, the system can reprovision an entire architecture, from scratch, automatically. \u00a0Rebuilding after a disaster or bringing up a secondary site can be done very quickly and easily. \u00a0Moving between locally hosted systems and remotely hosted systems, including those from companies like Amazon, Microsoft, IBM, Rackspace and others, also becomes extremely easy.<\/p>\n
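As a sketch of what full reprovisioning from role counts alone might look like, the following hypothetical Python rebuilds a whole fleet from nothing but a per-role node count; the provision function is a stand-in for a cloud API call and is purely illustrative.<\/p>\n

```python
def provision(role, index):
    # Stand-in for booting an instance and applying its role; a real
    # system would call a cloud provider's API here.
    return role + '-' + str(index)

def rebuild(desired_counts):
    # Recreate the entire architecture from the role definitions alone.
    fleet = []
    for role, count in sorted(desired_counts.items()):
        fleet.extend(provision(role, i) for i in range(count))
    return fleet

# Three web servers and two SQL Server nodes, from scratch.
fleet = rebuild({'web_server': 3, 'sql_server': 2})
```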

Of course, in the DevOps world there is great value in using cloud architectures to enable the most extreme level of automation, but using cloud architectures is not necessary to leverage these types of tools. \u00a0A code defined architecture could also be used partially, with manual administration alongside it for a hybrid approach, but this is rarely recommended on individual systems. \u00a0When both approaches are mandated, it is generally far better to have two environments, one managed as Snowflakes and one managed as DevOps; this makes a far better hybridization. \u00a0I have seen this work extremely well in an enterprise environment with scores of thousands of “Snowflake” servers, each very unique, alongside a dedicated environment of ten thousand nodes that was managed in a DevOps manner because all of the nodes were to be identical and interchangeable, using one of two possible configurations. \u00a0Hybridization was very effective.<\/p>\n

The DevOps approach, however, comes with major caveats as well. \u00a0The skill sets necessary to manage a system in this way are far greater than those needed for traditional systems administration: at a minimum, all traditional systems administration knowledge is still needed, plus solid programming knowledge, typically of modern languages like Python and Ruby, and knowledge of the specific frameworks in question as well. \u00a0This extended knowledge base requirement means that DevOps practitioners are not only rare but expensive too. \u00a0It also means that university education, already far short of preparing either systems administrators or developers for the professional world, is now farther still from preparing graduates to work under a DevOps model.<\/p>\n

System administrators working in each of these two camps have a tendency to see all systems as needing to fit into their own mold. New DevOps practitioners often believe that Snowflake systems are legacy and need to be updated. \u00a0Snowflake (traditional) admins tend to see the “infrastructure as code” movement as silly, filled with unnecessary overhead, overly complicated and very niche.<\/p>\n

The reality is that both approaches have a tremendous amount of merit and both are going to remain extremely viable. \u00a0Both make sense for very different workloads and large organizations, I suspect, will commonly see both in place via some form of hybridization. \u00a0In the SMB market where there are typically only a tiny number of servers, no scaling leverage to justify cloud architectures and a high disparity between systems, I suspect that DevOps will remain almost indefinitely outside of the norm as the overhead and additional skills necessary to make it function are impractical or even impossible to acquire. \u00a0Larger organizations have to look at their workloads. \u00a0Many traditional workloads and much of traditional software is not well suited to the DevOps approach, especially cloud automation, and will either require hybridization or an impractically high level of coding on a per system basis making the DevOps model impossible to justify. \u00a0But workloads built on web architectures or that can scale horizontally extremely well will benefit heavily from the DevOps model at scale. \u00a0This could apply to large enterprise companies or smaller companies likely producing hosted applications for external consumption.<\/p>\n

This difference in approach means that, in the United States for example, most of the country is made up of companies that will remain focused on the Snowflake management model, while some east coast companies could evaluate the DevOps model effectively and begin to move in that direction. \u00a0But on the west coast, where more modern architectures and a much larger focus on hosted applications and applications for external consumption are the driving economic factors,\u00a0DevOps is already moving from newcomer to mature, established normalcy. \u00a0DevOps and Snowflake approaches will likely remain heavily segregated by region in this way, just as IT in general sees different skill sets migrate to different regions. \u00a0It would not be surprising to see DevOps begin to take hold in markets such as Austin where traditional IT has performed rather poorly.<\/p>\n

Neither approach is better or worse, they are two different approaches servicing two very different ways of provisioning systems and two different fundamental needs of those systems. \u00a0With the rise of cloud architectures and the DevOps model, however, it is critically important that existing system administrators understand what the DevOps model means and when it applies so that they can correctly evaluate their own workloads and unique needs. \u00a0A large portion of the traditional Snowflake system administration world will be migrating, over time, to the DevOps model. \u00a0We are very far from reaching a steady state in the industry as to the balance of these two models.<\/p>\n

Originally published on the StorageCraft Blog.<\/p>\n","protected":false},"excerpt":{"rendered":"

One can hardly swing a proverbial cat in IT these days without hearing people talking about DevOps. \u00a0DevOps is the hot new topic in the industry picking up from where the talk of cloud left off and to hear people talk about it one might believe that traditional systems administration is already dead and buried. … Continue reading On DevOps and Snowflakes<\/span> →<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[132,85],"tags":[218,134],"class_list":["post-687","post","type-post","status-publish","format-standard","hentry","category-best-practices","category-career","tag-devops","tag-system-administration"],"_links":{"self":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts\/687","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/comments?post=687"}],"version-history":[{"count":6,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts\/687\/revisions"}],"predecessor-version":[{"id":712,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts\/687\/revisions\/712"}],"wp:attachment":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/media?parent=687"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/categories?post=687"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/tags?post=687"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}