{"id":423,"date":"2012-12-22T13:49:20","date_gmt":"2012-12-22T18:49:20","guid":{"rendered":"http:\/\/www.smbitjournal.com\/?p=423"},"modified":"2017-02-18T12:27:35","modified_gmt":"2017-02-18T17:27:35","slug":"the-history-of-array-splitting","status":"publish","type":"post","link":"https:\/\/smbitjournal.com\/2012\/12\/the-history-of-array-splitting\/","title":{"rendered":"The History of Array Splitting"},"content":{"rendered":"

Much of the rote knowledge of the IT field, especially in the SMB space, arose in the very late 1990s from a combination of factors. The biggest were that smaller and smaller businesses were suddenly rushing to computerize, Microsoft had gotten Windows NT 4 stable enough to serve as a standard base for all SMB IT, the Internet era had finally taken hold, and Microsoft had introduced certification and training programs that reshaped how knowledge was disseminated in the industry. Put together, this created a need for new training and best practices and caused a massive burst of new thinking, writing, documentation, training, best practices, rules of thumb and so on.

For a few years nearly the entire field was trained on the same small knowledge set. Many rules of thumb became de facto standards, and much of the knowledge of the time was learned by rote and passed from mentor to intern in a cycle that carried the technical knowledge of 1998 into the unquestioned, set-in-stone processes of 2012. At the time this was effective because the practices were relevant, but that was fifteen years ago; technology, economics, use cases and knowledge have all changed significantly since then.

One of the best examples of this was the famous Microsoft SQL Server recommendation of RAID 1 for the operating system, RAID 5 for the database files and another RAID 1 for the logs. This setup has endured for nearly the entire life of the product and was so well promoted that it has spread into almost all aspects of server design in the SMB space. The use of RAID 1 for the operating system and RAID 5 for data is so pervasive that it is often simply assumed, without any consideration of why it was recommended at the time.

Let’s investigate the history and see why RAID 1/5/1 was good in 1998 and why it should not exist today. Keep some perspective in mind: the gap between when these recommendations first came out (as early as 1995) and today is immense. Go back, mentally, to 1995 and think about the equivalent gap at that time. It would have been like basing recommendations for the early Internet age on the home computing needs of the first round of Apple ][ owners! The 8-bit home computer era was just barely getting started in 1978. Commodore was still two years away from releasing their first home computer (the VIC-20) and would go through the entire Commodore and Commodore Amiga eras, go bankrupt and vanish, all before 1995. The Apple ][+ was still a year away. Analogue cassette drives were still a common form of storage. COBOL and Fortran were the only serious business languages in use. Basically, the gap is incredible. Things change.

First, we need to look at the factors that existed in the late 1990s that created the need for our historic setup.

    \n
1. Drives were small, very small. A large database array might have been four 2.1GB SCSI drives in a RAID 5 array for just ~6GB of usable storage on a single array. The failure domain for a parity RAID failure was tiny compared to things like URE failure rates (see the worked example after this list).
2. Drive connection technologies were parallel and slow. The hard drives of the time were only slightly slower than drives are today, but the connection technologies represented a considerable bottleneck. It was common to split traffic across buses to reduce bottlenecks.
3. SCSI was the only drive technology used in servers. The use of PATA (called IDE at the time) in a server was unthinkable.
4. Drives were expensive per gigabyte, so saving cost while maintaining capacity was the key issue for effectively all businesses.
5. Filesystems were fragile and failed more often than drives.
6. Hardware RAID was required and only the basic RAID levels 1 and 5 were commonly available. RAID 6 and RAID 10 were years away from being accessible to most businesses. RAID 0 is discounted as it has no redundancy.
7. Storage systems were rarely, if ever, shared between servers, so access was almost always dedicated to a single request queue.
8. Storage caches were tiny or nonexistent, so drive access limitations passed directly through to the operating system. This meant having different arrays with different characteristics to handle different read/write or random/sequential access mixes.
9. Drive failure was common and the principal concern of storage system design.
10. Drive array size was often limited by physical constraints, so array splitting decisions were frequently made out of necessity, not choice.
11. A combination of the above factors meant that RAID 1 was best for parts of the system where small size was acceptable and access was highly sequential or write heavy, while RAID 5 was best where capacity outweighed reliability and access was highly random and read heavy.
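To put item 1 in perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not part of the original recommendations). It assumes a hypothetical URE rate of one error per 10^14 bits read and the usual simplification that a single URE during a RAID 5 rebuild dooms the array:

```python
# Rough model: during a RAID 5 rebuild every surviving drive is read in
# full; the rebuild fails if any of those reads hits an unrecoverable
# read error (URE). The URE rate below is an assumed spec-sheet figure.

def rebuild_ure_probability(drive_bytes, drive_count, ure_per_bit=1e-14):
    """Probability of at least one URE while reading all surviving drives."""
    bits_read = (drive_count - 1) * drive_bytes * 8
    return 1 - (1 - ure_per_bit) ** bits_read

# A typical late-1990s array: four 2.1 GB SCSI drives in RAID 5.
drive_bytes = 2.1e9
usable_gb = (4 - 1) * drive_bytes / 1e9      # parity costs one drive: ~6.3 GB usable
risk = rebuild_ure_probability(drive_bytes, 4)

print(f"Usable capacity: {usable_gb:.1f} GB")
print(f"Rebuild URE risk: {risk:.3%}")       # roughly 0.05% -- a tiny failure domain
```

With drives that small, the entire array could be re-read during a rebuild with only a fraction of a percent chance of tripping over a URE, which is why parity RAID looked safe.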

In the nearly two decades since the original recommendations were released, all of these factors have changed. In some cases the changes are cascading ones: the move from general-use RAID 5 to general-use RAID 10 has caused what are now the two common array types, RAID 1 and RAID 10, to share access characteristics, so the need or desire to choose one or the other depending on load type is gone.

      \n
1. Drives are now massive. Rather than struggling to squeeze what we need onto them, we generally have excess capacity. Single drives over a terabyte are common, even in servers. Failure domains for parity are massive compared to things like URE failure rates (see the worked comparison after this list).
2. Drive connections are serial and fast. The drive connections are no longer a bottleneck.
3. SATA is now common in servers, skewing potential URE risks in a way that did not exist previously.
4. Capacity is now cheap; performance and reliability are the key concerns for dollars spent.
5. Filesystems are highly robust today and filesystem failures are “background noise” in the greater picture of array reliability.
6. Hardware RAID and software RAID are both options today, and many RAID levels are available; most importantly, RAID 10 is available ubiquitously.
7. Storage systems are commonly shared, making sequential access even less common.
8. Storage caches are common and often very large. 512MB and 1GB caches are considered normal today, meaning many arrays from 1995 would fit entirely into the memory of a RAID controller today. With caches growing rapidly relative to storage capacity, and with the recent addition of solid state drives as L2 cache in storage systems over the last two years, it is not out of the question for even a small business to have databases and other performance-sensitive applications running completely from cache.
9. Drive failure is uncommon and of trivial concern to storage system design (compared to other failure types).
10. Drive array size is rarely limited by physical constraints.
11. The use of RAID 1 and RAID 10 as the principal array types today means that there is no benefit to using different array levels for performance tuning.
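Running the same hedged model from the earlier sketch against a hypothetical modern array shows how the arithmetic behind items 1 and 3 has flipped (the drive size, drive count and URE rate below are illustrative assumptions, not figures from this article):

```python
# Same back-of-the-envelope model as before, comparing the late-1990s
# array with an assumed modern one: six 2 TB SATA drives in RAID 5 at a
# consumer-class URE rate of one error per 10^14 bits read.

def rebuild_ure_probability(drive_bytes, drive_count, ure_per_bit=1e-14):
    bits_read = (drive_count - 1) * drive_bytes * 8
    return 1 - (1 - ure_per_bit) ** bits_read

old_risk = rebuild_ure_probability(2.1e9, 4)   # 1998: four 2.1 GB SCSI drives
new_risk = rebuild_ure_probability(2e12, 6)    # today: six 2 TB SATA drives

print(f"Late-1990s RAID 5 rebuild URE risk: {old_risk:.2%}")   # ~0.05%
print(f"Modern RAID 5 rebuild URE risk:     {new_risk:.2%}")   # ~55%
```

Under these assumptions the chance of a rebuild hitting a URE goes from a rounding error to worse than a coin flip, which is a large part of why parity RAID is no longer the default choice.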

These factors highlight why the split array system of 1995 made perfect sense at the time and why it does not make sense today. OBR10 (one big RAID 10), today’s standard, was generally unavailable at the time and cost prohibitive where it did exist. RAID 5 was relatively safe in 1995, but not today. Nearly every factor involved in the decision process has changed dramatically in the last seventeen years and is going to continue to change as SSDs become more common, along with auto-tiering, even larger caches and pure SSD storage systems.

The change in storage design over the last two decades also highlights a danger that IT faces: a large portion of the field learns, as is common in engineering, basic “rules of thumb” or “best practices” without necessarily understanding the underlying principles that drive those decisions. That makes it difficult to know when not to apply those best practices or, even more importantly, when to recognize that a rule no longer applies at all. Unlike traditional mechanical or civil engineering, where new advances and significant changes in the underlying factors may occur once or possibly never over the course of a career, IT still changes fast enough that complete rethinks of basic rules of thumb are required several times through a career. Maybe not annually, but once per decade or more is almost always necessary.

The current move from uniprocessing to multithreaded architectures is another similarly significant change, one that requires the IT field to completely rethink how system design is handled.
