{"id":257,"date":"2012-05-15T17:23:08","date_gmt":"2012-05-15T22:23:08","guid":{"rendered":"http:\/\/www.smbitjournal.com\/?p=257"},"modified":"2012-06-25T07:57:11","modified_gmt":"2012-06-25T12:57:11","slug":"when-no-redundancy-is-more-reliable","status":"publish","type":"post","link":"https:\/\/smbitjournal.com\/2012\/05\/when-no-redundancy-is-more-reliable\/","title":{"rendered":"When No Redundancy Is More Reliable – The Myth of Redundancy"},"content":{"rendered":"
Risk is a difficult concept and it requires a lot of training, thought and analysis to properly assess a given scenario. Often, because risk assessments are so difficult, we substitute simply adding basic redundancy for real risk analysis and assume that we have appropriately mitigated risk. But very often this is not the case. Adding redundancy often introduces complexity and additional failure modes, and these new forms of failure have the potential to add more risk than the added redundancy removes. Storage systems are especially prone to this kind of decision making, which is unfortunate as few, if any, systems are as susceptible to failure or as important to protect.
RAID is a great example of where a lack of holistic risk thinking can lead to some strange decision making. If we look at a not uncommon scenario we will see how the goal of protecting against drive failure can actually lead to an increase in risk even when additional redundancy is applied. In this scenario we will look at a single array of twelve three terabyte SATA hard drives. It is not uncommon to hear of people choosing RAID 5 for this scenario to get “maximum capacity and performance” while having “adequate protection against failure.”
The idea here is that RAID 5 protects against the loss of a single drive, which can be replaced so that the array rebuilds itself before a second drive fails. That is great in theory, but the real risks of an array of this size, thirty six terabytes of raw drive capacity, come not from multiple drive failures as people generally suspect but from an inability to reliably rebuild the array after a single drive failure, or from a failure of the array itself with no individual drive failing. The risk of a second drive failing is low, not non-existent, but quite low. Drives today are highly reliable. Once one drive fails the likelihood of a second drive failing does increase, which is well documented, but I don’t want that risk to distract us from the true risk – the risk of a failed resilvering operation.
What scares us during a RAID 5 resilver operation is that an unrecoverable read error (URE) can occur. When it does, the resilver operation halts and the array is left in a useless state – all data on the array is lost. On common SATA drives the URE rate is one error per 10^14 bits read, or roughly once every twelve terabytes of reads. That means that a six terabyte array being resilvered has a roughly fifty percent chance of hitting a URE and failing. A fifty percent chance of failure is insanely high. Imagine if your car had a fifty percent chance of the wheels falling off every time that you drove it. So with a small (by today’s standards) six terabyte RAID 5 array using 10^14 URE SATA drives, if we were to lose a single drive, we have only about a fifty percent chance that the array will recover, assuming the drive is replaced immediately. That doesn’t include the risk of a second drive failing, only the risk of a URE failure. It also assumes that the drives are completely idle other than the resilver operation. If the drives are busily being used for other tasks at the same time then the chances of something bad happening, either a URE or a second drive failure, begin to increase dramatically.
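To put rough numbers on this, here is a minimal back-of-the-envelope sketch in Python, purely illustrative. It assumes the spec sheet figure of one URE per 10^14 bits read and treats every bit read during the resilver as an independent chance of error. The fifty percent figure above comes from the simpler ratio of data read to the URE interval; this independent-bit model comes out somewhat lower, but tells the same story, including for the larger arrays discussed below.

```python
# Rough odds of hitting an unrecoverable read error (URE) during a resilver.
# Assumption: a constant, independent error rate of one URE per 1e14 bits read,
# the figure quoted on typical consumer SATA spec sheets.

def ure_probability(terabytes_read, bits_per_ure=1e14):
    """Chance of at least one URE while reading the given amount of data."""
    bits_read = terabytes_read * 8e12        # decimal terabytes -> bits
    p_clean_bit = 1.0 - 1.0 / bits_per_ure   # chance a single bit reads cleanly
    return 1.0 - p_clean_bit ** bits_read    # complement of "every bit was clean"

# Data read during a resilver: every surviving drive in the array, end to end.
for terabytes in (6, 12, 33):
    print(f"{terabytes:>3} TB read during rebuild -> "
          f"~{ure_probability(terabytes):.0%} chance of a fatal URE")
```

However you model it, a rebuild that has to read tens of terabytes from consumer SATA drives is more likely to die than to finish.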
With a twelve terabyte array the chance of complete data loss during a resilver operation begins to approach one hundred percent – meaning that RAID 5 provides effectively no protection at all in that case. There is always a chance of survival, but it is very low. At six terabytes you can compare a resilver operation to a game of Russian roulette with one bullet, six chambers and having to pull the trigger three times. With twelve terabytes you have to pull it six times! Those are not good odds.
But we are not talking about a twelve terabyte array. We are talking about a thirty six terabyte array – which sounds large, but this is a size that someone could easily have at home today, let alone in a business. Every major server manufacturer, as well as nearly all low cost storage vendors, makes sub $10,000 storage systems in this capacity range today. Resilvering a thirty six terabyte RAID 5 array after a single drive failure is like playing Russian roulette with one bullet and six chambers and pulling the trigger eighteen times! Your data doesn’t stand much of a chance. Add to that the incredible amount of time needed to resilver an array of that size and the risk of a second disk failing during that resilver window starts to become a rather significant threat. I’ve seen estimates of resilver times climbing into weeks or months on some systems. That is a long time to run without being able to lose another drive. When we are talking hours or days the risks are pretty low, but still present. When we are talking weeks or months of continuous abuse, as resilver operations are extremely drive intensive, the failure rates climb dramatically.
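For a feel of how long that resilver window can get, here is another rough sketch. The 3 TB member size matches the scenario above, but the rebuild rates are illustrative assumptions only; real resilver throughput varies enormously with the controller and with how busy the array is.

```python
# Rough resilver window: the array has to reconstruct the full 3 TB of the
# failed member, and the effective rebuild rate collapses when the array is
# also serving its normal workload.  All rates below are illustrative assumptions.

def rebuild_days(drive_tb, effective_mb_per_s):
    seconds = drive_tb * 1_000_000 / effective_mb_per_s   # TB -> MB, then seconds
    return seconds / 86_400                               # seconds -> days

for rate, label in ((100, "idle array"), (10, "moderately busy"), (1, "heavily loaded")):
    print(f"{label:>16} at {rate:>3} MB/s -> ~{rebuild_days(3, rate):.1f} days to resilver")
```

Even under these made up numbers the picture is clear: an idle array rebuilds in hours, a busy one in days, and a heavily loaded one takes weeks or more.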
With an array of this size we can effectively assume that the loss of a single drive means the loss of the complete array, leaving us with no drive failure protection at all. Now consider an array of the same or better capacity and performance built with RAID 0, which also has no protection against drive loss: we need only eleven of the same drives instead of the twelve we needed for our RAID 5 array. What this means is that instead of twelve hard drives, each of which has a roughly three percent chance of annual failure, we have only eleven. That alone makes our RAID 0 array more reliable, as there are fewer drives to fail. Not only do we have fewer drives, but there is no need to write parity blocks, nor to skip over them when reading back, lowering, ever so slightly, the mechanical wear and tear on the RAID 0 array for the same utilization and giving it a very slight additional reliability edge. The RAID 0 array of eleven drives will be identical in capacity to the twelve drive RAID 5 array but will have slightly better throughput and latency. A win all around. Plus the cost savings of not needing an additional drive.
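The drive count argument is easy to sanity check with the same sort of arithmetic; the three percent annual failure rate is simply the figure used above, taken as an input and assumed to be independent per drive.

```python
# Chance that at least one member drive fails within a year, for arrays of
# eleven versus twelve drives, assuming independent failures at a 3% annual
# failure rate per drive (the figure used above).

def any_drive_failure(drive_count, annual_failure_rate=0.03):
    return 1.0 - (1.0 - annual_failure_rate) ** drive_count

for drives in (11, 12):
    print(f"{drives} drives -> ~{any_drive_failure(drives):.1%} chance of losing "
          f"at least one drive this year")
```

The eleven drive stripe is, of course, only very slightly less likely to lose a drive in a given year; the real difference is what happens next: any loss is fatal for RAID 0, while for RAID 5 at this capacity the loss simply starts a rebuild that, as the URE sketch above suggests, will usually fail anyway.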
So what we see here is that in large arrays (large in capacity, not in spindle count) RAID 0 actually surpasses RAID 5 in reliability in certain scenarios. When using common SATA drives this happens at capacities experienced even by power users at home and by many small businesses. If we move to enterprise SATA drives or SAS drives then the capacity at which this occurs becomes very high and is not a concern today, but it will be in just a few years when drive capacities get larger still. But this highlights how dangerous RAID 5 is at the sizes that we see today. Everyone understands the incredible risks of RAID 0, but it can be difficult to put into perspective that RAID 5’s issues are so extreme that it might actually be less reliable than RAID 0.
That RAID 5 might be less reliable than RAID 0 in an array of this size based on resilver operations alone is just the beginning. In a massive array like this the resilver can take so long, and exact such a toll on the drives, that a second drive failure starts to become a measurable risk as well. And then there are the additional risks of array controller errors, which can trigger resilver algorithms and destroy an entire array even when no drive has failed. As RAID 0 (and RAID 1 and RAID 10) have no parity resilver algorithms, they do not suffer this additional risk. These are hard risks to quantify, but what is important is that they are additional risks that accumulate when using a more complex system, when a simpler system, without the redundancy, was more reliable from the outset.
Now that we have established that RAID 5 can be less reliable than RAID 0, I will point out the obvious dangers of RAID 0. RAID in general is used to mitigate the risk of a single, lone hard drive failing. We all fear a single drive simply failing and all data being lost. RAID 0, being a large stripe of drives without any form of redundancy, takes the risk of a single drive failing and multiplies it across a number of drives, where any one drive failing causes the total loss of data on all of them. So in our eleven disk example above, if any of the eleven disks fails, all is lost. It is easy to see that this is dramatically more dangerous than just using a single drive, all alone.
What I am trying to point out here is that redundancy does not mean reliability. The fact that something is redundant, like RAID 5, provides no guarantee that it will always be more reliable than something that is not redundant.
My favourite analogy here is to look at houses in a tornado. In one scenario we build a single house of brick and mortar. In the second scenario we build two redundant houses, each out of straw (our builders are pigs, apparently.) When the tornado (or big bad wolf) comes along, which is more likely to leave us with a standing house? Clearly one brick and mortar house has some significant reliability advantages over redundant straw houses. Redundancy didn’t matter; reliability mattered in the end.
Redundancy is often misleading because it is easy to quantify but hard to qualify. Redundancy is a black or white question: is it redundant? Yes or no. Simple. Reliability is not so simple. Reliability is about failure rates and likelihoods. It is about statistics and analysis. As it is hard to quantify reliability in a meaningful way, especially when selling a project to the business people, redundancy often becomes a simple substitute for this complex concept.
The concept of using redundancy to misdirect questions of reliability also ends up applying to subsystems in very convoluted ways. Instead of making a “system” redundant, it has become common to make a highly reliable, low cost subsystem redundant and then treat subsystem redundancy as applying to the whole system. The most common example of this is RAID controllers in SAN products. Rather than having a redundant SAN (meaning two SANs), manufacturers will often take a single component that is not normally made redundant in ordinary servers, make that one component redundant, and then call the SAN itself redundant – meaning a SAN that contains redundancy, which is not at all the same thing.
A good analogy here would be to compare having redundant cars, meaning two complete, working cars, with having a single car that carries a spare water pump in the trunk in case the main one fails. Clearly, a spare water pump is not a bad thing. But it is also a trivial amount of protection against car failure compared to having a second car ready to go. In one case the entire system is redundant, including the chassis. In the other we are making just one, highly reliable component redundant inside the chassis. It’s not even on par with having a spare tire which, at least, is a car component with a higher likelihood of failure.
Just like the myth of RAID 5 reliability and the system versus subsystem confusion, shared storage technologies like SAN and NAS often get treated the same way, especially in regards to virtualization. There is a common scenario where a virtualization project is undertaken and people instinctively panic because a single virtualization host represents a single point of failure where, if it fails, many systems will all fail at once.
Using the term “single point of failure” causes a feeling of panic and is a great means of steering a conversation. But a SPOF, as we like to call it, while something we like to remove when possible, may not be the end of the world. Think about our brick house. It is a SPOF. Our two houses of straw are not. Yet a single breeze takes out our redundant solution faster than our reliable SPOF. Looking for SPOFs is a great way to find points of fragility in a system, but do not feel that every SPOF must be made redundant in every scenario. Most businesses will find their best value having many SPOFs in place. Our real goal is reliability at an appropriate cost; redundancy, as we have seen, is no substitute for reliability, it is simply a tool that we can use to achieve it.
The theory that many people follow when virtualizing is that they take their virtualization host and say “This host is a SPOF, so I need to have two of them and use High Availability features to allow for transparent failover!” This is spurred on by the leading virtualization vendor, which makes its money firstly by selling expensive HA add-on products and secondly by being owned by a large storage vendor, so selling unnecessary or even dangerous additional shared storage is a big monetary win for them and could easily be the reason that they have championed the virtualization space from the beginning. Redundant virtualization hosts with shared storage sound great but can be extremely misguided for several reasons.

The first reason is that in removing the initial SPOF, the virtualization host, we replace it with a new SPOF, the shared storage. This accomplishes nothing. Assuming that we are using comparable quality servers and shared storage, all we’ve done is move where the risk is, not change how big it is. The likelihood of the storage system failing is roughly equal to the likelihood of the original server failing. But in addition to shuffling the SPOF around like in a shell game we’ve also done something far, far worse – we have introduced chained, or cascading, failure dependencies.

In our original scenario we had a single server. If the server stayed working we were good; if it failed we were not. Simple. Now we have two virtualization hosts, a single storage server (SAN, NAS, whatever) and a network connecting them together. We have already determined that the risk of the shared storage failing is approximately equal to our total system risk in the original scenario. But now we have the additional dependencies of the network and the two front end virtualization nodes. Each of these components is more reliable than the fragile shared storage (anything with mechanical drives is going to be fragile), but that they are lower risk is not the issue; the issue is that the risks are combinatorial, as the sketch below illustrates.

If any of these three components (storage, network or the front end nodes) fails then everything fails. The solution to this is to make the shared storage redundant on its own and to make the network redundant on its own. With enough work we can overcome the fragility and risk that we introduced by adding shared storage, but the shared storage on its own is not a form of risk mitigation but is a risk itself which must be mitigated. The spiral of complexity begins, and the cost of bringing this new system up on par with the reliability of the original, single server system can be astronomic.
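A crude way to see the combinatorial effect is to treat every component the virtual machines depend on as a link in a chain and multiply availabilities. The figures below are illustrative assumptions for comparable quality gear, not measurements, and the sketch ignores the cost and complexity angles entirely.

```python
# Series availability: if every component is required for the virtual machines
# to run, the chain is only as available as the product of its parts.
# All percentages here are illustrative assumptions, not vendor figures.

def chain_availability(*components):
    total = 1.0
    for availability in components:
        total *= availability
    return total

single_host = 0.999                                    # one standalone server

paired_hosts = 1 - (1 - 0.999) ** 2                    # either host can run the VMs
shared_setup = chain_availability(paired_hosts,
                                  0.999,               # shared storage (the new SPOF)
                                  0.9999)              # storage network
print(f"single server:             {single_host:.4%}")
print(f"two hosts + SAN + network: {shared_setup:.4%}")
```

Under these assumed numbers the “redundant” design actually comes out slightly less available than the single box, because the new mandatory components, the shared storage and the storage network, cost more availability than the paired hosts give back.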
Now that we have all of this redundancy we have one more risk to worry about. Managing all of this redundancy, all of these moving parts, requires far more knowledge, skill and preparation than managing a simple, single server. We have moved from a simple solution to a very complex one. In my own anecdotal experience the real dangers of solutions like this come not from the hardware failing but from human error. Not only has little been done to keep human error from taking this new system down, but we’ve added countless points where a human might accidentally bring the entire system, redundancy and all, right down. I’ve seen it first hand; I’ve heard the horror stories. The more complex the system, the more likely a human is to accidentally break everything.

It is critical that as IT professionals we step back, look at complete systems, consider reliability and risk, and think of redundancy simply as a tool to use in the pursuit of reliability. Redundancy itself is not a panacea. Neither is simplicity. Reliability is a complex problem to tackle. Avoiding simplistic replacements is an important first step in moving from covering up reliability issues to facing and solving them.