In the late 1990s the standard rule of thumb for building a new server was to put the operating system onto its own small RAID 1 array and to place applications and data on a separate RAID 5 array. This was done for several reasons, many of which have swirled away from us, lost in the sands of time. The main driving factors were that storage capacity was extremely expensive, disks were small, filesystems corrupted regularly and physical hard drives failed far more often than any other component. People were driven by the need to protect against physical hard drive failure, protect against filesystem corruption and acquire enough capacity to meet their needs.
Today the storage landscape has changed. Filesystems are incredibly robust and corruption originating in the filesystem itself is almost unheard of; thanks to technologies like journalling, what corruption does occur can almost always be corrected quickly and effectively, protecting end users from data loss. Almost no one worries about filesystem corruption today.
Modern filesystems are also able to handle far more capacity than they could previously. In the late 1990s and early 2000s it was not uncommon to be able to easily build a drive array larger than any single filesystem could address. Today that is rarely the case, as all common filesystems handle many terabytes at a minimum and often petabytes, exabytes or more of data.
Hard drives are much more reliable than they were in the late 1990s. Complete drive failures are rare, even in less expensive drives. So rare, in fact, that the loss of an entire RAID array is now driven primarily by factors other than the failure of individual hard drives. We no longer replace hard drives with wild abandon. It is not unheard of for large arrays to run their entire lifespans without losing a single drive.
Capacities have scaled dramatically. Instead of 4.3GB hard drives we are installing 3TB drives, roughly seven hundred times the capacity on a single spindle compared to less than fifteen years ago.
These factors come together to create a need for a dramatically different approach to server storage design and a change to the “rule of thumb” about where to start when designing storage.
The old approach can be written RAID 1 + RAID 5. The RAID 1 space was used for the operating system while the RAID 5 space, presumably much larger, was used for data and applications. This design split the two storage concerns, putting maximum effort into protecting the operating system (which was very hard to recover after a disaster and on which the data relied for accessibility) by placing it on highly reliable RAID 1. Lower cost RAID 5, while somewhat riskier, was typically chosen for data because storing data on RAID 1 was too expensive in most cases. It was a tradeoff that made sense at the time.
Today, with our very different concerns, a new approach is needed, and this new approach is known as “One Big RAID 10” – meaning a single, large RAID 10 array with operating system, applications and data all stored together. Of course, this is just what we say to keep things handy; in a system whose performance and capacity needs do not exceed a single mirrored pair we would say “One Big RAID 1”, but since many people include RAID 1 in the RAID 10 family it is easier to simply say the former.
To be even handier, we abbreviate this to OBR10.
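To make the idea concrete, here is a minimal sketch of what OBR10 looks like when built with Linux software RAID and LVM rather than a hardware controller. The device names, drive count and volume group name are purely hypothetical assumptions; on a hardware RAID card the same single-array layout would be created in the controller’s configuration utility or the OS installer instead.

```python
# Minimal OBR10 sketch using Linux software RAID (mdadm) and LVM.
# All device names, drive counts and labels here are hypothetical examples.
import subprocess

def run(cmd):
    """Echo and execute a command, stopping on any failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One big RAID 10 array across all six (hypothetical) drives in the server.
run(["mdadm", "--create", "/dev/md0", "--level=10", "--raid-devices=6",
     "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf", "/dev/sdg"])

# One pool of storage on top of the single array; operating system,
# applications and data are all carved from it, not from separate arrays.
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "vg_obr10", "/dev/md0"])
```

The point of the sketch is simply that there is one array and one pool of storage behind everything on the server.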
Because the cost of storage has dropped considerably and, instead of being at a premium, is typically abundant today; because filesystems are incredibly reliable; because RAID 1 and RAID 10 share the same performance characteristics; and because array failures triggered by something other than a disk failure have moved from background noise to a primary cause of data loss, moving to RAID 10 and eliminating array splitting has become the new standard approach.
With RAID 10 we now have the highly available and resilient storage previously reserved for the operating system available to all of our data. We get the performance of mirrored RAID plus the benefit of extra spindles behind all of our data, along with better drive capacity utilization and the performance that comes with it.
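A rough back-of-the-envelope comparison shows why. The sketch below assumes six identical 3TB drives at roughly 150 IOPS each and the textbook write penalties (2 for mirroring, 4 for RAID 5); the exact figures are illustrative assumptions, but the shape of the result is what matters.

```python
# Back-of-the-envelope comparison: RAID 1 + RAID 5 split vs. One Big RAID 10.
# Drive size and per-spindle IOPS are illustrative assumptions only.
SIZE_TB = 3.0
IOPS = 150          # rough random IOPS per spindle

def raid10(n):
    """Usable TB, read IOPS, write IOPS for an n-drive RAID 10 (write penalty 2)."""
    return n / 2 * SIZE_TB, n * IOPS, n * IOPS / 2

def raid5(n):
    """Usable TB, read IOPS, write IOPS for an n-drive RAID 5 (write penalty 4)."""
    return (n - 1) * SIZE_TB, n * IOPS, n * IOPS / 4

# Old layout: 2-drive RAID 1 for the OS, 4-drive RAID 5 for the data.
os_cap, os_r, os_w = raid10(2)        # RAID 1 behaves like a 2-drive RAID 10
data_cap, data_r, data_w = raid5(4)

# New layout: all six drives in one RAID 10 shared by OS, apps and data.
obr_cap, obr_r, obr_w = raid10(6)

print(f"Split : data {data_cap:.0f} TB, {data_r:.0f} read / {data_w:.0f} write IOPS "
      f"(+{os_cap:.0f} TB mostly idle on the OS mirror)")
print(f"OBR10 : {obr_cap:.0f} TB, {obr_r:.0f} read / {obr_w:.0f} write IOPS "
      f"shared by everything")
```

With these illustrative numbers the capacity available to data is essentially the same either way (the operating system takes only a small slice of the shared nine terabytes, rather than a whole three-terabyte mirror dedicated to it), but OBR10 puts every spindle behind every workload, delivering roughly 900 read and 450 write IOPS instead of 600 and 150.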
Even the traditional practice of splitting off database log files (the infamous RAID 1 + RAID 5 + RAID 1 approach) is no longer needed, because RAID 10 maintains optimum performance characteristics across all of the data. With RAID 10 we eliminate almost all of the factors that once caused us to split arrays.
The only significant factor not yet mentioned for which split arrays were traditionally seen as beneficial is access contention – different processes needing access to different parts of the disk at the same time, causing the drive heads to move in a less than ideal pattern and reducing drive performance. Contention was a big deal in the late 1990s when the old rule of thumb was developed.
Today, drive contention still exists but has been heavily mitigated by the use of large RAID caches. In the late 1990s drive caches were a few megabytes at best and often non-existent. Today 256MB is a tiny cache and average servers are deployed with 1-2GB of cache on the RAID card alone. Some systems are beginning to integrate solid state drives as a secondary cache beyond the memory cache on the controller. These can easily add hundreds of gigabytes of extremely high speed cache that shield nearly any spindle operation from contention. So contention has not disappeared, but it has been addressed in other ways over the years and, like the other technology changes, this has effectively freed us from the traditional concerns that required us to split arrays.
Beyond drive contention, another, far less common reason for splitting arrays in the late 1990s was communications bus performance, a consequence of the limitations of the era’s parallel SCSI and ATA technologies. These limitations, too, have been eliminated with the move to serial communications mechanisms, SAS and SATA, in modern arrays. We are no longer limited to the capacity of a single bus for each array and can grow much larger, with much more flexibility, than before. Bus contention has been all but eliminated.
If there is a need to wall off space for protection, such as guarding against runaway log file growth, this can be achieved through partitioning rather than through physically splitting the array. In general you will want to minimize partitioning, as it increases overhead and lowers the ability of the drives to tune themselves, but there are cases where it is the better approach. Either way, it does not require that the underlying physical storage be split as it traditionally was. Even better than partitioning, when available, is logical volume management, which provides partition-like separation without the limitations of partitions.
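As a small illustration of that last point, the sketch below carves a size-capped logical volume for logs out of a volume group sitting on the one big array. The volume group name, size, filesystem and mount point are hypothetical, but the effect is the same protection against runaway log growth that a separate physical array used to provide, without giving up any spindles.

```python
# Sketch: capping log growth with a logical volume instead of a separate array.
# Volume group, size, filesystem and mount point are hypothetical examples.
import subprocess

def run(cmd):
    """Echo and execute a command, stopping on any failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# A fixed-size logical volume means runaway logs can fill only this volume,
# never the space holding the operating system or the application data.
run(["lvcreate", "-L", "50G", "-n", "lv_logs", "vg_obr10"])
run(["mkfs.xfs", "/dev/vg_obr10/lv_logs"])
run(["mount", "/dev/vg_obr10/lv_logs", "/var/log/app"])
```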
So at the end of the day, the new rule of thumb for server storage is “One Big RAID 10.” No more RAID 5, no more array splitting. It’s about reliability, performance, ease of management and moderate cost effectiveness. Like all rules of thumb, this does not apply to every single instance, but it applies far more broadly than the old standard ever did. RAID 1 + RAID 5, as a standard, was always an attempt to “make do” with something undesirable and to make the best of a bad situation. OBR10 is not like that. The new standard is a desired standard – it is how we actually want to run, not something with which we have been “stuck”.
When designing storage for a new server, start with OBR10 and only move away from it when it specifically does not meet your technology needs. You should never have to justify using OBR10, only justify not using it.