Category Archives: Storage

Comparing SAN and NAS

One of the greatest confusions that I have seen in recent years is that between NAS and SAN.  Understanding what each is will go a long way towards understanding where they are useful and appropriate.

Our first task is to strip away the marketing terms and move on to technical ones.  NAS stands for Network Attached Storage but doesn’t mean exactly that, and SAN stands for Storage Area Network but is generally used to refer to a SAN device, not the network itself.  In its most proper form, a SAN is any network dedicated to storage traffic, but in the real world that is not how the term is normally used.  Since we are here to talk about NAS and SAN devices and how they compare, we will not use the definition that refers to the network rather than the device.  In reality, both NAS and SAN are marketing terms and are a bit soft around the edges because of it.  They are precise enough for a normal technical conversation, as long as all parties know what they mean, but when discussing their meaning we should strip away the cool-sounding names and stick to the most technical descriptions.  Both terms, as used in marketing, imply a certain technology that has been “appliancized,” which makes the terms unnecessarily complicated but no more useful.

So our first task is to define what these two names mean in a device context.  Both devices are storage servers, plain and simple, just two different ways of exposing that storage to the outside world.

The simpler of the two is the SAN, which is properly a block storage device.  Any device that exposes its storage externally as a block device falls into this category, and such devices are interchangeable; the name changes only based on how the device is used.  The block storage devices are external hard drives, DAS (Direct Attach Storage) and SAN.  All of these are actually the same thing.  We call it an external hard drive when we attach it to a desktop.  We call it a DAS when we attach it to a server.  We call it a SAN when we add some form of networking, generally a switch, between the device and the final device that is consuming the storage.  There is no technological difference between these devices.  A traditional SAN can be directly attached to a desktop and used like an external hard drive.  An external hard drive can be hooked to a switch and used by multiple devices on a network.  The interface between the storage device and the system using it is the block.  Common protocols for block storage include iSCSI, Fibre Channel, SAS, eSATA, USB, Thunderbolt, IEEE1394 (aka Firewire), Fibre Channel over Ethernet (FCoE) and ATA over Ethernet (AoE.)  A device attaching to a block storage device will always see the storage presented as a disk drive, nothing more.

A NAS, also known as a “filer”, is a file storage device.  This means that it exposes its storage as a network filesystem.  So any device attaching to this storage does not see a disk drive but instead sees a mountable filesystem.  When a NAS is not packaged as an appliance, we simply call it a file server and nearly all computing devices from desktops to servers have some degree of this functionality included in them.  Common protocols for file storage devices include NFS, SMB / CIFS and AFP.  There are many others, however, and technically there are special case file storage protocols such as FTP and HTTP that should qualify as well.  As an extreme example, a traditional web server is a very specialized form of file storage device.

What separates block storage and file storage devices is the type of interface that they present to the outside world, or to think of it another way, where the division between server device and client device happens within the storage stack.

It has become extremely common today for storage devices to include both block storage and file storage in the same device.  Systems that do this are called unified storage.  With unified storage, whether the device is behaving as a block storage device, a file storage device or both (SAN or NAS in the common parlance) is based upon the behavior that you configure, not upon what you purchase.  This is important as it drives home the point that this is purely a protocol or interface distinction, not one of size, capability, reliability, performance, features, etc.

Both types of devices have the option, but not the requirement, of providing extended features beneath the “demarcation point” at which they hand off the storage to the outside.  Both may, or may not, provide RAID, logical volume management, monitoring, etc.  File storage (NAS) may also provide file system features such as Windows NTFS ACLs.

The key advantage to block storage is that the systems that attach to it are given an opportunity to manipulate the storage system as if it were a traditional disk drive.  This means that RAID and logical volume management, which may already have been done in the “black box” of the storage device, can now be done again, if desired, at a higher level.  The client devices are not aware of what kind of device they are seeing, only that it appears as a disk drive.  So you can choose to trust it (assume that it has RAID of an adequate level, for example) or you can combine multiple block storage devices together into RAID just as if they were regular, local disks.  This is extremely uncommon but is an interesting option and there are products that are designed to be used in this way.

More commonly, logical volume management such as Linux LVM, Solaris ZFS or Windows Dynamic Disks is applied on top of the exposed block storage from the device and then, on top of that, a filesystem is employed.  This is important to remember: with block storage devices, the filesystem is created and managed by the client device, not by the storage device.  The storage device is blissfully unaware of how the block storage that it is presenting is used and allows the end user to use it however they see fit, with total control.  This extends even to the point that you can chain block storage devices together, with one providing the storage to the next and being combined, perhaps, into RAID groups – block storage devices can be layered, more or less, indefinitely.

Alternatively, a file storage device contains all of the block portion of the storage, so any opportunity for RAID, logical volume management and monitoring must be handled by the file storage device.  Then, on top of the block storage, a filesystem is applied.  Commonly this would be Linux’s EXT4, FreeBSD’s and Solaris’ ZFS or Windows’ NTFS, but other filesystems such as WAFL, XFS, JFS, BtrFS, UFS and more are certainly possible.  On this filesystem, data will be stored.  To then share this data with the outside world, a network file system (also known as a distributed file system) is used, which provides a file system interface that is network enabled – NFS, SMB and AFP being the most common but, like in any protocol family, there are numerous special case and exotic possibilities.

A remote device wanting to use storage on the file storage device would see it over the network the same as it would see a local filesystem and is able to mount it in an identical manner.  This makes file storage especially easy and obvious for the end consumer to use as it is very natural in every aspect.  We use network file systems every day for normal desktop computing.  When we “map a drive” in Windows, for example, we are using a network file system.

One critical difference between block storage and file storage, one that cannot be glossed over, is that while both can potentially sit on a network and allow multiple client machines to attach to them, only file storage devices have the ability to arbitrate that access.

Block storage appears as a disk drive.  If you simply attach a disk drive to two or more computers at once, you can imagine what will happen – each will know nothing of the other, will be unaware of new files being created or existing ones changing, and the systems will rapidly begin to overwrite each other.  If your file system is read only on all nodes, this is not a problem.  But if any system is writing or changing the data, the others will have problems.  This generally results in data corruption very quickly, typically on the order of minutes.  To see this in extreme action, imagine having two or three client systems all believe that they have exclusive access to a disk drive and have them all defragment it at the same time.  All data on the drive will be scrambled in seconds.
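The corruption scenario above can be sketched in a few lines.  This is a toy model, not a real filesystem driver: two clients each cache a private copy of an “allocation table” block and flush it back wholesale, exactly as two unarbitrated hosts on shared block storage would.

```python
# Toy illustration of the "lost update" on shared, unarbitrated block storage.
# Block 0 holds a tiny allocation table listing the files on the "disk".

disk = {0: {"files": []}}

class Client:
    def __init__(self, name):
        self.name = name
        self.cached_table = None

    def mount(self):
        # Each client reads its own private copy of block 0.
        self.cached_table = {"files": list(disk[0]["files"])}

    def create_file(self, filename):
        # The client updates only its stale cached copy...
        self.cached_table["files"].append(filename)
        # ...then flushes the whole block back, silently clobbering
        # anything another client wrote in the meantime.
        disk[0] = {"files": list(self.cached_table["files"])}

a, b = Client("a"), Client("b")
a.mount()
b.mount()
a.create_file("report.doc")   # a's flush records report.doc
b.create_file("photo.jpg")    # b's flush erases report.doc

print(disk[0]["files"])       # ['photo.jpg'] -- a's file is gone
```

Neither client did anything wrong by its own view of the disk; without an arbitrating layer there is simply nothing to tell them about each other.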

A file storage device, on the other hand, has natural arbitration: the network file system handles the communications for access to the real file system, and filesystems, by their nature, are multi-user.  So if one system attached to a file storage device makes a change, all systems are immediately aware of the change and will not “step on each other’s toes.”  Even if they attempt to do so, the file storage device’s filesystem arbitrates access, has the final say and does not let this happen.  This makes sharing data easy and transparent to end users.  (I use the term “end users” here to include system administrators.)

This does not mean that there is no means of sharing storage from a block device, but the arbitration cannot be handled by the block storage device itself.  Block storage devices can be made “shareable” by using what is known as a clustered file system.  These types of file systems originated back when server clusters shared storage resources by having two servers, each with a SCSI controller on either end of a single SCSI cable, with the shared drives attached in the middle of the cable.  The only means by which the servers could communicate was through the file system itself, so special clustered file systems were developed that allowed communications between the devices, alerting each to changes made by the other, through the file system itself.  This actually works surprisingly well, but clustered file systems are relatively uncommon, with Red Hat’s GFS and Oracle’s OCFS being among the best known in the traditional server world and VMware’s much newer VMFS having become extremely well known through its use for virtualization storage.  Normal users, including system administrators, may not have access to clustered file systems or may have needs that do not allow their use.  It is also important to note that the arbitration is handled through trust, not through enforcement as with a file storage device.  With a file storage device, the device itself handles the access arbitration and there is no way around it.  With block storage devices using a clustered file system, any device that attaches to the storage can ignore the clustered file system and simply bypass the passive arbitration – this is so simple that it would normally happen accidentally.  It can happen by mounting the filesystem while specifying the wrong file system type, through a drive misbehaving or through any malicious action.  So access security is critical at the network level to protect block level storage.

The underlying concept being exposed here is that block storage devices are dumb devices (think glorified disk drive) and file storage devices are smart devices (think traditional server.)  File storage devices must contain a full working “computer” with CPU, memory, storage, filesystem and networking.  Block storage devices may contain these things but need not.  At their simplest, block storage devices can be nothing more than a disk drive with a USB or Ethernet adapter attached.  It is actually not uncommon for them to be nothing more than a RAID controller with Ethernet or Fibre Channel adapters attached.

In both cases, block storage devices and file storage devices, we can scale down to trivially simple devices or can scale up to massive “mainframe class” ultra-high-availability systems.  Both can be either fast or slow.  One is not better or worse, one is not higher or lower, one is not more or less enterprise – they are different and serve generally different purposes.  And there are advanced features that either may or may not contain.  The challenge comes in knowing which is right for which job.

I like to think of block storage protocols as being a “standard out” stream, much like on a command line.  So the base level of any storage “pipeline” is always a block device and numerous block devices or transformations can exist with each being piped one to another as long as the output remains a block storage protocol.  We only terminate the chain when we apply a file system.   In this way hardware RAID, network RAID, logical volume management, etc. can be applied in multiple combinations as needed.  Block storage is truly not just blocks of data but building blocks of storage systems.
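The pipeline idea can be modeled directly.  The sketch below is a toy model (the class names and interfaces are mine, not any real storage API): every layer speaks the same block interface, so layers compose freely until a filesystem terminates the chain.

```python
# A sketch of "block storage as a pipeline": every layer below the
# filesystem reads and writes by block number, so layers can be stacked.

class Disk:
    """A raw block device: the base of any storage pipeline."""
    def __init__(self, blocks=8):
        self.data = [b""] * blocks
    def write(self, n, payload):
        self.data[n] = payload
    def read(self, n):
        return self.data[n]

class Mirror:
    """RAID 1: block interface in, block interface out."""
    def __init__(self, *members):
        self.members = members
    def write(self, n, payload):
        for m in self.members:          # every member gets every write
            m.write(n, payload)
    def read(self, n):
        return self.members[0].read(n)  # any member can satisfy a read

# Layering: a mirror of mirrors still presents nothing but blocks --
# only applying a filesystem would end the chain.
stack = Mirror(Mirror(Disk(), Disk()), Mirror(Disk(), Disk()))
stack.write(0, b"hello")
print(stack.read(0))  # b'hello', stored identically on all four disks
```

Because `Mirror` consumes and produces the same interface as `Disk`, nothing stops you from wrapping a `Mirror` in another `Mirror`, which is exactly the “piped one to another” property described above.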

One point that is very interesting is that since block storage devices can be chained, and since file storage devices must accept block storage as their “input,” it is actually quite common for a block storage device (SAN) to be used as the backing storage for a file storage device (NAS), especially in high end systems.  They can coexist within a single chassis or they can work cooperatively on the network.

The History of Array Splitting

Much of the rote knowledge of the IT field, especially that of the SMB field, arose in the very late 1990s based on a variety of factors.  The biggest factors were that suddenly smaller and smaller businesses were rushing to computerize, Microsoft had gotten Windows NT 4 so stable that there was a standard base for all SMB IT to center around, the Internet era had finally taken hold and Microsoft introduced their certification and training programs that reshaped knowledge dissemination in the industry.  Put together, this created both a need for new training and best practices and caused a massive burst of new thinking, writing, documentation, training, best practices, rules of thumb, etc.

For a few years nearly the entire field was trained on the same small knowledge set; many rules of thumb became de facto standards and much of the knowledge of the time was learned by rote and passed on from mentor to intern in a cycle that moved much of the technical knowledge of 1998 into the unquestioned, set-in-stone processes of 2012.  At the time this was effective because the practices were relevant, but that was fifteen years ago; technology, economics, use cases and knowledge have changed significantly since then.

One of the best examples of this was the famous Microsoft SQL Server recommendation of RAID 1 for the operating system, RAID 5 for the database files and another RAID 1 for the logs.  This setup has endured for nearly the entire life of the product and was so well promoted that it has spread into almost all aspects of server design in the SMB space.  The use of RAID 1 for the operating system and RAID 5 for data is so pervasive that it is often simply assumed without any consideration as to why this was recommended at the time.

Let’s investigate the history and see why R1/5/1 was good in 1998 and why it should not exist today.  Keep some perspective in mind: the gap between when these recommendations first came out (as early as 1995) and today is immense.  Go back, mentally, to 1995 and think about the equivalent gap at the time.  That would have been like basing recommendations in the early Internet age on the home computing needs of the first round of Apple ][ owners!  The 8bit home computer era was just barely getting started in 1978.  Commodore was still two years away from releasing their first home computer (the VIC-20) and would go through the entire Commodore and Commodore Amiga eras and go bankrupt and vanish, all before 1995.  The Apple ][+ was still a year away.  People were just about to start using analogue cassette drives as storage.  COBOL and Fortran were the only serious business languages in use.  Basically, the gap is incredible.  Things change.

First, we need to look at the factors that existed in the late 1990s that created the need for our historic setup.

  1. Drives were small, very small.  A large database array might have been four 2.1GB SCSI drives in an R5 array for just ~6GB of usable storage space on a single array.  The failure domain for parity RAID failure was tiny (compared to things like URE fail rates.)
  2. Drive connection technologies were parallel and slow.  The hard drives of the time were only slightly slower than drives are today but the connection technologies represented a considerable bottleneck.  It was common to split traffic to allow for reduced bus bottlenecks.
  3. SCSI drive technology was the only one used for servers.  The use of a PATA drive (called IDE at the time) in a server was unthinkable.
  4. Drives were expensive per gigabyte, so saving cost while maintaining capacity was the key issue for effectively all businesses.
  5. Filesystems were fragile and failed more often than drives.
  6. Hardware RAID was required and only basic RAID levels of 1 and 5 were commonly available.  RAID 6 and RAID 10 were years away from being accessible to most businesses.  RAID 0 is discounted as it has no redundancy.
  7. Storage systems were rarely, if ever, shared between servers so access was almost always dedicated to a single request queue.
  8. Storage caches were tiny or did not exist making drive access limitations pass directly onto the operating system.  This meant having different arrays with different characteristics to handle different read/write or random/sequential access mixes.
  9. Drive failure was common and the principal concern of storage system design.
  10. Drive array size was often limited by physical constraints, so array splitting decisions were frequently made out of necessity, not choice.
  11. A combination of the above factors meant that RAID 1 was best for some parts of the system where small size was acceptable and access was highly sequential or write heavy and RAID 5 was best for others where capacity outweighed reliability and where access was highly random and read heavy.

In the nearly two decades since the original recommendations were released, all of these factors have changed.  In some cases the changes are cascading ones: the move from general use RAID 5 to general use RAID 10 caused what would have been the two common array types, RAID 1 and RAID 10, to share access characteristics, so the need or desire to use one or the other depending on load type is gone.

  1. Drives are now massive.  Rather than struggling to squeeze what we need onto them, we generally have excess capacity.  Single drives over a terabyte are common, even in servers.  Failure domains for parity are massive (compared to things like URE fail rates.)
  2. Drive connections are serial and fast.  The drive connections are no longer a bottleneck.
  3. SATA is now common on servers skewing potential risks for URE in a way that did not exist previously.
  4. Capacity is now cheap but performance and reliability are now the key concerns for dollars spent.
  5. Filesystems are highly robust today and filesystem failures are “background noise” in the greater picture of array reliability.
  6. Hardware RAID and software RAID are both options today and available RAID levels include many options but, most importantly, RAID 10 is available ubiquitously.
  7. Storage systems are commonly shared making sequential access even less common.
  8. Storage caches are common and often very large.  512MB and 1GB caches are considered normal today, making many arrays of 1995 fit entirely into memory on the RAID controller.  With caches growing rapidly compared to storage capacity, and with the addition in the last two years of solid state drives as L2 cache, it is not out of the question for even a small business to have databases and other performance sensitive applications running completely from cache.
  9. Drive failure is uncommon and of trivial concern to storage system design (compared to other failure types.)
  10. Drive array size is rarely limited by physical limitations.
  11. The use of RAID 1 and RAID 10 as the principal array types today means that there is no benefit to using different array levels for performance tuning.

These factors highlight why the split array system of 1995 made perfect sense at the time and why it does not make sense today.  OBR10, today’s standard, was unavailable at the time and cost prohibitive.  RAID 5 was relatively safe in 1995, but not today.  Nearly every factor involved in the decision process has changed dramatically in the last seventeen years and is going to continue to change as SSD becomes more common along with auto-tiering, even larger caches and pure SSD storage systems.
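The change in the parity “failure domain” can be made concrete with a back-of-the-envelope calculation.  The drive sizes and URE rate below are my illustrative assumptions (one error per 10^14 bits is a common consumer SATA spec-sheet figure), not numbers from the original recommendations.

```python
# Rough probability of hitting at least one unrecoverable read error (URE)
# while reading every surviving drive back during a RAID 5 rebuild.

URE_RATE = 1e-14  # errors per bit read; a typical consumer SATA spec figure

def rebuild_ure_probability(drive_bytes, surviving_drives):
    # A rebuild must read every bit of every surviving drive.
    bits_read = drive_bytes * surviving_drives * 8
    return 1 - (1 - URE_RATE) ** bits_read

old = rebuild_ure_probability(2.1e9, 3)   # three surviving 2.1GB drives, ~1998
new = rebuild_ure_probability(1e12, 3)    # three surviving 1TB drives, today
print(f"1998-style array: {old:.4%}")     # a small fraction of a percent
print(f"modern array:     {new:.2%}")     # roughly one rebuild in five
```

Under these assumptions a 1998-era rebuild almost never encountered a URE, while a modern RAID 5 rebuild fails to complete cleanly alarmingly often, which is exactly why the parity recommendation aged so badly.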

The change in storage design over the last two decades also highlights a danger that IT faces: a large portion of the field learns, as is common in engineering, basic “rules of thumb” or “best practices” without necessarily understanding the underlying principles that drive those decisions.  That makes it difficult to know when not to apply those best practices or, even more importantly, when to recognize that a rule no longer applies.  Unlike traditional mechanical or civil engineering, where new advances and significant factor changes may occur once or possibly never over the course of a career, IT still changes fast enough that complete “rethinks” of basic rules of thumb are required several times through a career.  Maybe not annually, but once per decade or more is almost always necessary.

The current move from uniprocessing to multithreaded architectures is another similar, significant change requiring the IT field to completely change how system design is handled.

RAID Notation Examples

As the new Network RAID Notation Standard (SAM RAID Notation) is a bit complex, I felt that it would be useful to provide a list of common use scenarios and specific implementation examples and how they would be notated.

  • Scenario: Netgear ReadyNAS Pro 2 with XRAID mirror.  Notation: R1
  • Scenario: Two Netgear ReadyNAS Ultra units with local RAID 1 sync’d over the network using rsync.  Notation: R1{1}
  • Scenario: Two Drobo B800fs NAS devices each loaded with single parity RAID sync’d using DroboSync. Notation: R5{1}
  • Scenario: Two Drobo B800fs NAS devices each with dual parity RAID sync’d using DroboSync.  Notation: R6{1}
  • Scenario: Two Linux servers with R6 locally using DRBD Mode A or B (asynchronous.)  Notation: R6[1]
  • Scenario: Two Linux servers with R6 locally using DRBD Mode C (synchronous.)  Notation: R6(1)
  • Scenario: Three node VMware vSphere VSA cluster with local R10.  Notation: R10(1)³
  • Scenario: Windows server with two four disk R0 stripes mirrored.  Notation: 8R01
  • Scenario: Two FreeBSD servers with R10 using HAST with memsync.  Notation: R10[1]
  • Scenario: Two FreeBSD servers with R1 using HAST with sync.  Notation: R1(1)
  • Scenario: Two Windows file servers with R10 using Robocopy to synchronize file systems. Notation: R10{1}
  • Scenario: Single Netgear SC101 SAN* using ZSAN drivers on Windows with two disks. Notation: R(1)

Technology References:

HAST: http://wiki.freebsd.org/HAST

DRBD: http://www.drbd.org/users-guide/s-replication-protocols.html

DroboSync: http://www.drobo.com/solutions/for-business/drobo-sync.php

Rsync: http://rsync.samba.org/

Robocopy: http://technet.microsoft.com/en-us/library/cc733145%28v=ws.10%29.aspx

Notes:

*The Netgear SC101 SAN is interesting: while it can hold two PATA drives internally and exposes them to the network as block devices, via the ZSAN protocol, through a single Ethernet interface, there is no internal communication between the drives, so all mirroring of the array happens in Windows, which actually sees each disk as an entirely separate SAN device, each with its own IP address.  Windows has no way to know that the two devices are related.  The RAID 1 mirroring is handled one hundred percent in software RAID on Windows and the SAN itself is always two independent PATA drives exposed raw to the network.  A very odd, but enlightening, device.

Network RAID Notation Standard (SAM RAID Notation)

As the RAID landscape becomes more complex with the emergence of network RAID, there is an important need for a more expressive and concise notation system for RAID levels involving a network component.

Traditional RAID comes in single digit notation and the available levels are 0, 1, 2, 3, 4, 5, 6, 7.  Level 7 is unofficial but widely accepted as triple parity RAID (the natural extension of RAID 5 and RAID 6) and RAID 2 and RAID 3 are effectively disused today.

Nested RAID, one RAID level within another, is handled by putting single digit RAID levels together such as RAID 10, 50, 61, 100, etc.  These can alternatively be written with a plus sign separating the levels like RAID 1+0, 5+0, 6+1, 1+0+0, etc.

There are two major issues with this notation system, beyond the obvious problem that not all RAID types or extensions are accounted for by the single digit system (many aspects of proprietary RAID systems such as ZRAID, XRAID and BeyondRAID have no place in it).  The first is a lack of network RAID notation and the second is a lack of specific denotation of intra-RAID configuration.

Network RAID comes in two key types, synchronous and asynchronous.  Synchronous network RAID operates effectively identically to its non-networked counterpart.  Asynchronous functions the same but brings extra risks as data may not be synchronized across devices at the time of a device failure.  So the differences between the two need to be visible in the notation.

Synchronous RAID should be denoted with parentheses.  So two local RAID 10 systems mirrored over the network (a la DRBD) would be denoted RAID 10(1).  The effective RAID level for risk and capacity calculations would be the same as any RAID 101, but this informs all parties at a glance that the mirror is over a network.

Asynchronous RAID should be denoted with brackets.  So two local RAID 10 systems mirrored over the network asynchronously would be denoted as RAID 10[1] making it clear that there is a risky delay in the system.
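The risk difference that the two bracket styles encode can be sketched with a toy model (illustrative only; real systems such as DRBD are far more involved).  A synchronous mirror, R(1), acknowledges a write only once both sides hold it; an asynchronous mirror, R[1], acknowledges immediately and replicates later.

```python
# Toy model of synchronous vs. asynchronous network mirroring.

class NetworkMirror:
    def __init__(self, synchronous):
        self.synchronous = synchronous
        self.local, self.remote, self.queue = [], [], []

    def write(self, data):
        self.local.append(data)
        if self.synchronous:
            self.remote.append(data)   # acknowledged only after peer has it
        else:
            self.queue.append(data)    # acknowledged now, shipped later

    def flush(self):
        # Background replication catching up, when the node survives long enough.
        self.remote.extend(self.queue)
        self.queue.clear()

sync, async_ = NetworkMirror(True), NetworkMirror(False)
for m in (sync, async_):
    m.write("txn-1")
# If the local node dies here, before any flush:
print(sync.remote)    # ['txn-1'] -- survives on the peer
print(async_.remote)  # []        -- acknowledged, but lost
```

The asynchronous form is faster precisely because it acknowledges before replicating, which is the “risky delay” the bracket notation is meant to flag.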

There is an additional need to denote a different type of replication at a higher, filesystem level (a la rsync).  While not truly related to RAID, it provides a similar function for cold data and comes up often in RAID discussions, so I believe that storage engineers need the ability to denote it as well.  This asynchronous file-system level replication can be denoted by braces.  Only one notation is needed because file-system level replication is always asynchronous.  So, as an example, two RAID 6 arrays synced automatically with a block-differential file system replication system would be denoted RAID 6{1}.

To further simplify RAID notation and to shorten the obvious need to write the word “RAID” repeatedly as well as to remove ourselves from the traditional distractions of what the acronym stands for so that we can focus on the relevant replication aspects of it, a simple “R” prefix should be used.  So RAID 10 would simply be R10.  Or a purely networked mirror might be R(1).

This leaves one major aspect of RAID notation to address and that is the size of each component of the array.  Often this is implied, but some RAID levels, especially those that are nested, can have complexities missed by traditional notation.  Knowing the total number of drives in an array does not always denote the setup of a specific array.  For example, a 24 drive R10 is assumed to be twelve pairs of mirrors in a R0 stripe.  But it could be eight sets of triple mirrors in a R0 stripe.  Or it could even be six quad mirrors.  Or four sext mirrors.  Or three oct mirrors.  Or two dodeca mirrors.  While most of these are extremely unlikely, there is a need to notate them.  For the set size we use a superscript number to denote the size of that set.  Generally this is only needed for one aspect of the array, not all, as the others can be derived, but when in doubt it can be denoted explicitly.

So an R10 array using three-way mirror sets would be R1³0.  Lacking the ability to write a superscript, you could also write it as R1^3+0.  This notation does not state the complete array size, only its configuration type.  If all possible superscripts are included, a full array size can be calculated using nothing more.  If we have an R10 of four sets of three-way mirrors we could write it R1³0⁴, which would inform us that the entire array consists of twelve drives – or in the alternate notation, R1^3+0^4.

Superscript notation of sets is only necessary when non-obvious.  R10 with no other notation implies that the R1 component is mirror pairs, for example.  R55 nearly always requires additional notation except when the array consists of only nine members.

One additional aspect to consider is notating array size.  This is far simpler than the superscript notation and is nearly always completely adequate.  It alleviates the need to write in long form “a four drive RAID 10 array.”  Instead we can use a prefix: 4R10 would denote a four drive RAID 10 array.

So to look at our example from above, the twelve disk RAID 10 with the three-way mirror sets could be written out as 12R1³0⁴.  But the use of all three numbers becomes redundant; any one of them can be dropped.  Typically this would be the final one, as it is the least likely to be useful.  The R1 set size is useful in determining the basic risk, and the leading 12 is used for capacity and performance calculations as well as chassis sizing and purchasing.  The trailing four is implied by the other two numbers and effectively useless on its own.  So the best way to write this would be simply 12R1³0.  If that same array were to use the common mirror pair approach rather than the three-way mirror, we would simply write 12R10 to denote a twelve disk, standard RAID 10 array.
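As a closing illustration, the arithmetic above can be automated.  This small helper is hypothetical (not part of the proposed standard) and handles only the explicit-prefix form and the fully superscripted caret form of the notation.

```python
import re

def total_drives(notation):
    """Drive count implied by notation like '4R10' or 'R1^3+0^4' (caret form only)."""
    # An explicit leading size wins, e.g. "12R1^3+0" -> 12.
    m = re.match(r"^(\d+)R", notation)
    if m:
        return int(m.group(1))
    # Otherwise every level needs a ^size so the count can be derived,
    # e.g. "R1^3+0^4" -> 3 * 4 = 12 drives.
    sizes = re.findall(r"\^(\d+)", notation)
    if len(sizes) < notation.count("+") + 1:
        raise ValueError("set sizes incomplete; drive count is not derivable")
    total = 1
    for s in sizes:
        total *= int(s)
    return total

print(total_drives("12R1^3+0"))   # 12
print(total_drives("R1^3+0^4"))   # 12 (four sets of three-way mirrors)
```

Note that a bare "R10" correctly raises an error here: without either a drive-count prefix or a superscript on every level, the total array size genuinely cannot be derived from the notation alone.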