{"id":436,"date":"2013-06-30T01:26:22","date_gmt":"2013-06-30T06:26:22","guid":{"rendered":"http:\/\/www.smbitjournal.com\/?p=436"},"modified":"2016-04-05T02:30:23","modified_gmt":"2016-04-05T07:30:23","slug":"when-to-consider-a-san","status":"publish","type":"post","link":"https:\/\/smbitjournal.com\/2013\/06\/when-to-consider-a-san\/","title":{"rendered":"When to Consider a SAN?"},"content":{"rendered":"

Everyone seems to want to jump into purchasing a SAN, sometimes quite passionately. \u00a0SANs are, admittedly, pretty cool. \u00a0They are one of the more fun and exciting, large scale hardware items that most IT professionals get a chance to have in their own shop. \u00a0Often the desire to have a SAN of one’s own is a matter of “keeping up with the Joneses,” as using a SAN has become a bit of a status symbol – one of those last bastions of big business IT that you only see in a dedicated server closet and never in someone’s home (well, almost never.) \u00a0SANs are pushed heavily, advertised and sold as amazing boxes: internally redundant to the point of infallibility, impossibly fast and loaded with features that you never knew you needed. \u00a0When speaking to IT pros designing new systems, one of the most common design statements that I hear is “well, we don’t know much about our final design, but we know that we need a SAN.”<\/p>\n

In this article, I use SAN in its most common sense, that is, to mean a “block storage device” and not to refer to the entire storage network itself. \u00a0A storage network can exist for NAS without using a SAN block storage device at all, so for this article SAN refers exclusively to SAN as a device, not SAN as a network. \u00a0SAN is a soft term used to mean multiple things at different times and can become quite confusing. \u00a0A SAN configured without a network becomes DAS. \u00a0DAS that is networked becomes SAN.<\/em><\/p>\n

Let’s stop for a moment. \u00a0SAN is your back end storage. \u00a0The need for it would be, in all cases, determined by other aspects of your architecture. \u00a0If you have not yet decided upon many other pieces, you simply cannot know that a SAN is going to be needed, or even useful, in the final design. \u00a0Red flags. Red flags everywhere. \u00a0Imagine a Roman chariot race with the horses pushing the chariots (if you know what I mean.)<\/p>\n

It is clear that the drive to implement a SAN is so strong that often entire projects are devised with little purpose except, it would seem, to justify the purchase of the SAN. \u00a0As with any project, the first question that one must ask is “What is the business need that we are attempting to fill?”<\/i>\u00a0 \u00a0And work from there, not\u00a0“We want to buy a SAN, where can we use it?”<\/em> \u00a0SANs are complex, and with complexity comes fragility.\u00a0 Very often SANs carry high cost. \u00a0But the scariest aspect of a SAN is the widespread lack of deep industry knowledge concerning them. \u00a0SANs pose huge technical and business risk that must be overcome to justify their use. \u00a0SANs are, without a doubt, very exciting and quite useful, but that is seldom good enough to warrant the desire for one.<\/p>\n

We refer to SANs as “the storage of last resort.” \u00a0What this means is, when picking types of storage, you hope that you can use any of the other alternatives such as local drives, DAS (Direct Attach Storage) or NAS (Network Attached Storage) rather than SAN. \u00a0Most times, other options work wonderfully. \u00a0But there are times when the business needs demand requirements that can only reasonably be met with a SAN. \u00a0When those come up, we have no choice and must use a SAN. \u00a0But generally it can be avoided in favor of simpler and normally less costly or risky options.<\/p>\n

I find that most people looking to implement a SAN are doing so under a number of misconceptions.<\/p>\n

The first is that SANs, by their very nature, are highly reliable. \u00a0While there are certainly many SAN vendors and specific SAN products that are amazingly reliable, the same could be said about any IT product. \u00a0High end servers in the price range of high end SANs are every bit as reliable as SANs.\u00a0 Since SANs are made from the same hardware components as normal servers, there is no magic to making them more reliable. \u00a0Anything that can be used to make a SAN reliable is a trickle down of server RAS (Reliability, Availability and Serviceability) technologies. \u00a0NAS and DAS, as well as local disks, can be made just as incredibly reliable as SAN. \u00a0SAN only refers to the device being used to serve block storage rather than perform some other task. \u00a0A SAN is just a very simple server. \u00a0SANs encompass the entire range of reliability, with mainframe-like reliability at the top end and, at the bottom end, devices that are nothing more than external hard drives – the most unreliable devices on your network. \u00a0So rather than SAN meaning reliability, it actually offers a few special cases of the lowest reliability you can imagine. \u00a0But, for all intents and purposes, all servers share roughly equal reliability concerns. \u00a0SANs gain a reputation for reliability because businesses often put extreme budgets into their SANs that they do not put into their servers, so the comparison is between a relatively high end SAN and a relatively budget server.<\/p>\n

The second is that SAN means “big” and NAS means “small.” \u00a0There is no such association. \u00a0Both SANs and NASs can be of nearly any scale or quality. \u00a0Both run the gamut, and the technology chosen gives not the slightest suggestion as to whether a device is large or not. \u00a0Again, as above, a SAN can technically come “smaller” than a NAS solution due to its possible simplicity, but this is a specialty case and mostly theoretical; SAN products in this category do exist on the market, but it is very rare to find them in use.<\/p>\n

The third is that SAN and NAS are dramatically different inside the chassis. \u00a0This is certainly not the case, as the majority of SAN and NAS devices today are what is called “unified storage,” meaning a storage appliance that acts simultaneously as both SAN and NAS. \u00a0This highlights that the key difference between the two is not backend technology, hardware, size or reliability; the defining difference is the protocols used to transfer storage. \u00a0SANs are block storage, exposing raw block devices onto the network using protocols like Fibre Channel, iSCSI, SAS, ZSAN, ATA over Ethernet (AoE) or Fibre Channel over Ethernet (FCoE.) \u00a0NAS, on the other hand, uses a network file system and exposes files onto the network using application layer protocols like NFS, SMB, AFP, HTTP and FTP, which then ride over TCP\/IP.<\/p>\n
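
To make the protocol distinction concrete, here is a minimal sketch of attaching each type of storage from a Linux client. \u00a0The iSCSI target name, portal address, device name and NFS export path are all illustrative assumptions, not values from any real environment.<\/p>\n

```shell
# SAN (block): the client logs into an iSCSI target and receives a raw
# block device, which the client itself must format and mount.
# (Target, portal and device names below are hypothetical.)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2013-06.com.example:storage.lun1 -p 192.168.1.50 --login
mkfs.ext4 /dev/sdb              # the client creates and owns the filesystem
mount /dev/sdb /mnt/block

# NAS (file): the filer owns the filesystem and exposes files over an
# application layer protocol; the client simply mounts the share.
mount -t nfs 192.168.1.50:/export/data /mnt/files
```

The telling difference is who runs mkfs: with block storage the client creates and owns the filesystem, while with NAS the filesystem lives on the storage device itself.<\/p>\n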

The fourth is that SANs are inherently a file sharing technology.\u00a0 This is NAS.\u00a0 SAN simply takes your block storage (hard disk subsystem) and makes it remotely available over a network.\u00a0 The nature of networks suggests that we can attach that storage to multiple devices at once and indeed, physically, we can – just as we used to be able to physically attach multiple controllers to opposite ends of a SCSI ribbon cable with hard drives dangling in the middle.\u00a0 Doing so will, under normal circumstances, destroy all of the data on the drives, as the controllers, which know nothing about each other, overwrite each other’s data, causing near instant corruption.\u00a0 There are mechanisms available in special clustered filesystems and their drivers to allow for this, but they require special knowledge and understanding far more technical than many people acquiring SANs are aware that they need for what they often believe is the very purpose of the SAN – a disaster so common that I probably speak to someone who has done just this almost weekly.\u00a0 That a SAN puts at risk the very use case that most people believe it is designed to handle, not only failing to deliver the nearly magical protection sought but being, to the contrary, the very cause of the data loss, exposes the level of risk that misunderstood storage technology carries with it.<\/p>\n
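
The failure mode described above can be illustrated with a toy simulation (purely hypothetical code, not any real filesystem driver): two hosts attach the same shared block device with ordinary non-clustered filesystems, each caching its own view of which blocks are free.<\/p>\n

```python
# Toy simulation of uncoordinated shared block storage.  All names are
# hypothetical; no real filesystem code is involved.
device = [None] * 8          # shared "disk": block index -> owning file

class NaiveHost:
    """A host that assumes it is the only writer on the device."""
    def __init__(self, name, device):
        self.name = name
        self.device = device
        # Read the allocation state once at mount time; never refreshed,
        # so writes by the other host are invisible to us.
        self.cached_free = [i for i, blk in enumerate(device) if blk is None]

    def write_file(self, filename):
        block = self.cached_free.pop(0)   # first block *we* think is free
        self.device[block] = filename     # write straight to shared storage
        return block

host_a = NaiveHost("hostA", device)
host_b = NaiveHost("hostB", device)       # mounts before seeing A's writes

blk_a = host_a.write_file("a.txt")
blk_b = host_b.write_file("b.txt")        # same block: overwrites a.txt

print(blk_a == blk_b)      # True -- both hosts chose block 0
print(device[blk_a])       # b.txt -- hostA's data is silently gone
```

A clustered filesystem avoids this by coordinating allocation state between the hosts; without that coordination, the second writer silently destroys the first writer’s data.<\/p>\n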

The fifth is that SANs are fast.\u00a0 SANs can be fast; they can also be horrifically slow.\u00a0 There is no intrinsic speed boost from the use of SAN technology on its own.\u00a0 It is actually fairly difficult for SANs to overcome the inherent bottlenecks introduced by the network on which they sit.\u00a0 Because other storage options such as DAS use all of the same technologies as SAN but lack the bottleneck and latency of the network itself, an equivalent DAS will generally be a little faster than its SAN counterpart.\u00a0 SANs are generally a little faster than a hardware-identical NAS equivalent, but even this is not guaranteed.\u00a0 SAN and NAS behave differently, and in different use cases either may be the better performer.\u00a0 SAN would rarely be chosen as a solution based on performance needs.<\/p>\n

The sixth is that, by virtue of being a SAN, the inherent problems associated with storage choices no longer apply.\u00a0 A good example is the use of RAID 5.\u00a0 This would be considered bad practice in a server, but when working with a SAN (which in theory is far more critical than a standalone server) careful storage subsystem planning is often eschewed based on a belief that the SAN has somehow fixed those issues or that they do not apply.\u00a0 It is true that some high end SANs do have some amount of risk mitigation features unlikely to be found elsewhere, but these are rare and exclusively relegated to very high end units where using fragile designs would already be uncommon.\u00a0 It is a dangerous but very common practice to take great care and consideration when planning storage for a physical server, yet skip that same planning and oversight when using a SAN, on the assumption that the SAN handles all of it internally or that it is simply no longer needed.<\/p>\n

Having shot down many misconceptions about SAN one may be wondering if SANs are ever appropriate.\u00a0 They are, of course, quite important and incredibly valuable when used correctly.\u00a0 The strongest points of SANs come from consolidation and special types of shared storage.<\/p>\n

Consolidation was the historical driver bringing customers to SAN solutions.\u00a0 A SAN allows us to combine many filesystems into a single disk array, allowing far more efficient use of storage resources.\u00a0 Because SAN is block level, it is able to do this anytime that a traditional, local disk subsystem could be employed.\u00a0 In many servers, and even many desktops, storage space is wasted due to the necessities of growth, planning and disk capacity granularity.\u00a0 If we have twenty servers, each with a 300GB drive array but each using only 80GB of that capacity, we have significant waste.\u00a0 With a SAN we could consolidate to just 1.6TB, plus a small amount necessary for overhead, and spend far less on physical disks than if each server maintained its own storage.<\/p>\n

Once we begin consolidating storage we begin to look for advanced consolidation opportunities.\u00a0 Having consolidated many server filesystems onto a single SAN, we have the chance, if our SAN implementation supports it, to deduplicate and compress that data, which, in many cases such as server filesystems, can potentially result in significant utilization reduction.\u00a0 So our 1.6TB in the example above might actually end up being only 800GB or less.\u00a0 Suddenly our consolidation numbers are getting better and better.<\/p>\n
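
The arithmetic above is easy to check.\u00a0 A quick sketch using the figures from the example, plus an assumed fifty percent deduplication and compression savings (the savings ratio is an illustration, not a measured number):<\/p>\n

```python
# Checking the consolidation arithmetic from the example above.  The
# 50% dedup/compression savings is an assumed figure for illustration.
servers = 20
provisioned_gb = 300    # size of each server's local drive array
used_gb = 80            # capacity each server actually consumes

local_total = servers * provisioned_gb    # disks bought without a SAN
consolidated = servers * used_gb          # pooled on the SAN
after_dedup = consolidated * 0.5          # assumed 50% reduction

print(local_total)      # 6000 GB provisioned across local arrays
print(consolidated)     # 1600 GB -- the 1.6TB figure above
print(after_dedup)      # 800.0 GB after dedup and compression
```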

To efficiently leverage consolidation it is necessary to have scale, and this is where SANs really shine – when scale, both in capacity and, more importantly, in the number of attaching nodes, becomes very large.\u00a0 SANs are best suited to large scale storage consolidation.\u00a0 This is their sweet spot and what makes them nearly ubiquitous in large enterprises and very rare in small ones.<\/p>\n

SANs are also very important for certain types of clustering and shared storage that require single shared filesystem access.\u00a0 This is actually a pretty rare need outside of one special circumstance – databases.\u00a0 Most applications are happy to utilize any type of storage provided to them, but databases often require low level block access to be able to manipulate their data most effectively.\u00a0 Because of this they can rarely be used, or used effectively, on NAS or file servers.\u00a0 Providing high availability storage environments for database clusters has long been a key use case of SAN storage.<\/p>\n

Outside of these two primary use cases, which justify the vast majority of SAN installations, SAN also provides a high degree of storage flexibility, making it potentially very simple to move, grow and modify storage in a large environment without needing to deal with physical moves or complicated procurement and provisioning.\u00a0 Again, like consolidation, this is an artifact of large scale.<\/p>\n

In very large environments, the use of SAN can also provide a point of demarcation between storage and systems engineering teams, allowing for a handoff at the network layer, generally of Fibre Channel or iSCSI. \u00a0This clear separation of duties can be critical in companies that want highly discrete storage, network and systems teams. \u00a0It allows the storage team to do nothing but focus on storage, and the systems team to do nothing but focus on systems, without any need for knowledge of the other team’s implementations.<\/p>\n

For a long time SANs also presented themselves as a convenient means to improve storage performance.\u00a0 This is not an intrinsic property of SAN but an outgrowth of its common use for consolidation.\u00a0 As with virtualization used for consolidation, shared SANs have a natural advantage in better utilization of available spindles, centralized caches and bigger hardware than the equivalent storage spread out among many individual servers.\u00a0 Like shared CPU resources, when the SAN is not receiving requests from multiple clients it can dedicate all of its capacity to servicing the requests of a single client, providing an average performance experience potentially far higher than what an individual server could affordably achieve on its own.<\/p>\n

Using SAN for performance is rapidly fading from favor, however, as SSD storage becomes commonplace.\u00a0 As SSDs with incredibly low latency and high IOPS drop in price to the point where they are being added to standalone servers as local cache, or potentially even used as mainline storage, the bottleneck of the SAN’s network becomes a larger and larger factor, making it increasingly difficult for the consolidation benefits of a SAN to offset the performance benefits of local SSDs.\u00a0 SSDs are potentially very disruptive for the shared storage market as they shift the performance advantage back toward local storage – just the latest in the ebb and flow of storage architecture design.<\/p>\n

The most important aspect of SAN usage to remember is that SAN should not be a default starting point in storage planning.\u00a0 It is one of many technology choices, and one that often does not fit the bill as intended, or does so only at an unnecessarily high price, whether in monetary or complexity terms.\u00a0 Start by defining business goals and needs.\u00a0 Select SAN when it solves those needs most effectively, but keep an open mind and consider the overall storage needs of the environment.<\/p>\n","protected":false},"excerpt":{"rendered":"

Everyone seems to want to jump into purchasing a SAN, sometimes quite passionately. \u00a0SANs are, admittedly, pretty cool. \u00a0They are one of the more fun and exciting, large scale hardware items that most IT professionals get a chance to have in their own shop. \u00a0Often the desire to have a SAN of ones own is … Continue reading When to Consider a SAN?<\/span> →<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[41],"tags":[60,48,32,33],"class_list":["post-436","post","type-post","status-publish","format-standard","hentry","category-storage-2","tag-das","tag-nas","tag-san","tag-storage"],"_links":{"self":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts\/436","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/comments?post=436"}],"version-history":[{"count":10,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts\/436\/revisions"}],"predecessor-version":[{"id":911,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/posts\/436\/revisions\/911"}],"wp:attachment":[{"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/media?parent=436"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/categories?post=436"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/smbitjournal.com\/wp-json\/wp\/v2\/tags?post=436"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}