
New Hyperconvergence, Old Storage

We all dream of the day that we get to build a new infrastructure from the ground up without any existing technical debt to hold us back.  A greenfield deployment where we pick what is best, roll it out fresh and enjoy.  But most of us live in the real world, where that is rarely realistic and where we have to plan for the future while working with what we already have.

Making do with what we have is nearly an inevitable fact of life in IT, and storage is no different when moving from an existing architecture to hyperconvergence.  In a great many cases we will face an existing investment in storage that we do not want to simply discard, but that does not fit neatly into our vision of a hyperconverged future.

There are obvious options to consider, of course, such as returning leased gear, retiring older equipment or selling still useful equipment outright.  These are viable options and should be considered.  Eliminating old gear or equipment that does not fit well into the current plans can be beneficial as we can simplify our networks, reduce power consumption and possibly even recoup some degree of our investments.

In reality, however, these options are rarely financially viable and we need to make more productive use of our existing technology investments.  Which options are available to us depends, of course, on a range of factors.  But we will look at some examples of how common storage devices can be re-purposed in a new hyperconverged system in order to maintain their utility either until they are ready to retire or even, in some cases, indefinitely.

The easiest re-purposing of existing storage, and this applies equally to both NAS and SAN in most cases, is to designate them as backup or archival targets.  Traditional NAS and SAN devices are excellent backup hardware and are generally usable by nearly any backup mechanism, regardless of approach or vendor.  And because they are generic backup targets, a mixture of backup mechanisms, such as agent based, agentless and custom scripts, can all write to the same target.  Backups so rarely get the attention and investment that they deserve that this is not just the easiest but often the most valuable use of pre-existing storage infrastructure.

Of course anything that is appropriate for backups can also be used for archival storage.  Archival needs are generally less common (only a percentage of firms need archival storage while all need backups) and are of lower priority, so this is more of an edge reuse case, but still one to consider, especially for organizations that may be working to re-purpose a large number of possibly disparate storage devices.  However it is worth noting that moving to hyperconvergence does tend to “flatten” the compute and storage space in a way that may easily introduce a value to lower performance, lower priority archival storage that may not have existed, or at least not so obviously, prior to the rearchitecting of the environment.

NAS has the unique advantage of being usable as general purpose network storage, especially for the home directories of end users.  NAS storage can be used in so many places on the network that it is very easy to continue to use after moving core architectures.  The most popular case is for users’ own storage needs, with the NAS connected directly to end user devices, which allows storage capacity, performance and network traffic to be offloaded from the converged infrastructure to the NAS.  It would actually be very rare to remove a NAS from a hyperconverged network as its potential utility is so high and apparent.

Both SAN and NAS also have the potential to be attached directly to the virtual machines running on top of a hyperconverged infrastructure.  In this way they can continue to be utilized in a traditional manner until such time as they are no longer needed or appropriate.  While attaching network storage directly to a VM is not often the recommended approach, there are use cases for it, and it allows systems to keep behaving as they always have in the physical realm.  This is especially useful for mapped drives and user directories via a NAS, much as we mentioned for end user devices, but the cases are certainly not limited to this.

A SAN can provide much needed functionality for certain workloads that require shared block storage that is not otherwise available or exposed on the platform.  Workloads on a VM will use the SAN as they always have and not even be aware that they are virtualized or converged.  Of course we can also attach a SAN to a virtualized file server or NAS head running on our hyperconverged infrastructure, if that kind of storage tiering is deemed appropriate.

Working with existing infrastructure when implementing a new one does present a challenge, but one that we can tackle with creativity and a logical approach.  Storage is a nearly endless challenge and having existing storage to re-purpose may easily end up being exceptionally advantageous.

SMBs Must Stop Looking to BackBlaze for Guidance

I have to preface this article, because people often take these things out of context and react strongly to things that were never said, with the disclaimer that I think that BackBlaze does a great job, has brilliant people working for them and has done an awesome job of designing and leveraging technology that is absolutely applicable and appropriate for their needs.  Nothing, and I truly mean nothing, in this article is ever to be taken out of context and stated as a negative about BackBlaze.  If anything in this article appears or feels to state otherwise, please reread and confirm that such was said and, if so, inform me so that I may correct it.  There is no intention of this article to imply, in any way whatsoever, that BackBlaze is not doing what is smart for them, their business and their customers.  Now on to the article:

I have found over the past many years that many small and medium business IT professionals have become enamored by what they see as a miracle of low cost, high capacity storage in what is known as the BackBlaze POD design.  Essentially the BackBlaze POD is a low cost, high capacity, low performance, nearly whitebox storage server built from a custom chassis and consumer parts to make a disposable storage node used in large storage RAIN arrays leveraging erasure encoding.  BackBlaze custom designed the original POD, and released its design to the public, for exclusive use in their large scale hosted backup datacenters where the PODs function as individual nodes within a massive array of nodes with replication between them.  Over the years, BackBlaze has updated its POD design as technology has changed and issues have been addressed.  But the fundamental use case has remained the same.

I have to compare this to the SAM-SD approach to storage, which follows a similar tack but does so using enterprise grade, supported hardware.  These differences sometimes come off as trivial, but they are anything but trivial; they are key underpinnings of what makes the different solutions appropriate in different situations.  The idea behind the SAM-SD is that storage needs to be reliable and designed from the hardware up to be as reliable as possible and well supported for when things fail.  The POD takes the opposite approach, making the individual server unreliable and ephemeral in nature and designed to be disposed of rather than repaired at all.  The SAM-SD design assumes that the individual server is important, even critical – anything but disposable.

The SAM-SD concept, which is literally nothing more than an approach to building open storage, is designed with the SMB storage market in mind.  The BackBlaze POD is designed with an extremely niche, large scale, special case consumer backup market in mind.  The SAM-SD is meant to be run by small businesses, even those without internal IT.  The POD is designed to be deployed and managed by a full time, dedicated storage engineering team.

Because the BackBlaze POD is designed by experts, for experts, in the very largest of storage situations, it can be confusing and easily misunderstood by non-storage experts in the small business space.  In fact, it is so often misunderstood that objections to it are often met with “I think BackBlaze knows what they are doing” comments, which demonstrates the extreme misunderstanding that exists with the approach.  Of course BackBlaze knows what they are doing, but they are not doing what any SMB is doing.

The release of the POD design to the public causes much confusion because it is only one part of a greater whole.  The design of the full data center and the means of redundancy and mechanisms for such between the PODs is not public, but is proprietary.  So the POD itself represents only a single node of a cluster (or Vault) and does not reflect the clustering itself, which is where the most important work takes place.  In fact the POD design itself is nothing more than the work done by the Sun Thumper and SAM-SD projects of the past decade without the constraints of reliability.  The POD should not be a novel design, but an obvious one.  One that has for decades been avoided in the SMB storage space because it is so dramatically non-applicable.

Because the clustering and replication aspects are ignored when talking about the BackBlaze POD, huge assumptions tend to be made about the capacity of a POD, assuming far lower overhead than BackBlaze themselves get from the POD infrastructure, even at scale.  For example, in RAID terms, this would be similar to assuming that the POD is RAID 6 (with only about 5% overhead) because that is the RAID of an individual component when, in fact, something like RAID 61 (about 55% overhead) is used!  In fact, many SMB IT professionals looking to use a POD design actually consider simply using RAID 6, in addition to only using a single POD.  The degree to which this does not follow BackBlaze’s model is staggering.

BackBlaze: “Backblaze Vaults combine twenty physical Storage Pods into one virtual chassis. The Vault software implements our own Reed-Solomon encoding to spread data shards across all twenty pods in the Vault simultaneously, dramatically improving durability.”
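To make the overhead gap concrete, here is a minimal Python sketch of the arithmetic being described.  The 45-drive pod size and the 17+3 data-to-parity split are illustrative assumptions chosen to approximate the rough 5% and 55% figures above, not published BackBlaze specifications.

```python
# A rough sketch (illustrative assumptions, not BackBlaze specifications)
# comparing parity overhead for a single 45-drive pod in RAID 6, a mirrored
# pair of such pods (the "RAID 61" analogy above) and a generic 17+3
# Reed-Solomon layout in the spirit of the Vault quote.

def raid6_overhead(drives: int) -> float:
    """Fraction of raw capacity lost to parity in a single RAID 6 group."""
    return 2 / drives

def raid61_overhead(drives: int) -> float:
    """RAID 6 groups mirrored in pairs: half the capacity plus RAID 6 parity."""
    return 1 - ((drives - 2) / drives) * 0.5

def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Fraction of raw capacity consumed by parity in an n+k erasure code."""
    return parity_shards / (data_shards + parity_shards)

if __name__ == "__main__":
    print(f"RAID 6, 45 drives:   {raid6_overhead(45):.0%} overhead")     # roughly the ~5% figure
    print(f"RAID 61, 45 drives:  {raid61_overhead(45):.0%} overhead")    # roughly the ~55% figure
    print(f"17+3 erasure coding: {erasure_overhead(17, 3):.0%} overhead")
```

The exact numbers matter less than the point that the overhead of a single-node view and the overhead of the replicated, multi-node design differ by an order of magnitude.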

To make the POD a consideration for the SMB market, the entire concept of the POD, both its intended use case and its implementation, must be taken completely out of context.  What makes BackBlaze special is totally removed and only the most trivial, cursory aspects are kept and turned into something that in no way resembles the BackBlaze vision or purpose.

Digging into where the BackBlaze POD design differs from the standard needs of a normal business, we find these problems:

  • The POD is designed to be unreliable, to rely upon a reliability and replication layer at the super-POD level that requires a large number of PODs to be deployed and for data to be redundant between them by means of custom replication or clustering.  Without this layer, the POD is completely out of context.  The super-POD level is known internally as the BackBlaze Vault.
  • The POD is designed to be deployed in an enterprise datacenter with careful vibration dampening, power conditioning and environmental systems.  It is less resilient to these issues than standard enterprise hardware is.
  • The POD is designed to typically be replaced as a complete unit rather than repairing a node in situ.  This is the opposite of standard enterprise hardware with hot swap components designed to be repaired without interruption, let alone without full replacement.  We call this a disposable or ephemeral use case.
  • The POD is designed to be incredibly low cost for very slow, cold storage needs.  While this can exist in an SMB, typically it does not.
  • The POD is designed to be a single, high capacity storage node in a pool of insanely large capacity.  Few SMBs can leverage even the storage potential of a single POD let alone a pool large enough to justify the POD design.
  • The BackBlaze POD is designed to use custom erasure encoding, not RAID.  RAID is not effective at this scale even at the single POD level.
  • An individual POD is designed for 180TB of capacity and a Vault scale of 3.6PB.

Current reference of the BackBlaze POD 5: https://www.backblaze.com/blog/cloud-storage-hardware/

In short, the BackBlaze POD is a brilliant underpinning to a brilliant service that meets a very specific need that is as far removed from the needs of the SMB storage market as one can reasonably be.  Respect BackBlaze for their great work, but do not try to emulate it.

 

Drive Appearance

One of the more common, yet trickier, fundamental concepts in computing today is that of drive appearance or, in other words, something that appears to be a hard drive.  This may sound simple, and mostly it is, but it can be tricky.

First, what is a hard drive?  This should be simple.  We normally mean a traditional spinning disk Winchester device such as have been made for decades in the standard three and a half inch and two and a half inch form factors.  They contain platters that spin and a drive head that moves back and forth, and they connect using something like ATA or SCSI connectors.  Most of us can pick up a hard drive with our hands and be certain that we have a hard drive.  This is what we call the physical manifestation of the drive.

The computer, though, does not see the casing of the drive nor the connectors.  The computer has to look through its electronics and “see” the drive digitally.  This is very, very different from how humans view the physical drive.  To the computer, a hard drive appears as an ATA, SCSI or Fibre Channel device at the most basic physical level and is generally abstracted at a higher level as a block device.  This is what we would call a logical appearance, rather than a physical one.  For our purposes here, we will think of all of these drive interfaces as being block devices.  They do differ, but only slightly and not consequentially to the discussion.  What is important is that there is a standard interface, or set of closely related interfaces, that is seen by the computer as being a hard drive.

Another way to think of the logical drive appearance here is that anything that looks like a hard drive to the computer is something that the computer can format with a filesystem.  Filesystems are not drives themselves, but require a drive on which to be placed.

The concept of the interface is the most important one here.  To the computer, it is “anything that implements a hard drive interface” that is truly seen as being a hard drive.  This is both a simple and a powerful concept.

It is because of the use of a standard interface that we were able to take flash memory, attach it to a disk controller that would present it over a standard protocol (both SATA and SAS implementations of ATA and SCSI are common for this today) and create SSDs that look and act exactly like traditional Winchester drives to the computer yet have nothing physically in common with them.  They may or may not come in a familiar physical form factor, but they definitely lack platters and a drive head.  Looking at the workings of a traditional hard drive and a modern SSD we would not guess that they share a purpose.

This concept applies to many devices.  Obviously SD cards and USB memory sticks work in the same way.  But importantly, this is also how partitions on top of hard drives work.  The partitioning system consumes the drive impression interface on one side, so that it can be applied to a device, and on the other side it presents a drive impression interface to whatever wants to use it, normally a filesystem.  This idea of something using the drive impression interface on both sides is very important.  By doing this, we get a uniform and universal building block system for making complex storage systems!

We see this concept of “drive in; drive out” in many cases.  Probably the best known is RAID.  A RAID system takes an array of hard drives, applies one of a number of algorithms to make the drives act as a team, and then presents them as a single drive impression to the next system up the “stack.”  This encapsulation is what gives RAID its power: systems further up the stack looking at a RAID array see, literally, a hard drive.  They do not see the array of drives, they do not know what is below the RAID.  They just see the resulting drive(s) that the RAID system presents.

Because a RAID system takes an arbitrary number of drives and presents them as a standard drive we have the theoretical ability to layer RAID as many times as we want.  Of course this would be extremely impractical to do to any great degree.  But it is through this concept that nested RAID arrays are possible.  For example, we might take many physical hard drives, split them into pairs and put each pair in a RAID 1 array.  Each of those resulting arrays gets presented as a single drive.  Each of those resulting logical drives can be combined into another RAID array, such as RAID 0.  Doing this is how RAID 10 is built.  Going further we could take a number of RAID 10 arrays, present them all to another RAID system that puts them in RAID 0 again, and get RAID 100 and so forth indefinitely.
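As a minimal sketch of this “drive in; drive out” idea, the toy Python classes below each consume objects that expose the same block-device interface and present that same interface upward; nesting RAID 1 pairs inside RAID 0 yields RAID 10.  The class names and the capacity arithmetic are purely illustrative, not a real RAID implementation.

```python
# Toy illustration of layering: every layer both consumes and presents the
# same minimal "block device" interface, here reduced to a capacity() call.
from typing import List, Protocol

class BlockDevice(Protocol):
    def capacity(self) -> int: ...          # usable bytes presented upward

class PhysicalDisk:
    def __init__(self, size_bytes: int) -> None:
        self._size = size_bytes
    def capacity(self) -> int:
        return self._size

class Raid1:
    """Mirror: presents the capacity of its smallest member."""
    def __init__(self, members: List[BlockDevice]) -> None:
        self._members = members
    def capacity(self) -> int:
        return min(m.capacity() for m in self._members)

class Raid0:
    """Stripe: presents the summed capacity of its members."""
    def __init__(self, members: List[BlockDevice]) -> None:
        self._members = members
    def capacity(self) -> int:
        return sum(m.capacity() for m in self._members)

# RAID 10: pair physical disks into RAID 1 mirrors, then stripe the resulting
# "drives" with RAID 0.  Each layer only ever sees block devices.
disks = [PhysicalDisk(4 * 10**12) for _ in range(8)]      # eight 4 TB disks
mirrors = [Raid1(disks[i:i + 2]) for i in range(0, 8, 2)]
raid10 = Raid0(mirrors)
print(raid10.capacity())    # 16 TB usable out of 32 TB raw
```

Because the striping layer happily accepts anything with the same interface, nothing stops us from feeding it RAID 10 arrays instead of mirrors and arriving at RAID 100, exactly as described above.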

Similarly the logical volume layer uses the same kind of encapsulation as RAID to work its magic.  Logical Volume Managers, such as LVM on Linux and Dynamic Disks on Windows, sit on top of logical disks and provide a layer where you can do powerful management such as flexibly expanding devices or enabling snapshots, and then present logical disks (aka drive impression interface) to the next layer of the stack.

Because of the uniform nature of drive impressions the stack can happen in any order.  A logical volume manager can sit on top of RAID, or RAID can sit on top of a logical volume manager, and of course you can skip one or the other, or both!

The concept of drive impressions or logical hard drives is powerful in its simplicity and allows us great potential for customizing storage systems however we need to make them.

Of course there are other uses of the logical drive concept as well.  One of the most popular and least understood is that of a SAN.  A SAN is nothing more than a device that takes one or more physical disks and presents them as logical drives (this presentation of a logical drive from a SAN is called a LUN) over the network.  This is, quite literally, all that a SAN is.  Most SANs will incorporate a RAID layer and likely a logical volume manager layer before presenting the final LUNs, or disk impressions, to the network, but that is not required to be a SAN.

This means, of course, that multiple SAN LUNs can be combined in a single RAID or controlled via a logical volume layer.  And of course it means that a SAN LUN, a physical hard drive, a RAID array, a logical volume, a partition…. can all be formatted with a filesystem as they are all different means of achieving the same result.  They all behave identically.  They all share the drive appearance interface.

To give a real world example of how you would often see all of these parts come together we will examine one of the most common “storage stacks” that you will find in the enterprise space.  Of course there are many ways to build a storage stack so do not be surprised if yours is different.  At the bottom of the stack is nearly always physical hard drives, which could include solid state drives.  These are located physically within a SAN.  Before leaving the SAN the stack will likely include the actual storage layer of the drives, then a RAID layer combining those drives into a single entity.  Then a logical volume layer to allow for features like growth and snapshots.  Then there is the physical demarcation between the SAN and the server, which is presented as the LUN.  The LUN then has a logical volume manager applied to it on the server / operating system side of the demarcation point.  Then on top of that is a filesystem, which is our final step, as the filesystem does not continue to present a drive appearance interface but a file interface instead.
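Following the same pattern as the earlier sketch, a small, self-contained Python model of the stack just described might look like the following.  Every class name and size is illustrative; the point is only that each layer consumes and presents the same minimal block-device behaviour, so the SAN-side and server-side layers compose freely.

```python
# Illustrative model of the stack: disks -> RAID -> logical volume -> LUN,
# with the server consuming the LUN exactly as it would a local drive.
from typing import List

class Disk:
    def __init__(self, size_bytes: int) -> None:
        self._size = size_bytes
    def capacity(self) -> int:
        return self._size

class Raid6:
    """SAN-side RAID layer: two drives' worth of parity across the set."""
    def __init__(self, members: List[Disk]) -> None:
        self._members = members
    def capacity(self) -> int:
        smallest = min(m.capacity() for m in self._members)
        return smallest * (len(self._members) - 2)

class LogicalVolume:
    """Volume manager layer: carves a flexible volume out of whatever is below."""
    def __init__(self, backing, size_bytes: int) -> None:
        assert size_bytes <= backing.capacity()
        self._size = size_bytes
    def capacity(self) -> int:
        return self._size

class Lun:
    """The demarcation point: what the SAN exports is still just a block device."""
    def __init__(self, backing) -> None:
        self._backing = backing
    def capacity(self) -> int:
        return self._backing.capacity()

# SAN side: twelve 4 TB disks -> RAID 6 -> logical volume -> LUN.
san_raid = Raid6([Disk(4 * 10**12) for _ in range(12)])
lun = Lun(LogicalVolume(san_raid, 20 * 10**12))
# Server side: a server-side volume manager and finally a filesystem would
# sit on top of the LUN, ending the chain of drive impressions.
print(lun.capacity())    # the server sees a 20 TB "drive" to format
```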

Understanding drive appearance, or logical drives, and how these allow components to interface with each other to build complex storage subsystems is a critical building block of IT understanding and is widely applicable to a large number of IT activities.

The Jurassic Park Effect

“If I may… Um, I’ll tell you the problem with the scientific power that you’re using here, it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now …” – Dr. Ian Malcolm, Jurassic Park

When looking at building a storage server or NAS, there is a common feeling that what is needed is a “NAS operating system.”  This is an odd reaction, I find, since the term NAS means nothing more than a “fileserver with a dedicated storage interface,” or, in other words, just a file server with limited exposed functionality.  The reason that we choose physical NAS appliances is for the integrated support and sometimes for special, proprietary functionality (NetApp being a key example of this, offering extensive SMB and NFS integration and some really unique RAID and filesystem options, or Exablox offering fully managed scale out file storage and RAIN style protection.)  Using a NAS to replace a traditional file server is, for the most part, a fairly recent phenomenon and one that I have found is often driven by misconception or the impression that managing a file server, one of the most basic IT workloads, is special or hard.  File servers are generally considered the most basic form of server: traditionally they are what people meant when using the term server unless additional description was added, and they are the only form commonly integrated into the desktop (every Mac, Windows and Linux desktop can function as a file server and it is very common to do so.)

There is, of course, nothing wrong with turning to a NAS instead of a traditional file server to meet your storage needs, especially as some modern NAS options, like Exablox, offer scale out and storage options that are not available in most operating systems.  But it appears that the trend to use a NAS instead of a file server has led to some odd behaviour when IT professionals turn back to considering file servers again.  It is a cascading effect, I suspect: the reasons why a NAS is sometimes preferred, and the goal level thinking behind them, are lost while the resulting idea of “I should have a NAS” remains, so that when returning to look at file server options there is a drive to “have a NAS” regardless of whether there is a logical reason for feeling that this is necessary.

First we must consider that the general concept of a NAS is a simple one: take a traditional file server, simplify it by removing options and package it with all of the necessary hardware to make a simplified appliance, with all of the support included from the interface down to the spinning drives and everything in between.  Storage can be tricky when users need to determine RAID levels and drive types, monitor effectively, etc.  A NAS addresses this by integrating the hardware into the platform.  This makes things simple but can add risk, as you have fewer support options and less ability to fix or replace things yourself.  A move from a file server to a NAS appliance is almost exclusively about support and is generally a very strong commitment to a single vendor.  You choose the NAS approach because you want to rely on a vendor for everything.

When we move to a file server we go in the opposite direction.  A file server is a traditional enterprise server like any other.  You buy your server hardware from one vendor (HP, Dell, IBM, etc.) and your operating system from another (Microsoft, Red Hat, Suse, etc.)  You specify the parts and the configuration that you need and you have the most common computing model for all of IT.  With this model you generally are using standard, commodity parts allowing you to easily migrate between hardware vendors and between software vendors. You have “vendor redundancy” options and generally everything is done using open, standard protocols.  You get great flexibility and can manage and monitor your file server just like any other member of your server fleet, including keeping it completely virtualized.  You give up the vertical integration of the NAS in exchange for horizontal flexibility and standardization.

What is odd, therefore, is returning to the commodity model but seeking what is colloquially known as a NAS OS.  Common examples of these include NAS4Free, FreeNAS and OpenFiler.  This category of products is generally nothing more than a standard operating system (often FreeBSD as it has ideal licensing, or Linux because it is well known) with a “storage interface” put onto it and no special or additional functionality that would not exist with the normal operating system.  In theory they are a “single function” operating system that does only one thing.  But this is not reality.  They are general purpose operating systems with an extra GUI management layer added on top.  One could say the same thing about most physical NAS products themselves, but they typically include custom engineering even at the storage level, special features and, most importantly, an integrated support stack and true isolation of the “generalness” of the underlying OS.  A “NAS OS” is not a simpler version of a general purpose OS; it is a more complex, yet less functional, version of it.

What is additionally odd is that general OSes, with rare exception, already come with very simple, extremely well known and fully supported storage interfaces.  Nearly every variety of Windows or Linux server, for example, has included simple graphical interfaces for these functions for a very long time.  These included GUIs are often shunned by system administrators as being too “heavy and unnecessary” for a simple file server.  So it is even more unusual that adding a third party GUI, one that is not patched and tested by the OS team and not widely known and supported, would then be desired, as this goes against the common ideals and practices of using a server.

And this is where the Jurassic Park effect comes in – the OS vendors (Red Hat, Microsoft, Oracle, FreeBSD, Suse, Canonical, et al.) are giants with amazing engineering teams, code review, testing, oversight and enterprise support ecosystems.  The “NAS OS” vendors, by contrast, are generally very small companies, some with just one part time person, who stand on the shoulders of these giants and build something that they knew that they could but never stopped to ask if they should.  The resulting products are wholly negative compared to their pure OS counterparts: they do not make systems management easier, nor do they fill a gap in the market’s service offerings.  Solid, reliable, easy to use storage is already available; more vendors are not needed to fill this place in the market.

The logic often applied to looking at a NAS OS is that they are “easy to set up.”  This may or may not be true as easy, here, must be a relative term.  For there to be any value a NAS OS has to be easy in comparison to the standard version of the same operating system.  So in the case of FreeNAS, this would mean FreeBSD.  FreeNAS would need to be appreciably easier to set up than FreeBSD for the same, dedicated functions.  And this is easily true, setting up a NAS OS is generally pretty easy.  But this ease is only a surface comfort, and one of which IT professionals need to be quite aware.  Making something easy to set up is not a priority in IT; making something that is easy to operate and repair when there are problems is what is important.  Easy to set up is nice, but if it comes at the cost of not understanding how the system is configured and makes operational repairs more difficult it is a very, very bad thing.  NAS OS products routinely make it dangerously easy to put into production, in a storage role that is almost always the most critical or nearly the most critical in an environment, a product that IT has no experience with and likely no skill to maintain, operate or, most importantly, fix when something goes wrong.  We need exactly the opposite: a system that is easy to operate and fix.  That is what matters.  So we have a second case of “standing on the shoulders of giants” and building a system that we knew we could, but did not know if we should.

What exacerbates this problem is that the very people who feel the need to turn to a NAS OS to “make storage easy” are, by the very nature of the NAS OS, the exact people for whom operational support and the repair of the system is most difficult.  System administrators who are comfortable with the underlying OS would naturally not see a NAS OS as a benefit and avoid it, for the most part.  It is uniquely the people for whom it is most dangerous to run a not fully understood storage platform who are likely to attempt it.  And, of course, most NAS OS vendors earn their money, as we could predict, on post-installation support calls from customers who deployed, got stuck once they were in production and are now at the mercy of the vendor for exorbitant support pricing.  It is in the interest of the vendors to make it easy to install and hard to fix.  Everything is working against the IT pro here.

If we take a common example and look at FreeNAS we can see how this is a poor alignment of “difficulties.”  FreeNAS is FreeBSD with an additional interface on top.  Anything that FreeNAS can do, FreeBSD can do.  There is no loss of functionality by going to FreeBSD.  When something fails, in either case, the system administrator must have a good working knowledge of FreeBSD in order to make repairs.  There is no escaping this.  FreeBSD knowledge is common in the industry and getting outside help is relatively easy.  Using FreeNAS adds several complications, the biggest being that any and all customizations made by the FreeNAS GUI are special knowledge needed for troubleshooting on top of the knowledge already needed to operate FreeBSD.  So this is a larger knowledge set as well as more things to fail.  It is also a relatively uncommon knowledge set, as FreeNAS is a niche storage product from a small vendor while FreeBSD is a major enterprise IT platform (plus all use of FreeNAS is FreeBSD use, but only a tiny percentage of FreeBSD use is FreeNAS.)  So we can see that using a NAS OS just adds risk over and over again.

This same issue carries over into the communities that grow up around these products.  If you look to communities around FreeBSD, Linux or Windows for guidance and assistance you deal with large numbers of IT professionals, skilled system admins and those with business and enterprise experience.  Of course, hobbyists, the uninformed and others participate too, but these are the enterprise IT platforms and all the knowledge of the industry is available to you when implementing these products.  Compare this to the community of a NAS OS.  By its very nature, only people struggling with the administration of a standard operating system and/or storage basics would look at a NAS OS package, and so this naturally filters the membership in their communities down to the people from whom we would do best to avoid getting advice.  This creates an isolated culture of misinformation and misunderstandings around storage and storage products.  Myths abound, guidance often becomes reckless and dangerous and industry best practices are ignored as if decades of accumulated experience had never happened.

A NAS OS also, commonly, introduces lags in patching and updates.  A NAS OS will almost always and almost necessarily trail its parent OS on security and stability updates and will very often follow months or years behind on major features.  In one very well known scenario, OpenFiler, the product was built on an upstream non-enterprise base (RPath Linux) which lacked community and vendor support, failed and was abandoned, leaving downstream users, including everyone on OpenFiler, stranded without the ecosystem needed to support them.  Using a NAS OS means trusting not just the large, enterprise and well known primary OS vendor that makes the base OS but trusting the NAS OS vendor as well.  And the NAS OS vendor is orders of magnitude more likely to fail than the base OS vendors on whom they depend, even when they are basing their products on enterprise class base OSes.

Storage is a critical function and should not be treated carelessly and should not be ignored as if its criticality did not exist.  NAS OSes tempt us to install quickly and forget, hoping that nothing ever goes wrong or that we can move on to other roles or companies completely before bad things happen.  It sets us up for failure where failure is most impactful.  When a typical application server fails we can always copy the files off of its storage and start fresh.  When storage fails, data is lost and systems go down.

“John Hammond: All major theme parks have delays. When they opened Disneyland in 1956, nothing worked!

Dr. Ian Malcolm: Yeah, but, John, if The Pirates of the Caribbean breaks down, the pirates don’t eat the tourists.”

When storage fails, businesses fail.  Taking the easy route to setting up storage, ignoring the long term support needs and seeking advice from communities that have filtered out the experienced storage and systems engineers increases risk dramatically.  Sadly, the nature of a NAS OS is that the very reason that people turn to it (lack of deep technical knowledge to build the systems) is the very reason they must avoid it (even greater need for support.)  The people for whom NAS OSes are effectively safe to use, those with very deep and broad storage and systems knowledge, would rarely consider these products because for them they offer no benefits.

At the end of the day, while the concept of a NAS OS sounds wonderful, it is not a panacea; the value of a NAS does not carry over from the physical appliance world to the installed OS world, and the value of standard OSes is far too great for NAS OSes to effectively add real value.

“Dr. Alan Grant: Hammond, after some consideration, I’ve decided, not to endorse your park.

John Hammond: So have I.”