“If I may… Um, I’ll tell you the problem with the scientific power that you’re using here, it didn’t require any discipline to attain it. You read what others had done and you took the next step. You didn’t earn the knowledge for yourselves, so you don’t take any responsibility for it. You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now …” – Dr. Ian Malcolm, Jurassic Park
When looking at building a storage server or NAS, there is a common feeling that what is needed is a “NAS operating system.” This is an odd reaction, I find, since the term NAS means nothing more than a “file server with a dedicated storage interface,” or, in other words, just a file server with limited exposed functionality. The reason that we choose physical NAS appliances is for the integrated support and, sometimes, for special proprietary functionality (NetApp being a key example of this, offering extensive SMB and NFS integration and some really unique RAID and filesystem options, or Exablox, offering fully managed scale-out file storage and RAIN-style protection.) Using a NAS to replace a traditional file server is, for the most part, a fairly recent phenomenon and one that I have found is often driven by misconception or by the impression that managing a file server, one of the most basic IT workloads, is special or hard. File servers are generally considered the most basic form of server: traditionally they are what people meant when using the term “server” unless additional description was added, and they are the only form commonly integrated into the desktop (every Mac, Windows and Linux desktop can function as a file server, and it is very common for them to do so.)
There is, of course, nothing wrong with turning to a NAS instead of a traditional file server to meet your storage needs, especially as some modern NAS options, like Exablox, offer scale-out and storage options that are not available in most operating systems. But it appears that the trend to use a NAS instead of a file server has led to some odd behaviour when IT professionals turn back to considering file servers again. A cascading effect, I suspect, in which the reasons why a NAS is sometimes preferred, and the goal-level thinking behind them, are lost while the resulting idea of “I should have a NAS” remains, so that when returning to look at file server options there is a drive to “have a NAS” regardless of whether there is any logical reason for feeling that this is necessary.
First we must consider that the general concept of a NAS is a simple one: take a traditional file server, simplify it by removing options and package it with all of the necessary hardware to make a simplified appliance with all of the support included, from the interface down to the spinning drives and everything in between. Storage can be tricky when users need to determine RAID levels and drive types, monitor effectively, and so on; a NAS addresses this by integrating the hardware into the platform. This makes things simple but can add risk, as you have fewer support options and less ability to fix or replace things yourself. A move from a file server to a NAS appliance is almost exclusively about support and is generally a very strong commitment to a single vendor. You choose the NAS approach because you want to rely on one vendor for everything.
When we move to a file server we go in the opposite direction. A file server is a traditional enterprise server like any other. You buy your server hardware from one vendor (HP, Dell, IBM, etc.) and your operating system from another (Microsoft, Red Hat, SUSE, etc.) You specify the parts and the configuration that you need, and you have the most common computing model in all of IT. With this model you are generally using standard, commodity parts, allowing you to easily migrate between hardware vendors and between software vendors. You have “vendor redundancy” options, and generally everything is done using open, standard protocols. You get great flexibility and can manage and monitor your file server just like any other member of your server fleet, including keeping it completely virtualized. You give up the vertical integration of the NAS in exchange for horizontal flexibility and standardization.
What is odd, therefore, is returning to the commodity model but seeking what is colloquially known as a “NAS OS.” Common examples of these include NAS4Free, FreeNAS and OpenFiler. This category of products is generally nothing more than a standard operating system (often FreeBSD, as it has ideal licensing, or Linux, because it is well known) with a “storage interface” put onto it and no special or additional functionality that would not exist with the normal operating system. In theory they are a “single function” operating system that does only one thing. But this is not the reality. They are general purpose operating systems with an extra GUI management layer added on top. One could say the same thing about most physical NAS products themselves, but those typically include custom engineering even at the storage level, special features and, most importantly, an integrated support stack and true isolation of the “generalness” of the underlying OS. A “NAS OS” is not a simpler version of a general purpose OS; it is a more complex, yet less functional, version of it.
What is additionally odd is that general OSes, with rare exception, already come with very simple, extremely well known and fully supported storage interfaces. Nearly every variety of Windows or Linux server, for example, has included simple graphical interfaces for these functions for a very long time. These included GUIs are often shunned by system administrators as too “heavy and unnecessary” for a simple file server. So it is even more unusual that adding a third party GUI, one that is not patched and tested by the OS team and is not widely known and supported, would be desired, as this goes against the common ideals and practices of running a server.
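To make this concrete, here is a minimal sketch of what exposing a share with nothing but a stock OS's own tooling can look like. This assumes a standard Debian/Ubuntu-style Linux server using the distribution's Samba package; the share name, path and group are purely illustrative:

```shell
# Install the distribution's own Samba package (Debian/Ubuntu shown;
# Red Hat family systems would use dnf/yum instead).
apt-get install samba

# Append a share definition to the stock, well-documented config file.
cat >> /etc/samba/smb.conf <<'EOF'
[shared]                       ; illustrative share name
   path = /srv/shared          ; illustrative path
   valid users = @staff        ; illustrative group
   read only = no
EOF

mkdir -p /srv/shared
systemctl enable --now smbd    # start the service and persist it across reboots
```

That is the entire “storage interface”: one package from the OS vendor's repositories, one config file, and the same service management used for every other role the server fills.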
And this is where the Jurassic Park effect comes in: the OS vendors (Red Hat, Microsoft, Oracle, FreeBSD, SUSE, Canonical, et al.) are giants with amazing engineering teams, code review, testing, oversight and enterprise support ecosystems, while the “NAS OS” vendors are generally very small companies, some with just one part time person, who stand on the shoulders of these giants and build something that they knew they could, but never stopped to ask if they should. The resulting products are wholly negative compared to their pure OS counterparts: they do not make systems management easier, nor do they fill a gap in the market’s service offerings. Solid, reliable, easy to use storage is already available; more vendors are not needed to fill this place in the market.
The logic often applied to looking at a NAS OS is that it is “easy to set up.” This may or may not be true, as “easy” here must be a relative term. For there to be any value, a NAS OS has to be easy in comparison to the standard version of the same operating system. So in the case of FreeNAS, this would mean FreeBSD: FreeNAS would need to be appreciably easier to set up than FreeBSD for the same, dedicated functions. And this is easily true; setting up a NAS OS is generally pretty easy. But this ease is a trap, and one of which IT professionals need to be quite aware. Making something easy to set up is not a priority in IT; making something that is easy to operate and repair when there are problems is what is important. Easy to set up is nice, but if it comes at the cost of not understanding how the system is configured, and makes operational repairs more difficult, it is a very, very bad thing. NAS OS products routinely make it dangerously easy to put into production, in a storage role (almost always the most critical, or nearly the most critical, role of any server in an environment), a product that IT has no experience with and likely no skill to maintain, operate or, most importantly, fix when something goes wrong. We need exactly the opposite: a system that is easy to operate and fix. That is what matters. So we have a second case of “standing on the shoulders of giants” and building a system that we knew we could, but did not stop to ask if we should.
What exacerbates this problem is that the very people who feel the need to turn to a NAS OS to “make storage easy” are, by the very nature of the NAS OS, the exact people for whom operational support and repair of the system are most difficult. System administrators who are comfortable with the underlying OS would naturally not see a NAS OS as a benefit and, for the most part, avoid it. It is uniquely the people for whom running a not fully understood storage platform is most dangerous who are likely to attempt it. And, of course, most NAS OS vendors earn their money, as we could predict, on post-installation support calls from customers who deployed easily, got stuck once they were in production and were left at the mercy of the vendor and its exorbitant support pricing. It is in the interest of the vendors to make products easy to install and hard to fix. Everything is working against the IT pro here.
If we take a common example and look at FreeNAS, we can see how this is a poor alignment of “difficulties.” FreeNAS is FreeBSD with an additional interface on top. Anything that FreeNAS can do, FreeBSD can do; there is no loss of functionality by going to FreeBSD. When something fails, in either case, the system administrator must have a good working knowledge of FreeBSD in order to effect repairs. There is no escaping this. FreeBSD knowledge is common in the industry and getting outside help is relatively easy. Using FreeNAS adds several complications, the biggest being that any and all customizations made by the FreeNAS GUI are special knowledge needed for troubleshooting, on top of the knowledge already needed to operate FreeBSD. So this is a larger knowledge set as well as more things to fail. It is also a relatively uncommon knowledge set, as FreeNAS is a niche storage product from a small vendor while FreeBSD is a major enterprise IT platform (plus, all use of FreeNAS is FreeBSD use, but only a tiny percentage of FreeBSD use is FreeNAS.) So we can see that using a NAS OS just adds risk over and over again.
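As an illustration of the “no loss of functionality” point, here is a hedged sketch of a plain FreeBSD box doing the core FreeNAS job of serving a ZFS dataset over NFS, using only the base system's own tools and config files. The pool name, disk device names and client network below are assumptions for the example:

```shell
# Create a mirrored ZFS pool from two disks (device names are illustrative).
zpool create tank mirror ada1 ada2
zfs create tank/shares

# Enable the stock NFS server via the standard rc.conf mechanism.
sysrc nfs_server_enable="YES"
sysrc rpcbind_enable="YES"
sysrc mountd_enable="YES"

# Let ZFS itself publish the export (client network is illustrative).
zfs set sharenfs="-network 192.168.1.0/24" tank/shares

service nfsd start
```

Everything above is documented by the FreeBSD project itself, and anyone troubleshooting it later is working with the platform as the whole industry knows it, with no GUI-generated configuration layered in between.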
This same issue carries over into the communities that grow up around these products. If you look to the communities around FreeBSD, Linux or Windows for guidance and assistance, you deal with large numbers of IT professionals, skilled system admins and those with business and enterprise experience. Of course hobbyists, the uninformed and others participate too, but these are the enterprise IT platforms, and all the knowledge of the industry is available to you when implementing these products. Compare this to the community of a NAS OS. By its very nature, only people struggling with the administration of a standard operating system and/or with storage basics would look at a NAS OS package, and so this naturally filters the membership of these communities down to exactly the people from whom we would do best to avoid taking advice. This creates an isolated culture of misinformation and misunderstanding around storage and storage products. Myths abound, guidance often becomes reckless and dangerous, and industry best practices are ignored as if decades of accumulated experience had never happened.
A NAS OS also commonly introduces lags in patching and updates. A NAS OS will almost always, and almost necessarily, trail its parent OS on security and stability updates, and will very often follow months or years behind on major features. In one very well known scenario, OpenFiler, the product was built on an upstream non-enterprise base (rPath Linux) which lacked community and vendor support, failed and was abandoned, leaving downstream users, including everyone on OpenFiler, without the ecosystem needed to support them. Using a NAS OS means trusting not just the large, well known enterprise vendor that makes the base OS but the NAS OS vendor as well. And the NAS OS vendor is orders of magnitude more likely to fail, even when basing its products on an enterprise-class base OS.
Storage is a critical function and should not be treated carelessly, as if its criticality did not exist. NAS OSes tempt us to install quickly and forget, hoping that nothing ever goes wrong, or that we can move on to other roles or other companies entirely before bad things happen. They set us up for failure where failure is most impactful. When a typical application server fails, we can always copy the files off of its storage and start fresh. When storage fails, data is lost and systems go down.
“John Hammond: All major theme parks have delays. When they opened Disneyland in 1956, nothing worked!
Dr. Ian Malcolm: Yeah, but, John, if The Pirates of the Caribbean breaks down, the pirates don’t eat the tourists.”
When storage fails, businesses fail. Taking the easy route to setting up storage, ignoring the long term support needs and seeking advice from communities that have filtered out the experienced storage and systems engineers increases risk dramatically. Sadly, the nature of a NAS OS is that the very reason people turn to it (a lack of deep technical knowledge with which to build the systems) is the very reason they must avoid it (an even greater need for support.) The people for whom NAS OSes are effectively safe to use, those with very deep and broad storage and systems knowledge, would rarely consider these products, because for them the products offer no benefits.
At the end of the day, while the concept of a NAS OS sounds wonderful, it is not a panacea. The value of a NAS does not carry over from the physical appliance world to the installed OS world, and the value of standard OSes is far too great for a NAS OS to add real value on top of them.
“Dr. Alan Grant: Hammond, after some consideration, I’ve decided, not to endorse your park.
John Hammond: So have I.”
7 thoughts on “The Jurassic Park Effect”
Great write up. I love how you tied the Jurassic Park motif in with the actual topic. This was something I had never realized about NAS OSes. It was insightful and has a very Spock-like (logical) approach to questioning what you THINK you need. It really speaks to me because a pet peeve of mine is people doing something just because they can, disregarding whether they SHOULD. For example, deep-frying candy bars or making bacon sundaes. You CAN do it, but WHY?
A further development of this trend for NAS OSes is running a NAS OS on very restrictive and underpowered hardware, like a Raspberry Pi running a file server and maybe a media streaming library and duties of that ilk to boot. I find this ultimately absurd, suffering from all the restrictions of the worst capabilities of hardware and software combined.
I thoroughly enjoyed reading that. Thanks.
Anything that FreeNAS can do, FreeBSD an do.
I assume there should be “can do”.
Great article. Thank you.
Thanks for sharing those thoughts. Now I suddenly understand why I’m always doubtful about picking the right solution to “solve a problem” or fulfill this kind of “need.”
Even with a NAS solution bought from a vendor (like QNAP, Netgear, Synology) I’m always wondering: what if the hardware breaks down? Are the disks still readable in newer hardware?
For that reason I also looked into FreeNAS and others, but I am still doubting those solutions. This article certainly makes me think about this again and might get me to go in an entirely different direction (FreeBSD, for example.)
I appreciate the article and understand what you are saying. I do have file servers on real operating systems at present. However, I have a case in which I simply need two chunks of simple storage and less time getting them running.
I have backup software that runs throughout the day taking delta block snapshots, and, having lost a CentOS 7 RAID 5 and then a RAID 6 as the storage, I had thought of throwing a NAS device at two locations connected by fiber and putting the backup software on a third machine so it writes to both in a redundant fashion. The only purpose of these two boxes would be storage space for this one backup system and the time saved in getting things up and running. In this scenario, where it is not a general file server and not the only backup strategy in place, would you recommend two hardware NAS boxes simply for saving time?