
Choosing an Open Storage Operating System

It is becoming increasingly common to forgo traditional, proprietary storage devices, both NAS and SAN, and instead use off the shelf hardware with a storage operating system installed on it to build what many call “do it yourself” storage servers.  This is, of course, a misnomer, since no one calls a normal file server “do it yourself” just because you installed Windows yourself.  Storage has a lot of myth and legend swirling around it and people often panic when they think of installing Windows and calling it a NAS rather than calling it a file server.  So, if it makes you feel better, use terms like file server or storage server rather than NAS and SAN – problem solved.  This is part of the “open storage” movement – moving storage systems from proprietary to standard.

Choosing the right operating system for a storage server is important and not always easy.  I work extensively in this space and people often ask me what I recommend; the recommendations vary by scenario and can seem confusing.  But the factors are actually relatively easy to weigh once you know the limitations that create the choices and paths in the decision tree.

Before choosing an OS we must stop and consider what our needs are going to be.  Some areas that need to be considered are capacity, performance, ease of administration, budget, connection technology and clustering.  There are two main categories of systems that we will consider as well: standard operating systems and storage appliance operating systems.  The standard operating systems are Windows, Linux, Solaris and FreeBSD.  The storage appliance operating systems are FreeNAS, OpenFiler and NexentaStor.  There are others in both categories but these are the main players currently.

The first decision to be made is whether or not you or your organization is comfortable supporting a normal operating system in a storage server role.  If you are looking at NAS then simply ask yourself if you could administer a file server.  Administering a block storage server (SAN) is a little more complex or, at least, unusual, so this might induce a small amount of concern, but it is really in line with other administration tasks.  If the answer is yes – that using normal operating system tools and interfaces is acceptable to you – then simply rule out the “appliance” category right away.  The appliance approach adds complexity and slows development and support cycles, so unless it is necessary it is undesirable.

Storage appliance operating systems exist only to provide a pre-packaged, “easy to use” view into running a storage server.  In concept this is nice, but there are real problems with this method.  The biggest problems come from the packaging process, which pulls you a step away from the enterprise OS vendors themselves, leaving your system more fragile, further behind in updates and features and less secure than its traditional OS counterparts.  It also leaves you at the mercy of a very small company for OEM-level support when something goes wrong rather than a large enterprise vendor with a massive user base and community.  The appliancization process also strips features and options from the systems by necessity.  In the end, you lose.

Appliances are nice because you get a convenient web interface from which “anyone” can administer your storage.  At least in theory.  But in reality there are two concerns.  The first is that there is always a need to drop into the operating system itself and fix things every once in a while.  The custom web interface of the appliance makes this dramatically harder than normal, so at exactly the moment when you most need the appliance nature of the system, you do not have it.  The second is that making something as critical as storage available for “anyone” to work on is a terrifying thought.  There are few pieces of your infrastructure where you want more experience, planning and care taken than in storage.  Making the system harder to use is not always a bad thing.

If you are in need of an appliance system then you are primarily looking at FreeNAS and OpenFiler.  NexentaStor offers a compelling product but it is not available in a free version and the cost can be onerous.  The freely downloadable version appears to be free for the first 18TB of raw storage but the license states otherwise, which makes it rarely the popular choice.  (The cost of NexentaStor is high enough that purchasing a fully supported Solaris system would be less costly and would provide full support from the original vendor rather than from Nexenta, which is essentially repackaging old versions of Solaris and ZFS.  More modern code and updates are available less expensively from the original source.)

FreeNAS, outside of clustering, is the storage platform of choice in an appliancized package.  It has the much touted ZFS filesystem, which gives it flexibility and ease of use lacking in OpenFiler and other Linux-based alternatives.  It also has a working iSCSI implementation, so you can use FreeNAS safely as either a NAS or a SAN.  Support for FreeNAS appears to be increasing, with new developments being made regularly and features being retained.  FreeNAS offers a large range of features and supported protocols.  It is believed that clustering will be coming to FreeNAS in the future as well, since this capability has recently been added to the underlying FreeBSD operating system.  If so, FreeNAS will completely eliminate the need for OpenFiler in the marketplace.  FreeNAS is completely free.

OpenFiler lacks a reliable iSCSI SAN implementation (unless you pay a fortune to have that part of the system replaced with a working component) and is far more out of date than its competitors, but it does offer full block-level real-time replication, allowing it to operate in a clustered mode for reliability.  The issue is that the handy web interface of the NAS appliance does not address this scenario, so if you want to do this you will need to get your hands dirty on the command line, very dirty indeed.  This is expert level work, and anyone capable of even considering a project to make OpenFiler into a reliable cluster will be just as comfortable, and likely far more comfortable, building the entire cluster from scratch on their Linux distribution of choice.  OpenFiler is built on the rather unpopular, and now completely discontinued, rPath Linux using the Conary packaging system, both of which are niche players, to say the least, in the Linux world.  You will find little rPath support from other administrators, and many packages and features that you may wish to have access to are unavailable.  OpenFiler’s singular advantage of any significance is the availability of DRBD for clustering which, as stated above, is nonsensical.  Support for OpenFiler appears to be waning, with new features being non-existent and, in fact, key features like AFP support having been dropped rather than new features being added.  OpenFiler is free but key features, like reliable iSCSI, are not.  Recent reports from OpenFiler users are that even non-iSCSI storage has become unstable in the latest release and that losing data is a regular occurrence.  OpenFiler remains very popular in the mindshare of this industry segment but should be avoided completely.

If you do not need to have your storage operating system appliancized then you are left with more and better choices, but a far more complex decision tree.  Unlike the appliance OS market, which is filled with potholes (NexentaStor has surprise costs, OpenFiler appears to support iSCSI but causes data loss, features get removed from new versions), all four operating systems mentioned here are extremely robust and feature rich.  Three of them have OEM vendor support, which can be a major deciding factor, and all have great third party support options far broader than what is available for the appliance market.

The first decision is whether or not Windows-only features, notably NTFS ACLs, are needed.  It is common for new NAS users to be surprised when the SMB protocol does not provide all of the granular filesystem control that they are used to in Windows.  This is because those controls are actually handled by the filesystem, not the network protocol, and Windows alone provides these via NTFS.  So if that granular Windows file control is needed, Windows is your only option.

The other three entrants, Linux, Solaris and FreeBSD, all share basic capabilities, with the notable exception of clustering.  All have good software RAID, all have powerful and robust filesystems, all have powerful logical volume management and all provide a variety of NAS and SAN connection options.  Many versions of Linux and FreeBSD are available completely free.  Solaris, while free for testing, is not available for free for production use.
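To make the Linux side of that concrete, here is a minimal sketch of building the kind of stack described above – software RAID under LVM under a standard filesystem – by driving the usual command line tools from Python.  The member disks, array level, volume size and mount point are placeholder assumptions, not a recipe for your hardware.

#!/usr/bin/env python
# Illustrative sketch only: build a Linux software RAID-6 array, layer LVM on
# top of it and format a logical volume for NAS duty, by driving the standard
# mdadm and LVM command line tools.  The member disks, array level, volume
# size and mount point below are placeholder assumptions.
import subprocess

def run(cmd):
    """Echo a command and run it, stopping on the first failure."""
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

disks = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# 1. Assemble the member disks into a RAID-6 array.
run(["mdadm", "--create", "/dev/md0", "--level=6",
     "--raid-devices=%d" % len(disks)] + disks)

# 2. Layer LVM on the array: physical volume, volume group, logical volume.
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "vg_storage", "/dev/md0"])
run(["lvcreate", "-L", "500G", "-n", "lv_shares", "vg_storage"])

# 3. Create a filesystem and mount it where the NAS shares will live.
run(["mkfs.ext4", "/dev/vg_storage/lv_shares"])
run(["mount", "/dev/vg_storage/lv_shares", "/srv/shares"])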

The biggest differentiator between these three OS options is clustering.  Linux has had DRBD, a robust block-level replication technology for storage clustering, for a long time now.  FreeBSD has recently (as of 9.0) added HAST to serve the same purpose.  So, in theory, FreeBSD has the same clustering options as Linux, but HAST is much newer and much less well known.  Solaris lacks this sort of clustering in the base OS and requires commercial add-ons to handle it at this time.
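For a sense of what living with DRBD on a Linux storage node looks like – the sort of plumbing an appliance web interface hides from you – here is a small, illustrative sketch that reads the kernel’s /proc/drbd status file and reports whether each resource is connected and in sync.  It assumes the common DRBD 8.x status format and an already configured resource; treat it as an example, not a monitoring tool.

#!/usr/bin/env python
# Minimal sketch, assuming a Linux node with DRBD already configured:
# read /proc/drbd and report, per resource, whether replication is
# connected and both sides are up to date (common DRBD 8.x format).

def drbd_status(path="/proc/drbd"):
    resources = {}
    with open(path) as status:
        for line in status:
            line = line.strip()
            # Resource lines look like:
            # "0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----"
            if not line or ":" not in line or not line.split(":", 1)[0].isdigit():
                continue
            number, rest = line.split(":", 1)
            fields = dict(
                item.split(":", 1) for item in rest.split() if ":" in item
            )
            resources[int(number)] = {
                "connection": fields.get("cs"),
                "roles": fields.get("ro"),
                "disks": fields.get("ds"),
            }
    return resources

if __name__ == "__main__":
    for number, state in sorted(drbd_status().items()):
        healthy = (state["connection"] == "Connected"
                   and state["disks"] == "UpToDate/UpToDate")
        print("resource %d: %s (%s, %s)" % (
            number, "OK" if healthy else "DEGRADED",
            state["connection"], state["roles"]))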

Solaris and FreeBSD share the powerful and battle tested ZFS filesystem.  ZFS is extremely powerful and flexible and has long been the key selling point of these platforms.  Linux’s filesystem support is more convoluted.  Nearly any Linux distribution (we care principally about RHEL/CentOS, Oracle Unbreakable Linux, SUSE/openSUSE and Ubuntu here) supports EXT4, which is powerful and fast but lacks some of the really nice ZFS features.  However, Linux is rapidly adopting BtrFS, which is very competitive with ZFS but is nascent and currently only available in the SUSE and Oracle Linux distros.  We expect to see it from the others soon for production use but at this time it is still experimental.
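To illustrate why ZFS is such a selling point, here is a comparable sketch of the same kind of build-out on FreeBSD or Solaris.  A single pool creation takes the place of the separate RAID, volume management and filesystem steps shown earlier, and snapshots are built in.  Again, the disk names and dataset names are placeholder assumptions.

#!/usr/bin/env python
# Illustrative sketch only: the equivalent storage build-out with ZFS.
# Disk names (da0..da3) and dataset names are placeholder assumptions.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

# A single command builds a double-parity (raidz2) pool from four disks.
run(["zpool", "create", "tank", "raidz2", "da0", "da1", "da2", "da3"])

# Datasets behave like individually tunable filesystems within the pool.
run(["zfs", "create", "tank/shares"])
run(["zfs", "set", "compression=on", "tank/shares"])

# Instant, low-cost snapshots: a feature EXT4 does not offer on its own.
run(["zfs", "snapshot", "tank/shares@nightly"])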

Outside of clustering, the choice among these three will likely come down primarily to experience and comfort.  Solaris is generally known for providing the best throughput and FreeBSD the worst, but all three are quite close.  Once BtrFS is widely available and stable on Linux, Linux will likely become the de facto choice as it has been in the past.

Without external influence, my recommendations for a storage platform are FreeBSD first and then Linux, with Solaris eliminated on the basis that rarely is anyone looking for commercial support, so it is ruled out automatically.  This is based almost entirely on the availability of copy-on-write filesystems and assumes no clustering, which is the common case.  If clustering is needed then the order is Linux first, then FreeBSD, with Solaris ruled out again.

Linux and FreeBSD are rapidly approaching each other in functionality.  As BtrFS matures on Linux and HAST matures on FreeBSD they seem to be meeting in the middle with the choice being little more than a toss up.

There is no single, simple answer.  Choosing a storage OS is all about balancing myriad factors: performance, resources, features, support, stability and more.  There are a few factors that can be used to rule out many contenders and knowing these hard delimiters is key.  Knowing exactly how you plan to use the system and which factors matter to you is essential in weeding through the available options.

Even once you pick a platform there are many decisions to make.  Some platforms include multiple file systems.  There is SAN and NAS.  There are multiple SAN and NAS protocols.  There is network bonding (or teaming, in the Windows world).  There is multipathing.  There are snapshots, volumes, RAID.  The list goes on and on.


Linux Virtualization Deployment Advantage

As more and more businesses begin to deploy virtualization broadly, we must begin to step back and reconsider the opportunities presented to us by this shift in datacenter architecture.  Virtualization comes with new challenges and potential not only for cost savings but for aggressive project implementation.  Small businesses especially, when using virtualization, find themselves prepared for projects that they could never have envisioned during the era of physical-only servers.

The big winners in this space of emerging virtualization opportunity are the open source operating systems such as Linux, OpenSolaris and FreeBSD.  The reason that these particular operating systems have unique opportunities that Windows and Mac OS X do not is the way that they are, or can be, licensed.  Each of these operating systems has an option by which it is available completely for free – something that cannot be said of Windows or Mac OS X.

Traditionally, when purchasing a new server a business would price out expensive hardware with relatively inexpensive software.  An enterprise operating system, such as Windows, would typically represent a relatively small percentage of the cost of a new server.  Even a small server would cost a few thousand dollars and Windows Server can easily be purchased for less than one thousand dollars.  In this scenario a business looking to purchase a new server would see only a very small cost savings in opting for a “free” operating system since introducing a new OS has its own risks and the bulk of the cost of the new server is in the hardware which would still need to be purchased.

Given that equation, only a rare small business would consider the purchase of a non-Windows-based server.  The risk of failure is too high given the change involved and the cost savings are too small.  Today, though, virtualization is commonplace and becoming more ubiquitous every day.  Businesses virtualizing their infrastructure typically have excess capacity on their servers that is going unused.  As these businesses and their IT departments begin to look to utilize this spare capacity they will increasingly find that the cost of deploying a virtualized Windows Server remains high while the cost of deploying a virtualized Linux or OpenSolaris server is nominal – generally nothing more than the effort to do so, with no capital expenditure or its associated risk.

The ability to deploy new servers, at any time, without any cost is a significant advantage that companies have not begun to truly comprehend.  If a business wants a new web server, for instance, they can have one provisioned and built in thirty minutes without buying any licenses.  Having redundant virtualization hardware means that a redundant web server can be had as well – again without any capital cost.  Unlike Windows (or other commercial operating systems) there is no need to purchase a second license just to have a backup server.
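As a sketch of just how small that provisioning effort can be, here is what defining and booting a new guest might look like on an existing KVM host using the libvirt Python bindings.  The guest name, memory size, network and disk image path are placeholder assumptions, and the disk image is assumed to already exist (for example, cloned from a template).

#!/usr/bin/env python
# Hypothetical sketch: define and start a new guest on an existing KVM host
# through the libvirt Python bindings.  Names, sizes and paths below are
# placeholders, and the qcow2 image is assumed to exist already.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>web01</name>
  <memory>1048576</memory>   <!-- 1 GB, expressed in KiB -->
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/web01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
domain = conn.defineXML(DOMAIN_XML)     # register the guest definition
domain.create()                         # boot it, with no license to buy
print("Guest '%s' is running." % domain.name())
conn.close()

The same definition, with a different name and disk image, could be registered on a second host to stand up the redundant copy mentioned above, again at no licensing cost.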

This means that for the first time many businesses can begin to consider clusters as well.  Typically the cost of licensing software for clustering has been prohibitive, but if that licensing is free then clusters suddenly become very attractive options.

Of course, as open source proponents will point out, the low cost of Linux and other free and open source solutions has long been a reason to move to these platforms, but this discounts the incredible shift in pricing structure that occurs only when spare usable capacity meets the previously existing free licenses.  It is only because so many businesses have already implemented virtualization strategies, or are in the process of doing so, that this new opportunity truly presents itself.

The first challenge will be in getting businesses to begin to think of operating systems and application platforms as being free.  The ways in which businesses may take advantage of this have yet to be seen.  Businesses are so used to being hamstrung by the need to buy new hardware and expensive server software licenses for every new system deployment that the widespread availability of spare capacity for new server instances is quite novel indeed.

Of course, as with many new technology changes, it is the small and medium business space where the greatest change will likely take place.  Large enterprises are already doing datacenter consolidation and do not necessarily have spare capacity available to them, as their capacity plans already take virtualization into account.  But in the smaller business space, where capacity planning is a practically non-existent practice, we see a different type of opportunity.

What we typically see in small businesses moving to virtualization is an over-purchasing of hardware.  This generally comes from a misunderstanding of how capacity planning and virtual guest interaction will occur in the virtualized environment but also from a desire to err on the side of overpowered versus underpowered and the nature of virtualization capacity planning being a bit of a “black art”.  Because of this, however, many small businesses have server resources sitting idle.  It is not uncommon to see a powerful server virtualizing just two server instances when there is capacity to virtualize a dozen or more.

It is this overprovisioning of hardware that offers unique opportunity.  Many small businesses, and even medium sized businesses, may manage to effectively virtualize their entire existing server infrastructure leaving no further opportunity for cost savings through consolidation.  At this point the spare capacity of the existing servers offers no further cost savings and can now be viewed as capacity for growth instead.

This raises the question: what new deployment opportunities exist given this spare capacity?  The question is difficult to answer, as it will be different for nearly every business, but we can look at some commonalities to build a rough picture of where we may see new value presenting itself.

The most obvious new opportunity is in new web applications.  Small businesses often would like to take advantage of free web-based applications but do not want to risk deploying new, low-priority applications to their existing Windows-based web server or do not even have a server available to do so.  Creating one or more open source application servers is incredibly simple.  Deploying a wiki, corporate web portal, a blogging engine or news site, a bug or incident tracking application, a microblogging platform (à la laconi.ca), CRM, ERP or any of thousands of similar applications can be done quickly and easily with minimal cost using only “spare” time from the existing IT resources.  Any number of internal applications such as these could bring value to the company and produce very little impact on a virtualization platform, so many could be deployed utilizing only a small amount of excess capacity.

Beyond obvious web apps there are more feature-rich systems that could be deployed for no cost.  A great example is the OpenFire instant messaging and presence server.  Companies can suddenly roll out complete enterprise class, secure, internal instant messaging applications at no cost whatsoever.  Another example is in monitoring systems such as Nagios, Zenoss or Zabbix – all of which are available for free and represent a real benefit for companies that currently have no such system.  Enterprise monitoring completely for free.

Beyond new applications there is also an “environmental” benefit to be had.  In an enterprise environment, changes going into production go through a series of testing stages.  Typically big businesses will maintain a development server environment, a user acceptance testing environment and then the production environment.  For a small business to do this with Windows is extremely cost prohibitive, as the servers in each environment need to be licensed.  But with open source servers virtualized on spare capacity, deploying virtual servers for each of these environments is completely free and allows small businesses to test their own changes before they reach production, giving them added stability previously unaffordable to them.

After all of these growth benefits there is one additional benefit to consider – flexibility.  Because these new systems can be deployed and tested at no cost, small shops have a new opportunity to evaluate open source solutions that may replace expensive Windows solutions that they are currently using.  This could include replacing Exchange with Zimbra, IIS with Apache or Active Directory with an LDAP server.  Doing a project like this would be risky and potentially costly if the hardware and software had to be purchased up front.  But if the project can be done using only spare time from the existing IT department, and as a free “proof of concept” before moving to a pilot and then a full production replacement, then risk can be minimized and the entire project can be effectively free.

While a full architectural replacement may be very aggressive for an average small business, it also represents a very significant potential cost savings.  Moving completely to open source systems is not for everyone and should be evaluated carefully.  The ability to evaluate a project of this magnitude, for free, is very important, and small businesses should consider doing so to be sure that they are using the systems that make the most sense for their business model and needs rather than simply the solutions with which they are already familiar or which are already in place.

There are many additional ways in which free and open source products, deployed using existing, excess server capacity, can be used to expand the IT infrastructure of small businesses.  Learning to seek out opportunities rather than only seeking cost savings from IT is a new process for most small businesses and requires some relearning, but those that take the time to pursue these opportunities have many benefits to gain.

Considering NetBooks for Small Business

There really is not any question about whether or not NetBooks will be an important tool for businesses of all sizes – they will be.  The upsides to NetBooks are too big to overlook: they are highly portable, generally more rugged than their laptop counterparts due to size, light weight, easier to store and transport and mostly quite inexpensive compared to traditional laptops.  There are exceptions to any rule, but the prototypical NetBook is dramatically smaller than a traditional laptop, weighs only one to two pounds (under a kilogram) and often costs no more than seventy percent as much as a laptop (any price comparison is massively subjective for obvious reasons).

The question is not whether or not NetBooks are a good idea, but whether or not the NetBook market is ready for the enterprise (or, in our case, the SMB).  While the idea of NetBooks has been around for quite some time, the realization of the market has only begun to take effect within the past two years.  The NetBook was originally developed by Psion in 2000 but they exited the market in 2003.  The next big push came from the OLPC (One Laptop Per Child) project, an extremely low cost, ruggedized, Linux-based NetBook available for just $199USD.  With the development of the OLPC and the ecosystem of suppliers and developers that it fostered, the low-cost, portable Internet device market was set to explode.

The big news for normal consumers came in 2007 when Asus, a major Taiwanese manufacturer famous for their high-quality motherboards, released their EEE PC line of NetBooks and, later, NetTops.  The EEE PC proved to be a major hit with consumers because of its low price tag, attractive looks and size.  Once the market was identified, many manufacturers jumped in, with top-tier manufacturers like Acer, Lenovo, Dell and HP finally in the market now as well, albeit generally through their consumer divisions and not their commercial divisions.

Today we are in a rapidly maturing consumer NetBook market.  This means that NetBooks are well established, widely available and stable but, thus far, only in configurations designed for consumer use.  This presents our first barrier when considering these devices for the workplace.

With only rare exception, NetBooks ship with either consumer versions of Microsoft Windows (e.g. XP Home, Vista Home) or with non-enterprise versions of Linux (e.g. Linpus, Mandriva).  To be sure, there are a few machines that ship with appropriately enterprise class operating systems like Vista Business or SUSE Linux, but mostly the operating system that you find on a NetBook is not the same as you would require in your business.  (Many niche NetBook manufacturers do ship with Ubuntu or Fedora, which are acceptable to many businesses, but these are rare as well.)

In some cases, such as the very popular Acer Aspire One, it is quite easy for an IT department to establish its own operating system image and apply it to the NetBook.  This is hardly a cost effective approach for a small shop to take, however.  It is only effective under very specific circumstances or for very large organizations that will be rolling out a large number of identically imaged machines and can spread the cost out over the group.

In the case of the Acer Aspire One we have a very well built unit that runs either Linpus Linux (a derivative of Fedora 8) or Windows XP Home.  Windows Home editions cannot join a domain and so cannot be integrated into business environments, so we can rule out that option completely.  The cost of obtaining an additional XP Pro license would be prohibitive on hardware that is so inexpensive.

The Linpus model is significantly less expensive than the Windows XP Home model and can be outfitted with a custom build of Fedora 10 replacing the included system at no additional external expense.  This does require a rather knowledgeable Linux engineer and takes many hours to perfect and test – most likely a few days of labor at a minimum.  Only large shops with good internal Linux expertise, or smaller shops with IT outsourcing partners who have the necessary expertise, should attempt to go down this path, as it leaves you completely without any form of vendor support.  It also requires your IT department to monitor and support an additional operating system image unless you have already standardized on Fedora – which is not very common.  There are other options, such as installing openSUSE or an Ubuntu variant, but these require additional work since Linpus is derived from Fedora, which installs easily onto the device.

Using Linux-based NetBooks often presents another problem.  On a normal corporate desktop running Linux it is most common to find either KDE or Gnome running as the desktop.  These are the two most popular, full featured desktop environments for the UNIX platforms and, to most users, it is the choice of KDE or Gnome that establishes the familiarity with the environment and not the underlying operating system.  Because of this, users who have used KDE on SUSE Linux can often be switched to KDE on PC-BSD without the user even realizing that the operating system has changed (Linux to FreeBSD.)  But NetBooks are often underpowered when it comes to running these heavy desktops and so alternatives are generally recommended.  Most commonly today we see XFCE chosen as a lightweight desktop environment alternative but even lighter options exist such as IceWM.  These environments can make NetBooks very usable instead of being slow and cumbersome but they do cause users to face potentially unfamiliar interfaces that can lead to additional support needs and possibly even training.

Having NetBooks available for a certain class of highly mobile or continuously on-call personnel can make a lot of sense.  The advantages are very real and, while some users are put off by the small screens and keyboards and dislike the lack of high-performance hardware, many users adore the portability and ease of use of these small devices.  If having a NetBook makes the difference between staff being able to work or having to disconnect from the office then the NetBooks will easily pay for themselves.

For most businesses I feel that we are still in a phase of early adoption when it comes to NetBooks.  The hardware itself is well tested and widely available but the software is mostly not ready at this time.  In the next two years I expect that we will see a lot of advances in the market, especially as AMD and NVidia are expected to begin entering the market in force during this time, along with other potential players who currently have had very little input into the market, such as Freescale.

Currently, and for the near future, businesses looking to NetBooks need to make a commitment, almost across the board, to using Linux rather than Windows.  The Windows operating system is just not ready to handle the NetBook market, and Microsoft will likely wait until NetBooks catch up to modern laptops in performance before really looking to enter the enterprise NetBook market.  In the meantime, however, alternative architectures, such as PowerPC, ARM and MIPS, are being experimented with within the market and their adoption poses a technological barrier to running Windows on these devices.  Microsoft may find that the NetBook could be a critical loss of market for them as Linux vendors like Novell, Red Hat and Canonical will see it as an inroad into the enterprise desktop space.  It is no coincidence that Red Hat has just announced its official return to competing in this market.

At this particular time I feel that it is good to begin investigating NetBooks and seeing how they may or may not fit into your business IT strategy.  Most small businesses will find, like their large enterprise cousins, that the NetBook is inexpensive to obtain but expensive to support in a corporate environment.  This will be changing rapidly as the NetBook format becomes more common and businesses begin to clamour more and more to get these provided, in business-ready configurations, from the top vendors.