All posts by Scott Alan Miller

Started in software development with Eastman Kodak in 1989 as an intern in database development (making database platforms themselves.) Began transitioning to IT in 1994 with my first mixed role in system administration.

Choosing a RAID Level by Drive Count

In addition to all other factors, the number of drives available to you plays a significant role in choosing what RAID level is appropriate for you.  Ideally RAID is chosen ahead of time in conjunction with chassis and drives in a holistic approach so that the entire system is engineered for the desired purpose, but even in these cases, knowing how drive counts can affect useful RAID choices can be very helpful.

To simplify the list, RAID 0 will be left off of it.  RAID 0 is a viable choice for certain niche business scenarios at any drive count, so there is no need to display it on the list.  Also, the list assumes that a hot spare, if it exists, is not included in the count as that is “outside” of the RAID array and so would not be a part of the array drive count.

2 Drives: RAID 1

3 Drives: RAID 1 *

4 Drives: RAID 10

5 Drives: RAID 6

6 Drives: RAID 6 or RAID 10

7 Drives: RAID 6 or RAID 7

8 Drives: RAID 6 or RAID 7 or RAID 10 **

9 Drives: RAID 6 or RAID 7

10 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61

11 Drives: RAID 6 or RAID 7 

12 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61

13 Drives: RAID 6 or RAID 7

14 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61 or RAID 70/71

15 Drives: RAID 6 or RAID 7 or RAID 60

16 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61 or RAID 70/71

17 Drives: RAID 6 or RAID 7

18 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61 or RAID 70/71

19 Drives: RAID 6 or RAID 7

20 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61 or RAID 70/71

21 Drives: RAID 6 or RAID 7 or RAID 60 or RAID 70

22 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61 or RAID 70/71

23 Drives: RAID 6 or RAID 7

24 Drives: RAID 6 or RAID 7 or RAID 10 or RAID 60/61 or RAID 70/71

25 Drives: RAID 6 or RAID 7 or RAID 60

………

* RAID 1 is technically viable at any drive count of two or more.  I have included it only up to three drives because using it beyond that point is generally considered absurd and is essentially unheard of in the real world.  Technically it would continue to provide equal write performance while continuing to increase in read performance and reliability as more drives are added to the mirror, but for reasons of practicality I have included it only at the two counts where it would actually be useful.

** At six drives and higher, both RAID 6 and RAID 10 are viable options for even drive counts, while RAID 6 alone is viable for odd drive counts.

For this list I have only considered the standard RAID levels of 0, 1, 4, 5, 6 and 10.  I left 0 off of the list because it is always viable for certain use cases.  RAID 5 never appears because there is no situation today in which it should be used on spindle hard drives, and since RAID 5 is an enhancement of RAID 4, RAID 4 does not appear on the list either.  Non-standard double parity RAID solutions such as NetApp’s RAID-DP and Oracle’s RAIDZ2 can be treated as derivations of RAID 6 and apply accordingly.  Oracle’s triple parity RAIDZ3 (sometimes called RAID 7) would apply at seven drives and higher, but it is a non-standard level and extremely rare, so its entries above should be read as noted options rather than common recommendations.

More commonly, RAID 6 makes sense at six drives or more and RAID 7 at eight drives or more.
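For those who like to see the pattern as logic, below is a minimal sketch in Python of the primary recommendations from the list above.  It is purely illustrative: the function name is my own, and it deliberately leaves out the nested levels (60/61, 70/71), hot spares and the RAID 1 footnote to keep things simple.

    def viable_raid_levels(drives: int) -> list[str]:
        """Primary RAID level choices for a drive count, per the list above."""
        if drives < 2:
            return []                    # no redundant array is possible
        if drives in (2, 3):
            return ["RAID 1"]            # a triple mirror at three drives
        if drives == 4:
            return ["RAID 10"]
        levels = ["RAID 6"]              # double parity from five drives up
        if drives >= 7:
            levels.append("RAID 7")      # triple parity (RAIDZ3), non-standard and rare
        if drives >= 6 and drives % 2 == 0:
            levels.append("RAID 10")     # mirrored pairs require an even drive count
        return levels

    print(viable_raid_levels(8))         # ['RAID 6', 'RAID 7', 'RAID 10']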

Like RAID 4 and 5, the RAID levels based on them (RAID 40, 50, 41, 51, 55, etc.) are no longer appropriate due to the failure and fragility modes of spindle-based hard drives.  Complex RAID levels based on RAID 6 and 7 (60, 61, 70, 71, etc.) have a place but are exceedingly rare, as they generally offer very little cost savings compared to RAID 10 while suffering from performance issues and increased risk.  RAID 61 and 71 are almost exclusively effective when the highest order RAID, the mirror component, is over a network rather than local to the system.

Hardware and Software RAID

RAID (Redundant Array of Inexpensive Disks) systems are implemented in one of two basic ways: in software or in dedicated hardware.  Both methods are very viable and have their own merits.

In the small business space, where Intel and AMD architecture systems and Windows operating systems rule, hardware RAID is so common that a great deal of confusion has arisen around software RAID, due in no small part, as we will see, to the wealth of scam software RAID products touted as dedicated hardware and known colloquially as “Fake RAID.”

When RAID was first developed it was used, in software, on high end enterprise servers running things like proprietary UNIX, where the systems were extremely stable and the hardware was very powerful and robust, making software RAID work very well.  Early RAID was primarily focused on mirrored RAID or very simplistic parity RAID (like RAID 2) which had little overhead.

As the need for RAID began to spill into the smaller server space, and as parity RAID began to grow in popularity and require greater processing power, it became an issue that the underpowered processors in the x86 space were significantly impacted by the processing load of RAID, especially RAID 5.  This, combined with the fact that almost none of the operating systems heavily used on these platforms had software RAID implementations, led to the natural development of hardware RAID: an offload processor board (similar to a GPU for graphics) that had its own complete computer on board with CPU, memory and firmware all of its own.

Hardware RAID worked very well at solving the RAID overhead problem in the x86 server space.  As CPUs gained more power and memory became less scarce, popular x86 operating systems like Windows Server began to offer software RAID options.  Windows software RAID, specifically, was known as a poor RAID implementation and was available only on server operating system versions, causing a lack of appreciation for software RAID in the community of system administrators working primarily with Windows.

Because of these historical implementations in the enterprise server space and the commodity x86 space, a natural separation arose between the two markets, supported initially by technology and later purely by ideology.  If you talk to a system administrator in the commodity space you will almost universally hear that hardware RAID is the only option.  Conversely, if you talk to a system administrator in the mainframe, RISC (SPARC, Power, ARM) or EPIC (Itanium) server space (sometimes called the UNIX server space) you will often be met with surprise, as hardware RAID isn’t available for those classes of systems; software RAID is simply a foregone conclusion.  Neither camp seems to have real knowledge of the situation in the other, and crossover in skill sets between the two was relatively rare until recently, when enterprise UNIX platforms like Linux, Solaris and FreeBSD started to become very popular and well understood on commodity hardware platforms.

To make matters more confusing for the commodity server space, a large number of vendors began selling non-RAID controller cards along with a “driver” that was actually software RAID, pretending that the resulting product was hardware RAID, in order to fill the vacuum left by the dominant operating system vendor’s lack of software RAID for the non-server operating system market while marketing to a less technically savvy audience.  This created a large amount of confusion at best and an incredible disdain for software RAID at worst; almost universally, any product whose core function is to protect data and whose market is built upon deception and confusion will result in disaster.  Fake RAID systems routinely have issues with performance and reliability.  While, in theory, a third party software RAID package is a reasonable option, the reality of the software RAID market is that essentially all quality software RAID implementations are native components of either the operating system itself (Linux, Mac OSX, Solaris, Windows) or of the filesystem (ZFS, VxFS, BtrFS) and are provided and maintained by primary vendors, leaving little room or purpose for third party products outside of the Windows desktop space, where a few small, legitimate software RAID players do exist but are often overshadowed by the Fake RAID players.

Today there is almost no need for hardware RAID as commodity platforms are incredibly powerful and there is almost always a dramatic excess of both computational and memory resources.  Hardware RAID instead competes mostly on features rather than on reducing resource load.  Selection of hardware RAID versus software RAID in the commodity server space is almost completely a matter of preference and market momentum rather than of specific performance or features; the two approaches are essentially equal, and individual implementations matter far more when considering product options than the hardware versus software approach does on its own.

Today hardware RAID offerings tend to be more “generic” with rather vanilla implementations of standard RAID levels.  Hardware RAID tends to earn its value through resource utilization reduction (CPU and memory offload), ability to “blind swap” failed drives, simplified storage management, block level storage agnostically abstracted from the operating system, fast cache close to the drives and battery or flash backed cache.  Software RAID tends to earn its value through lower power consumption, lower cost of acquisition, integrated management with the operating system, unique or advanced RAID features (such as ZFS’ RAIDZ that doesn’t suffer from the standard RAID 5 write hole) and generally better overall performance.  It is truly not a discussion of better or worse but of better or worse for a very specific situation with the most important factor often being familiarity and comfort and/or default vendor offering.

One of the most overlooked but important differentiators between hardware and software RAID is the change in the job role associated with RAID array management.  Hardware RAID moves the handling of the array to the server administrator (the support role that works on the physical server and is stationed in the datacenter) whereas software RAID moves the handling of the array to the system administrator (the support role working on the operating system and above and rarely sitting in the datacenter.)  In the SMB market this factor might be completely overlooked, but in a Fortune 500 the difference in job role can be very significant.  In many cases with hardware RAID, disk replacements and system setup can be done without the need for system administrator intervention.  Datacenter server administrators can discover failed drives either through alerts or by looking for “amber lights” during walkthroughs and do replacements on the fly without needing to contact anyone or know what the server is even running.  Software RAID would almost always require the system administrator to be involved: offlining the failed disk, coordinating the replacement with the datacenter and onlining the new one once the replacement is complete.
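To make the job role difference concrete, here is a hypothetical sketch of what the software RAID side of a disk replacement can look like, using Linux md (one common software RAID implementation) driven from Python.  The array and device names are assumptions for illustration only; the point is simply that the system administrator has to take part in each step.

    import subprocess

    ARRAY = "/dev/md0"          # assumed array device
    FAILED = "/dev/sdb1"        # assumed failed member
    REPLACEMENT = "/dev/sdb1"   # same slot after the physical swap

    def run(*cmd):
        """Run a command, echoing it first so each step is visible."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. The system administrator offlines the failed member.
    run("mdadm", ARRAY, "--fail", FAILED)
    run("mdadm", ARRAY, "--remove", FAILED)

    # 2. The physical swap is coordinated with the datacenter out of band.
    input("Replace the physical drive, then press Enter to continue... ")

    # 3. The system administrator onlines the new member and the rebuild begins.
    run("mdadm", ARRAY, "--add", REPLACEMENT)
    run("mdadm", "--detail", ARRAY)     # verify that the rebuild has started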

Because of the way that CPU offloading and performance work, and because of some advantages in the way that non-standard RAID implementations often handle parity RAID reconstruction, there is a tendency for mirrored RAID levels to favor hardware RAID and for parity RAID levels to favor software RAID.  Parity RAID is drastically more CPU intensive, so having access to the high power central CPU resources can be a major factor in speeding up RAID calculations.  But with mirrored RAID, where RAID reconstruction is far safer than with parity RAID and where automated rebuilds are more important, hardware RAID brings the benefit of allowing blind drive replacement very easily.
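As a toy illustration of why parity RAID leans so heavily on the CPU, consider how a RAID 5 style parity block is computed: every full stripe write has to XOR all of the data blocks together, while a mirror simply writes the same block twice with no computation at all.  This Python fragment is only a sketch of the concept, not any vendor’s implementation.

    def parity_block(data_blocks):
        """Byte-wise XOR of all data blocks in a stripe (RAID 5 style parity)."""
        assert len({len(b) for b in data_blocks}) == 1, "blocks must be equal length"
        parity = bytearray(len(data_blocks[0]))
        for block in data_blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte        # the CPU touches every byte written
        return bytes(parity)

    # A three-data-disk stripe: any one lost block can be rebuilt by XORing
    # the surviving blocks with the parity block.
    stripe = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xff\x00\xff\x00"]
    parity = parity_block(stripe)
    rebuilt = parity_block([stripe[0], stripe[2], parity])
    assert rebuilt == stripe[1]          # the "lost" second block is recovered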

One extremely paradoxical aspect of the hardware and software RAID discussion is that the market that often dismisses software RAID out of hand as being inferior to hardware RAID almost completely overlaps (you can picture the Venn diagram in your head here) with the market that feels that file servers are inferior to commodity NAS appliances, yet those NAS appliances in the SMB range are almost universally based on the same software RAID implementations being casually dismissed.  So software RAID is often considered both inferior and superior simultaneously.  Some NAS devices in the SMB range, and NAS appliance software, that are software RAID based include: Netgear ReadyNAS, Netgear ReadyData, Buffalo Terastation, QNAP, Synology, OpenFiler, FreeNAS, Nexenta and NAS4Free.

There is truly no “always use one way or the other” with hardware and software RAID.  Even giant, six figure enterprise NAS and SAN appliances are undecided as to which to use with part of the industry going each direction.  The real answer is it depends on your specific situation – your job role separation, your technical needs, your experience, your budget, etc.  Both options are completely viable in any organization.

You Aren’t Gonna Need It

I’m lucky that I work in IT but come from a software engineering background; this gives me a bit of a different perspective on the world of IT, both in understanding much of what is happening behind the scenes with release cycles and features and in applying knowledge gained from that industry to this one.

In the software engineering community in recent years the concept of “You Aren’t Gonna Need It” or YAGNI has become a popular one.  YAGNI arose from the Extreme Programming (XP) group of Agile developers and is stated as this rule: “Always implement things when you actually need them, never when you just foresee that you need them.”

I like to rephrase YAGNI in development to “Don’t invest in something until you know that you need it.”  But the concept is the same – if you spend time and money building pieces that you aren’t sure that you will ever need you take on risks such as not getting value as early as possible (by focusing on the things that don’t matter yet while neglecting the things that do) and investing in technology that will never be used (because requirements change, project gets canceled, etc.)

This concept ports over to IT extremely well.  Design and purchasing are both heavily influenced, or should be, by YAGNI.  Storage is a great example.  Don’t invest in storage today that you think you will use tomorrow.  We can list a lot of reasons why early storage investment is bad: the business has little to no ability to accurately predict its own growth, IT is poor at predicting storage growth based on business growth, money has a time value and buying storage today is more costly than buying the same storage tomorrow.  Anytime that we buy based on predictions we take on risk.  Predictions rarely come true.

If we overbuy storage today we are paying a premium for that storage because storage costs drop dramatically over time.  If we buy with 100% headroom and it takes three years or more before we use that headroom, we are paying too much for the storage and getting older technology, when buying later would give us better insight into what we actually need at that time (not just capacity but speed, reliability, features, etc.), lower cost and more options.
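A quick back-of-the-envelope example makes the overbuying premium easy to see.  The prices and the annual rate of decline below are assumed figures purely for illustration, not data from any vendor or from this article.

    price_per_tb_today = 100.0     # assumed cost per TB today
    annual_price_decline = 0.25    # assume storage gets roughly 25% cheaper each year
    headroom_tb = 10               # the 100% headroom bought up front
    years_until_needed = 3

    cost_now = headroom_tb * price_per_tb_today
    cost_later = headroom_tb * price_per_tb_today * (1 - annual_price_decline) ** years_until_needed

    print(f"Buy the headroom today: ${cost_now:,.2f}")
    print(f"Buy it when needed:     ${cost_later:,.2f}")
    print(f"Premium for guessing early: ${cost_now - cost_later:,.2f}")
    # And this ignores the time value of the money itself and the risk that the
    # predicted growth never materializes at all.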

Overbuying is one risk, underbuying is another.  Underbuying is, obviously, less of a risk but still a concern.  If you buy today for needs three years out and at two years suddenly have a burst of need you may have overinvested in a platform or technology that cannot meet your needs.

Storage is one example, but this can apply anywhere: software licenses, CPU capacity, memory, high availability technologies, even desktops.  Few shops would overbuy desktops by a hundred percent just to be ready for a predicted head count increase in three years, but strangely they won’t hesitate to do it elsewhere.

By buying what is required for the immediate need and holding purchasing decisions until later there is a significant opportunity for cost savings and technology improvements.  In some cases it may be that the future need never arises whether because of bad predictions, changes in market or strategy or a change in technology direction either internally or externally.

Beyond purchasing, YAGNI can apply to network design.  It is not uncommon for large, complex designs to be proposed and implemented based on anticipated growth that is often years away and, to be honest, seldom very likely in a realistic world.  Building, for example, a complex high availability environment with expensive licensing, complex networking and lots of storage for expected company growth in the future, when just two servers and a nice backup plan is all that is cost justified today, is dangerous.  Not only must the necessary growth happen to justify the IT spend, but it must happen so quickly that the time-value of the money is justified and the cost of the technology does not drop so much as to have made implementing two systems more cost effective.  It is surprising how often putting in a smaller, stop-gap system and then implementing a larger scale system when it is actually needed turns out to be far cheaper, simply because the cost of building the larger, more complex system has dropped so much since the first system was put in place, and that is before taking into account the risk of bad predictions.

Spending early has an additional risk: it ties up corporate finances in unused architecture.  That money could be invested in other parts of the business in order to grow the business.  In extreme cases, overinvestment in infrastructure could be a contributor to a company failing completely, a self-fulfilling situation where not using YAGNI in and of itself created the situation where YAGNI most applied.  The architected solution was never needed because the company failed.

YAGNI is a risk mitigation process: working with the needs that you know rather than the needs that you anticipate.

Maybe IT shops overbuy today because they are given specific budgets.  It is understandable that IT ends up in a technology grab, attempting to implement whatever it can when the whims of the business smile upon it.  This, however, is an extremely poor business practice.  Businesses need to realize that large sums of money are being wasted on IT because IT is forced to implement systems with an assumption of clairvoyance, based on arbitrary budgets from the business with no basis in the real world.  IT is stuck buying what it can “sell” to the business based on often very unclear factors, and the business often funds IT quite capriciously.  This creates a very unhealthy business and IT relationship where IT is wasting money because it has little choice and the business sees IT as a waste because it is not allowed to operate efficiently.

To fix this situation the business and IT need to work together.  IT needs to act more like a business-savvy unit and the business needs to lean on IT for guidance and not use prediction-based budgeting or become entangled in picking technological approaches without the technical understanding of the ramifications of those choices.  IT needs to be able to trust the business to make logical business financial decisions and the business needs to be able to trust IT to make logical technological decisions for the business.  The business drives IT, IT enables the business.  It is a symbiotic relationship.  If the business insists on making IT predict and operate on fixed budgets, IT will continue to be forced to overspend and overarchitect whenever possible in the hopes of being prepared for tomorrow when the budget may not be approved.  If IT was trusted to request what is needed and the business was trusted to fund technology needs at the appropriate time both could operate more effectively for the common good.

Takeaway: Don’t invest early; you don’t know what technology or the business will do tomorrow.

Where Windows 8 Fails

There is a lot of talk about why people love or hate Windows 8, but much of it comes from the perspective of the IT department, and the big picture often seems to be dropped completely.  Overall, Windows 8 is a great operating system delivering a lot of cool new features and building on the Windows Vista legacy (and, in turn, on the Windows 7 legacy.)  We have a stable, mature kernel and a lot of new efficiency in the system itself.

To really look at Windows 8 we need to look at the historic value of Windows as a desktop operating system.  For many years, even as far back as the late 1990s, Windows has competed against other desktop options on two fundamental premises.  The first premise is that moving to the next version of Windows is less disruptive and requires less retraining of staff than moving to a competitive platform, allowing the majority of end users to remain comfortable and efficient even when going through major desktop upgrades.  The second is that the majority of business applications are written for Windows and moving to another platform severely limits application options.

Windows provides ancillary benefits of course such as a tight security model, well known support processes, massive user community, IT departments well versed in supporting it, excellent training and certification programs and good change processes.  But to a business selecting its next computing platform, continuity of usability and application support are the features traditionally driving the almost blind adoption of subsequent Windows versions year after year.

What makes Windows 8 unique in the long history of Windows desktop operating environments is that, for the very first time since the Windows 3.1 days, there is a major change in the look, feel and usability of the desktop environment, leaving many users stranded and confused in extreme cases and, in most cases, at least inefficient and frustrated.  Windows has never before departed from the basic need to make sure that users felt as little pain as possible moving from version to version, and the need for retraining was basically out of the question beyond quick highlights showing where something had moved or showing off new features.  Windows 95 was the most extreme change of the past ~20+ years of Windows desktops and compared to Windows 8 it was relatively trivial.

With Windows 8 the move to the latest Windows edition is so dramatic that it begs comparison to the competition.  It is not that Windows 8 is bad, it is quite good, but it doesn’t deliver the traditional Windows desktop upgrade value proposition either in user experience or in being a unique application platform, as the majority of modern business applications are desktop platform agnostic, running in the web browser, leaving Windows in a very precarious position.  There are Linux desktops, for example, that offer user experiences far closer to Windows 7 than Windows 8 offers.  This, combined with the widespread use of enterprise web-based applications, means that, in theory, Windows 8 is no longer the simple upgrade path for desktops but is, in fact, the more difficult option, requiring more training, more time and more pain for users and, from what we have seen, more long term loss of productivity as Windows 8 simply lacks the end-user efficiencies of most non-Windows platforms (Linux, Mac OSX and BSD.)

I’ve heard many people attempt to defend Windows 8, but the defense seems, to me, to be universally centered around mitigating Windows 8’s flaws rather than finding places where it excels.  That users should not avoid it because they “haven’t taken time to learn how to deal with it”, that users should learn “keyboard shortcuts to make up for GUI deficiencies”, that they should “work hard to customize the Metro interface to make it less painful” or that “users should remove and/or replace troublesome Windows applications with more functional third party components” all, to me, sound like failures of the platform rather than reasons why Windows 8 is a good choice.  Yes, Windows 8 can certainly be made to be functional.  But Mac OSX or Linux Mint, as examples, solve all of these issues out of the box.  Users can hit the ground running and remain productive in the future.

From an IT support perspective there is a lot of pressure to maintain the status quo.  While Windows 8 is a departure, it does not represent any significant change from supporting past Windows versions.  The tools and techniques are the same.  The set of experience and skills acquired over many years can be leveraged against Windows 8, and everyone comes to Windows 8 fresh, so if there are new skills to be learned, existing Windows desktop administrators and supporters are in the best position to learn them first.  Windows 8 continues to be the best job retention gamble and best leverages the in-place support teams.  Moving to any new platform means that completely new skills and approaches need to be learned, new vendors need to be engaged and the risk of large portions of the department being replaced with outsiders already possessing those skills looms large.

For end users, though, pressures might be the opposite.  IT needs to keep perspective that IT is not the end user of technology but the supplier of it.  The business and the business users are the end users of technology and it is the role of the IT department to support those needs.  If Windows 8 fails to deliver business value in comparison to competing options then it is IT’s job to deliver alternatives even if that means retraining for IT in order to make the business run more smoothly and more cost effectively.

When we step back and do a business by business analysis, Windows 8 is going to continue to dominate, there is no question.  But a shift is clear: Windows desktops are no longer the clear and obvious choice for end user ease of use and continued efficiency.  Microsoft is playing a dangerous game of alienating those whom it has courted the longest.  Users looking for an easy transition will have to think twice about Windows 8 and the future of the Windows desktop.  Windows is already suffering from having lost the mobile and tablet space to the iOS and Android camps and has seen heavy market attrition in netbooks to Linux and in the traditional desktop and laptop space to Mac OSX.  Windows’ areas of market dominance are growing fewer and those remaining are shrinking.  Ten years ago running a company without Windows on the desktop was unthinkable.  Today it is a very real consideration, and both Mac OSX and many Linux distros have an opportunity to go through one or even several iterations before Windows 8’s replacement OS arrives from Microsoft, giving them time to polish, advance and attract users who will be considering the Windows 8 move over the next few years.

Windows 8 fails to continue providing the Windows desktop’s traditional value.  Windows 8 fails to deliver new benefits that justify it in its own right.  Windows 8 fails to convince users and businesses of Microsoft’s long term vision.