Category Archives: Storage

Logical Volume Managers

A commonly used but often overlooked or misunderstood storage tool is the Logical Volume Manager.  Logical Volume Managers, or LVMs, are a storage abstraction, encapsulation and virtualization technology used to provide a level of flexibility often otherwise unavailable.

Most commonly an LVM is used to replace traditional partitioning systems, and sometimes additional functionality is rolled into an LVM such as RAID functions.  Nearly all operating systems offer an integrated LVM product today and most have for a very long time.  LVMs have become a standard feature of both server and client side storage management.

LVMs do not necessarily offer uniform features but common features often included in an LVM are logical volumes (soft partitioning), thin provisioning, flexible physical location allocation, encryption, simple RAID functionality (commonly only mirror based RAID) and snapshots.  Essentially all LVMs offer logical volumes, snapshots and flexible allocation; these being considered fundamental LVM functions.

Popular LVMs include the Logical Disk Manager on Windows Server 2000 through Server 2008 R2, Storage Spaces on Windows Server 2012 and later, LVM on Linux, Btrfs on Linux, Core Storage on Mac OS X, Solaris Volume Manager on Solaris, ZFS on Solaris and FreeBSD, the Vinum Volume Manager on FreeBSD, Veritas Volume Manager for most UNIX systems, LVM on AIX and many more.  LVMs have been increasingly popular and standard since the late 1980s.  ZFS and Btrfs are interesting in that they are filesystems that implement LVM functionality inside the filesystem as an integrated system.

An LVM consumes block devices (drive appearances) and creates logical volumes (often referred to as LVs) which are themselves drive appearances as well.  Because of this, an LVM can sit at any of many different places in the storage stack.  Most commonly we would expect an LVM to consume a RAID array, split one RAID array into one or more logical volumes with each logical volume having a filesystem applied to it.  But it is completely possible for an LVM to sit directly on physical storage without RAID, and it is very possible for RAID to be implemented via software on top of the logical volumes rather than beneath them.  LVMs are also very useful for combining many different storage systems into one such as combining many physical devices and/or RAID arrays into a single, abstracted entity that can then be split up into logical volumes (with single volumes potentially utilizing many different underlying storage devices.)  One standard use of an LVM is to combine many SAN LUNs (potentially from a single SAN system or potentially from several different ones) into a single volume group.
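To make the layering concrete, here is a rough sketch in Python; the class names are purely illustrative assumptions and not any real LVM's API.  It shows a volume group pooling several block devices and carving logical volumes out of them, with each logical volume itself presenting the same block device appearance:

    class BlockDevice:
        """Anything with a drive appearance: for this sketch, just a size in bytes."""
        def __init__(self, size):
            self.size = size

    class VolumeGroup:
        """Pools any block devices (disks, RAID arrays, SAN LUNs) into one space."""
        def __init__(self, devices):
            self.devices = devices
            self.free = sum(d.size for d in devices)

        def create_logical_volume(self, size):
            if size > self.free:
                raise ValueError("not enough free space in the volume group")
            self.free -= size
            return BlockDevice(size)   # the LV is itself a drive appearance

    # Three underlying devices pooled into one volume group, then carved into
    # logical volumes that a filesystem could be placed on.
    vg = VolumeGroup([BlockDevice(2e12), BlockDevice(2e12), BlockDevice(1e12)])
    data_lv = vg.create_logical_volume(3e12)    # may span several underlying devices
    logs_lv = vg.create_logical_volume(0.5e12)
    print(vg.free)                              # space left for growth or snapshots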

While LVMs provide power and flexibility for working with multiple storage devices, and many types of them, while presenting a standard interface to higher layers in the storage stack, probably the most common uses are to provide flexibility where rigid partitions used to be and to provide snapshots.  Traditional partitions are rigid and cannot easily be resized.  Logical volumes can almost always be grown or shrunk as needed, making them tremendously more flexible.

Snapshots have become a major focus of LVM usage in the last decade, although mostly because awareness of snapshots has grown rather than because of any recent shift in availability.  Commodity virtualization systems have brought snapshots from an obscure piece of storage industry knowledge into the IT mainstream.  Much of how virtualization technologies tackle storage virtualization can be thought of as related to LVMs, but generally it is similar functionality offered in a different manner, or simply LVM functionality passed up from a lower layer.

Today you can expect to find LVMs in use nearly everywhere, even implemented transparently on storage arrays (such as SAN equipment) to provide more flexible provisioning.  They are not just universally available but routinely deployed, and they have done much to improve the reliability and capability of modern storage.

Drive Appearance

One of the more common, yet trickier, fundamental concepts in computing today is that of drive appearance or, in other words, something that appears to be a hard drive.  This may sound simple, and mostly it is, but it can trip people up.

First, what is a hard drive?  This should be simple.  We normally mean a traditional spinning disk Winchester device such as have been made for decades in the standard three and a half inch and two and a half inch form factors.  They contain platters that spin, drive heads that sweep back and forth across them, and they connect using something like ATA or SCSI connectors.  Most of us can pick up a hard drive with our hands and be certain that we are holding a hard drive.  This is what we call the physical manifestation of the drive.

The computer, though, does not see the casing of the drive or the connectors.  The computer has to look through its electronics and “see” the drive digitally.  This is very, very different from how humans view the physical drive.  To the computer, a hard drive appears as an ATA, SCSI or Fibre Channel device at the most basic physical level, and is generally abstracted at a higher level as a block device.  This is what we would call a logical appearance, rather than a physical one.  For our purposes here, we will think of all of these drive interfaces as being block devices.  They do differ, but only slightly and not consequentially to this discussion.  What is important is that there is a standard interface, or a set of closely related interfaces, that the computer sees as being a hard drive.

Another way to think of the logical drive appearance is that anything that looks like a hard drive to the computer is something that the computer can format with a filesystem.  Filesystems are not drives themselves, but require a drive on which to be placed.

The concept of the interface is the most important one here.  To the computer, it is “anything that implements a hard drive interface” that is truly seen as being a hard drive.  This is both a simple and a powerful concept.

It is because of the use of a standard interface that we were able to take flash memory, attach it to a disk controller that would present it over a standard protocol (both SATA and SAS implementations of ATA and SCSI are common for this today) and create SSDs that look and act exactly like traditional Winchester drives to the computer yet have nothing physically in common with them.  They may or may not come in a familiar physical form factor, but they definitely lack platters and a drive head.  Looking at the workings of a traditional hard drive and a modern SSD we would not guess that they share a purpose.

This concept applies to many devices.  Obviously SD cards and USB memory sticks work in the same way.  But importantly, this is also how partitions on top of hard drives work.  The partitioning system consumes the drive impression interface on one side, allowing it to be applied to a device, and on the other side it presents a drive impression interface to whatever wants to use it; normally a filesystem.  This idea of something using the drive impression interface on both sides is very important.  By doing this, we get a uniform and universal building block system for making complex storage systems!

We see this concept of “drive in; drive out” in many cases.  Probably the best known is RAID.  A RAID system takes an array of hard drives, applies one of a number of algorithms to make the drives act as a team, and then presents them as a single drive impression to the next system up the “stack.”  This encapsulation is what gives RAID its power: systems further up the stack looking at a RAID array see, quite literally, a hard drive.  They do not see the array of drives; they do not know what is below the RAID.  They just see the resulting drive(s) that the RAID system presents.

Because a RAID system takes an arbitrary number of drives and presents them as a standard drive, we have the theoretical ability to layer RAID as many times as we want.  Of course this would be extremely impractical to do to any great degree, but it is through this concept that nested RAID arrays are possible.  For example, we might take many physical hard drives, split them into pairs and put each pair into a RAID 1 array.  Each of those resulting arrays gets presented as a single drive.  Each of those resulting logical drives can then be combined into another RAID array, such as RAID 0.  Doing this is how RAID 10 is built.  Going further, we could take a number of RAID 10 arrays, present them all to another RAID system that puts them in RAID 0 again, and get RAID 100, and so forth indefinitely.
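As a rough illustration only (assumed class names, not real controller code), the same composability can be sketched in Python by treating each RAID level as something that consumes drive appearances and presents one back:

    class BlockDevice:
        def __init__(self, size):
            self.size = size

    class Raid1(BlockDevice):            # mirror: usable capacity of the smallest member
        def __init__(self, members):
            super().__init__(min(m.size for m in members))

    class Raid0(BlockDevice):            # stripe: capacity of the members combined
        def __init__(self, members):
            super().__init__(min(m.size for m in members) * len(members))

    disks = [BlockDevice(4e12) for _ in range(8)]              # eight 4 TB drives
    mirrors = [Raid1(disks[i:i + 2]) for i in range(0, 8, 2)]  # four RAID 1 pairs
    raid10 = Raid0(mirrors)                                    # stripe the mirrors
    print(raid10.size)        # 16 TB usable from 32 TB raw: this is RAID 10
    # Striping several RAID 10 arrays together the same way would give RAID 100.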

Similarly the logical volume layer uses the same kind of encapsulation as RAID to work its magic.  Logical Volume Managers, such as LVM on Linux and Dynamic Disks on Windows, sit on top of logical disks and provide a layer where you can do powerful management such as flexibly expanding devices or enabling snapshots, and then present logical disks (aka drive impression interface) to the next layer of the stack.

Because of the uniform nature of drive impressions, the stack can be built in any order.  A logical volume manager can sit on top of RAID, or RAID can sit on top of a logical volume manager, and of course you can skip one or the other or both!

The concept of drive impressions or logical hard drives is powerful in its simplicity and allows us great potential for customizing storage systems however we need to make them.

Of course there are other uses of the logical drive concept as well.  One of the most popular and least understood is that of a SAN.  A SAN is nothing more than a device that takes one or more physical disks and presents them as logical drives (this presentation of a logical drive from a SAN is called a LUN) over the network.  This is, quite literally, all that a SAN is.  Most SANs will incorporate a RAID layer and likely a logical volume manager layer before presenting the final LUNs, or disk impressions, to the network, but that is not required to be a SAN.

This means, of course, that multiple SAN LUNs can be combined in a single RAID or controlled via a logical volume layer.  And of course it means that a SAN LUN, a physical hard drive, a RAID array, a logical volume, a partition…. can all be formatted with a filesystem as they are all different means of achieving the same result.  They all behave identically.  They all share the drive appearance interface.

To give a real world example of how you would often see all of these parts come together, we will examine one of the most common “storage stacks” that you will find in the enterprise space.  Of course there are many ways to build a storage stack, so do not be surprised if yours is different.  At the bottom of the stack are nearly always physical hard drives, which could include solid state drives.  These are located physically within a SAN.  Before leaving the SAN, the stack will likely include the actual storage layer of the drives, then a RAID layer combining those drives into a single entity, then a logical volume layer to allow for features like growth and snapshots.  Then there is the physical demarcation between the SAN and the server, which is presented as the LUN.  The LUN then has a logical volume manager applied to it on the server / operating system side of the demarcation point.  On top of that is a filesystem, which is our final step, as the filesystem does not continue to present a drive appearance interface but a file interface instead.
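For reference, that stack can be written out as an ordered list of layers, each presenting a drive appearance to the one above it.  This is a plain Python listing with wording that is mine rather than any vendor's:

    stack = [
        "physical drives inside the SAN (spinning disks and/or SSDs)",
        "RAID layer on the SAN (combines the drives into one array)",
        "logical volume layer on the SAN (growth, snapshots, thin provisioning)",
        "LUN presented over the network (the SAN / server demarcation point)",
        "logical volume manager on the server (consumes the LUN)",
        "filesystem on the logical volume (presents files, not a drive)",
    ]
    for depth, layer in enumerate(stack):
        print("  " * depth + layer)   # everything below the filesystem has a drive appearance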

Understanding drive appearances, or logical drives, and how they allow components to interface with each other to build complex storage subsystems is a critical building block of IT understanding and is widely applicable to a large number of IT activities.

The Emperor’s New Storage

We all know the story of the Emperor’s New Clothes.  In Hans Christian Andersen’s telling of the classic tale we have some unscrupulous cloth vendors who convince the emperor that they have clothes made from a fabric with the magical property of only being visible to people who are fit for their positions.  The emperor, not being able to see the clothes, decides to buy them because he fears people finding out that he cannot see them.  Everyone in the kingdom pretends to see them as well – all sharing the same fear.  It is a brilliant sales tactic because it puts everyone on the same team: the cloth sellers, the emperor, the people in the street all share a common goal that requires them to all maintain the same lie.  Only when a little boy who cares naught about his status in society but only about the truth points out that the emperor is naked is everyone free to admit that they don’t see the clothes either.

And this brings us to the storage market today.  We have storage vendors desperate to sell solutions of dubious value and buyers who often lack the confidence in their own storage knowledge to dare to question the vendors in front of management, or who have simply turned to vendors to make their IT decisions for them.  This has created a scenario where vendor confidence and industry uncertainty have engendered a market momentum that causes the entire situation to snowball.  The effect is that using big, monolithic and expensive storage systems is so accepted today that often systems are purchased without any thought at all.  They are essentially a foregone conclusion!

It is time for someone to point at the storage buying process and declare that the emperor is, in fact, naked.

Don’t get me wrong.  I certainly do not mean to imply that modern storage solutions do not have value.  Most certainly they do.  Large SAN and NAS shared storage systems have driven much technological development and have excellent use cases.  They were not designed without value, but they do not apply to every scenario.

The idea of the inverted pyramid design, the overuse of SANs where they do not apply, came about because these are high profit margin approaches.  Manufacturers have a huge incentive to push these products and designs because they do much to generate profits.  SANs are among the most profit-bearing products on the market.  This, in turn, incentivizes resellers to push SANs as well, both to generate profits directly through their sales and to keep their vendors happy.  This creates a large amount of market pressure: everyone on the “sales” side of the buyer / seller equation has massive pressure to convince you, the buyer, that a SAN is absolutely necessary.  The pressure is so strong, the incentives so large, that even losing the majority of potential customers in the process is worth it, because the margin on the one customer that goes with the approach generally outweighs losing many others.

Resellers are not the only “in between” players with incentive to see large, complex storage architectures get deployed.  Even non-reseller consultants have an incentive to promote this approach because it is big, complex and requires, on average, far more consulting and support than do simpler system designs.  This is unlikely to be a trivial number.  Instead of a ten hour engagement, they may win a hundred hours, for example, and for consultants those hours are bread and butter.

Of course, the media has incentive to promote this, too.  The vendors provide the financial support for most media in the industry, and much of the content.  Media outlets want to promote the design because it promotes their sponsors, and they also want to talk about the things that people are interested in; simple designs do not generate a lot of readership.  The same problems exist as with sensationalist news: the most important or relevant news is often skipped so that news that will gather viewership is shown instead.

This combination of factors is very forceful.  Companies that look to consultants, resellers and VARs, and vendors for guidance will get a unanimous push for expensive, complex and high margin storage systems.  Everyone, even the consultants who are supposed to be representing the client, has a pretty big incentive to let these complex designs get approved because there is just so much money potentially sitting on the table.  You might get paid one hour of consulting time to recommend against overspending, but might be paid hundreds of hours for implementing and supporting the final system.  That’s likely tens of thousands of dollars of difference, a lot of incentive, even for the smallest deployments.

This unification of the sales channel and even the front line of “protection” has an extreme effect.  Our only real hope, the only significant one, for someone who is not incentivized to participate in this system is the internal IT staff themselves.  And yet we find very rarely that internal staff will stand up to the vendors on these recommendations or even produce them themselves.

There are many reasons why well intentioned internal IT staff (and even external ones) may fail to properly assess needs such as these.  There are a great many factors involved and I will highlight some of them.

  • Little information in the market.  Because no company makes money by selling you less, there is almost no market literature, discussions or material to assist in evaluating decisions.  Without direct access to another business that has made the same decision or to any consultants or vendors promoting an alternative approach, IT professionals are often left all alone.  This lack of supporting experience is enough to cause adequate doubt to squash dissenting voices.
  • Management often prefers flashy advertising and the word of sales people over the opinions of internal staff.  This is a hard fact, but one that is often true.  IT professionals often face the fact that management may make buying decisions without any technical input whatsoever.
  • Any bid process immediately short circuits good design.  A bid would have to include “storage” and SAN vendors can easily bid on supplying storage while there is no meaningful way for “nothing” to bid on it.  Because there is no vendor for good design, good design has no voice in a bidding or quote based approach.
  • Lack of knowledge.  Dealing with system architecture and storage concerns is often a one-off activity, handled only a few times over an entire career.  Making these decisions is not just uncommon; it is often the very first time that it has ever been done.  Even if the knowledge is there, the confidence to buck the trend easily is not.
  • Inexperience in assessing risk and cost profiles.  While these things may seem like bread and butter to IT management, often the person tasked with dealing with system design in these cases will have no training and no experience in determining comparative cost and risk in complex systems such as these.  It is common that risk goes unidentified.
  • Internal staff often see this big and costly purchase as a badge of honour or a means to bragging rights, excited to show off how much they were able to spend and how big their new systems are.  Everyone loves gadgets, and these are often the biggest, most expensive toys that we ever touch in our industry.
  • Internal staff often have no access to work with equipment of this type, especially SANs.  Getting a large storage solution in house may allow them to improve their resume and even leverage the experience into a raise or, more likely, a new job.
  • Turning to other IT professionals who have tackled similar situations often results in the same advice as from sales people.  This is for several reasons.  All of the reasons above, of course, would have applied to them, plus one very strong one – self preservation.  Any IT professional who has implemented a very costly system unnecessarily has a lot of incentive to state that they believe the purchase was a good one.  This may be irrational “reverse rationalization” – the human tendency to apply reason after the fact to a decision that lacked it when originally made; it may be fear that their job would be in jeopardy if it were found out what they had done; it may be that they never assessed the value of the system after implementation; or it may even be that their factors were not the same as yours and the design was applicable to their needs.

The bottom line is that basically everyone, no matter what role they play, from vendors to sales people to those who do implementation and support, to even your friends in similar job roles and strangers on Internet forums, has big incentives to promote costly and risky storage architectures in the small and medium business space.  There is, for all intents and purposes, no one with a clear benefit from providing a counterpoint to this marketing and sales momentum.  And, of course, as momentum has grown the situation has become more and more entrenched, with people even labeling the questioning of the status quo and the asking of critical questions as irrational or reckless.

As with any decision in IT, however, we have to ask “does this provide the appropriate value to meet the needs of the organization?”  Storage and system architectural design is one of the most critical and expensive decisions that we will make in a typical IT shop.  Of all of the things that we do, treating this decision as a knee-jerk, foregone conclusion without doing due diligence and not looking to address our company’s specific goals could be one of the most damaging that we make.

Bad decisions in this area are not readily apparent.  The same factors that lead to the initial bad decisions will also hide the fact that a bad decision was made much of the time.  If the issue is that the solution carries too much risk, there is no means to determine that better after implementation than before – thus is the nature of risk.  If the system never fails we don’t know if that is normal or if we got lucky.  If it fails we don’t know if this is common or if we were one in a million.  So observation of risk from within a single implementation, or even hundreds of implementations, gives us no statistically meaningful insight.  Likewise when evaluating wasteful expenditures we would have caught a financial waste before the purchase just as easily as after it.  So we are left without any ability for a business to do a post mortem on their decision, nor is there an incentive as no one involved in the process would want to risk exposing a bad decision making process.  Even companies that want to know if they have done well will almost never have a good way of determining this.

What makes this determination even harder is that the same architectures that are foolish and reckless for one company may be completely sensible for another.  The use of a SAN based storage system and a large number of attached hosts is a common and sensible approach to controlling costs of storage in extremely large environments.  Nearly every enterprise will utilize this design and it normally makes sense, but is used for very different reasons and goals than apply to nearly any small or medium business.  It is also, generally, implemented somewhat differently.  It is not that SANs or similar storage are bad.  What is bad is allowing market pressure, sales people and those with strong incentives to “sell” a costly solution to drive technical decision making instead of evaluating business needs, risk and cost analysis and implementing the right solution for the organization’s specific goals.

It is time that we, as an industry, recognize that the emperor is not wearing any clothes.  We need to be the innocent children who point, laugh and question why no one else has been saying anything when it is so obvious that he is naked.  The storage and architectural solutions so broadly accepted benefit far too many people, and the only ones who are truly hurt by them (business owners and investors) are not in a position to understand whether they do or do not meet their needs.  We need to break past the comfort of socially accepted plausible deniability and the sense that we cannot be held culpable for what we never evaluated.  We must take responsibility for protecting our organizations and provide solutions that address their needs rather than the needs of the sales people.

 

For more information see: When to Consider a SAN and The Inverted Pyramid of Doom

Ferraris and Tractor Trailers

Working in the SMB world, it is actually pretty rare that we need to talk about latency.  The SMB world is almost universally focused on system throughput and generally unaware of latency as a need.  But there are times where latency becomes important and when it does it is critical that we understand the interplay of throughput and latency and just what “speed” means to us.  Once we start moving into the enterprise space, latency is more often going to be viewed as a concern, but even there throughput nearly always reigns supreme, to the point that concepts of speed almost universally revolve around throughput and concepts of latency are often ignored or forgotten.

Understanding the role of latency in a system can be complicated, even though latency itself is relatively simple to understand.

A great comparison between latency and throughput that I like to use is the idea of a Ferrari and a tractor trailer.  Ferraris are “fast” in the traditional sense, they have a high “miles per hour.”  One might say that they are designed for speed.  But are they?

We generally consider tractor trailers to be slow.  They are big and lumbering beasts that have a low top end speed.  But they haul a lot of stuff at once.

In computer terms we normally think of speed as hauling capacity – we think in terms of “items” per second.  In those terms, a Ferrari going two hundred miles per hour is great, but it can haul maybe one box at a time.  A tractor trailer can only go one hundred miles per hour but can haul closer to one thousand boxes at a time.  When we talk about throughput or speed on a computer, this is more what we think about.  In network terms we think of gigabits per second and are rarely concerned with the speed of an individual packet, as a single packet is rarely important.  In computational terms we think about ideas like floating point operations per second, a similar concept.  No one really cares how long a single FLOP (floating point operation) takes, only how many we can get done in one or ten seconds.

So when looking at a Ferrari we could say that it has a useful speed of two hundred box-miles per hour.  That is for every hour of operations, a Ferrari can move one box up to two hundred miles.  A tractor trailer has a useful speed of one hundred thousand box-miles per hour.  In terms of moving packages around, the throughput of the tractor trailer is easily five hundred times “faster” than that of the Ferrari.
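Checking those numbers with a couple of lines of Python (the figures are the same illustrative ones used above):

    ferrari_throughput = 200 * 1        # 200 mph, one box: 200 box-miles per hour
    truck_throughput = 100 * 1000       # 100 mph, 1,000 boxes: 100,000 box-miles per hour
    print(truck_throughput / ferrari_throughput)   # 500.0: the truck is 500x "faster"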

So in terms of how we normally think of computers and networks a tractor trailer would be “fast” and a Ferrari would be “slow.”

But there is also latency to consider.  Assuming that our payload is tiny, say a letter or a small box, a Ferrari can move that one box over a thousand miles in just five hours!  A tractor trailer would take ten hours to make this same journey (but could have a LOT of letters all arriving at once.)  If what we need is to get a message or a small parcel from one place to another very quickly the Ferrari is the better choice because it has half the latency (delay) from the time we initiate the delivery until the first package is delivered than the tractor trailer does.
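The latency side of the comparison works out just as simply, using the same illustrative numbers:

    distance_miles = 1000
    print(distance_miles / 200)   # Ferrari: 5 hours until the first (and only) box arrives
    print(distance_miles / 100)   # tractor trailer: 10 hours, but 1,000 boxes arrive at once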

As you can imagine, in most cases tractor trailers are vastly more practical because their delivery speed is so much higher.  And, this being the case, we actually see large trucks on the highways all of the time and the occurrence rate of Ferraris is very low – even though each cost about the same amount to purchase (very roughly.)  But in special cases, the Ferrari makes more sense.  Just not very often.

This is a general case concept and can apply to numerous applications.  It applies to caching systems, memory, CPU, networking, operating system kernels and schedulers, to cars and more.  Latency and throughput are generally inversely related – we give up latency in order to obtain throughput.  For most operations this makes the best sense.  But sometimes it makes more sense to tune for latency.

Storage is actually an odd duck in computing where nearly all focus on storage performance is around IOPS, which is roughly a proxy measurement for latency, instead of throughput which is measured in “data transferred per second.”  Rarely do we care about this second number as it is almost never the source of storage bottlenecks.  But this is the exception, not the rule.
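A rough way to see why IOPS behaves as a proxy for latency: at a queue depth of one, with no parallelism, IOPS is simply the reciprocal of per-operation latency.  A tiny Python sketch, as a simplification that ignores queuing and concurrency:

    def iops_at_queue_depth_one(latency_ms):
        return 1000.0 / latency_ms      # operations completed per second, one at a time

    print(iops_at_queue_depth_one(5.0))   # ~200 IOPS for a 5 ms spinning disk access
    print(iops_at_queue_depth_one(0.1))   # ~10,000 IOPS for a 0.1 ms flash access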

Latency and throughput can have some surprising interactions in the computing world.  When we talk about networks, for example, we typically measure only throughput (Gb/s) and rarely care much about the latency (normally measured in milliseconds.)  Typically this is because nearly all networking systems have similar latency numbers and most applications are pretty much unconcerned with latency delays.  It is only the rare application, like VoIP over international links or satellite, where latency affects the average person; or latency pops up as an unforeseen problem when people attempt something uncommon like iSCSI over a long distance WAN connection.

One of the places where the interaction of latency and throughput starts to become shocking and interesting is when we move from electrical or optical data networks to physical ones.  A famous quote in the industry is:

Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.

This is a great demonstration of huge bandwidth with very high latency.  Driving fifty miles across town, a single station wagon or SUV could haul hundreds of petabytes of data, hitting data rates that a 10 Gb/s fiber link could not come close to.  But the time for the first data packet to arrive is about an hour.  We often discount this kind of network because we assume that latency must be bounded at under about 500ms.  But that is not always the case.
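A quick back-of-the-envelope check in Python, assuming a round 100 PB payload and a one hour drive (both numbers are illustrative, not from any real test):

    payload_bits = 100e15 * 8      # 100 petabytes expressed in bits
    trip_seconds = 3600            # roughly an hour before the first byte "arrives"
    wagon_gbps = payload_bits / trip_seconds / 1e9
    print(wagon_gbps)              # ~222,000 Gb/s of effective throughput
    print(wagon_gbps / 10)         # tens of thousands of times a 10 Gb/s fiber link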

Australia recently made the news with a test to see if a pigeon carrying an SD card could, in terms of network throughput, outperform the region’s ISP – and the pigeon ended up being faster than the ISP!

In terms of computing performance we often ignore latency to the point of not even being aware of it as a context in which to discuss performance.  But in low latency computing circles it is considered very carefully.  System throughput is generally greatly reduced (it becomes common to target systems to hit only ten percent CPU utilization where more traditional systems target closer to ninety percent), with concepts like real time kernels, CPU affinity, processor pinning, cache hit ratio tuning and careful latency measurement all being used to focus on obtaining the most immediate response possible from a system rather than attempting to get the most total processing out of it.
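As a small taste of one of those techniques, the snippet below pins the current process to a single CPU core using Python's os.sched_setaffinity, which is a real call but Linux-only; the choice of core 0 is arbitrary and purely illustrative:

    import os

    print(os.sched_getaffinity(0))   # the set of cores this process may run on now
    os.sched_setaffinity(0, {0})     # pin this process (pid 0 means "self") to core 0
    print(os.sched_getaffinity(0))   # now restricted to core 0, so it will not migrate

Keeping a latency sensitive process on one core avoids scheduler migrations and keeps its caches warm, at the cost of giving up the throughput that the other cores could have contributed.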

Common places where low latency is desired from a computational perspective are critical controller systems (such as manufacturing controllers, where even a millisecond of latency can cause problems on the factory floor) and financial trading systems, where a few milliseconds of delay can mean that investments have changed in price or that products have already been sold and are no longer available.  Speed, in terms of latency, is often the deciding factor between making money and losing money – even a single millisecond can be crippling.

Technically even audio and video processing systems have to be latency sensitive, but most modern computing systems have so much spare processing overhead, and latency is generally low enough, that most systems, even VoIP PBXs and conferencing systems, can function today while only very rarely needing to be aware of latency concerns on the processing side (even networking latency is becoming less and less common as a concern.)  The average system administrator or engineer might easily go through a career without ever needing to work on a system that is latency sensitive, or at least one without so much available overhead as to hide any latency sensitivity.

Defining speed, whether that means throughput, latency, something else, or some combination of them, is very important in all aspects of IT and in life.  Understanding how throughput and latency affect us in different situations, how they interact in their generally inverse relationship where improvements in one come at a cost to the other, and learning to balance them as needed to improve the systems that we work on, is very valuable.