Disaster recovery planning is always difficult: there are so many factors and “what ifs” to consider, and investing too much in the recovery solution can itself become a bit of a disaster. A factor often overlooked in DR planning is that, in the event of a disaster, you are generally able and very willing to make compromises where needed, because a disaster has already happened. It is triage time, not business as usual.
Many people immediately assume that if your live production systems need capacity and performance of X, your disaster recovery systems will need X as well. In the real world, this is rarely true. In the event of a disaster you can, with rare exception, work with lower performance and limit availability to only the most critical systems, while many maintenance operations, such as archiving, are suspended until full production is restored. This means that your disaster recovery system can often be much smaller than your primary production systems.
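The sizing argument above can be sketched with a simple back-of-the-envelope calculation. This is only an illustration: the workload names, core counts, and the degraded-performance factor are all hypothetical example values, not figures from any real environment.

```python
# Illustrative sketch: estimate DR capacity as a fraction of production.
# All workload names and numbers below are hypothetical examples.

workloads = [
    # (name, production cores, critical during a disaster?)
    ("order database", 32, True),
    ("file server", 16, True),
    ("BI / analytics", 48, False),   # deferred until production is restored
    ("archiving jobs", 8, False),    # maintenance, suspended in a disaster
]

# Accept degraded performance at the DR site, e.g. run critical
# systems at 60% of their normal capacity allocation.
degraded_factor = 0.6

production_total = sum(cores for _, cores, _ in workloads)
dr_needed = sum(
    cores * degraded_factor
    for _, cores, critical in workloads
    if critical
)

print(f"Production capacity: {production_total} cores")
print(f"DR capacity needed:  {dr_needed:.0f} cores "
      f"({dr_needed / production_total:.0%} of production)")
```

With these example numbers, the DR site needs well under a third of the production capacity, which is exactly why a “last refresh” platform is so often adequate.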
Disaster recovery systems are not investments in productivity; they are hedges against failure and need to be seen in that light. Because of this, it is a common and effective strategy to approach DR system needs from the perspective of being “adequate” to maintain business activities, while not necessarily enough to do so comfortably or transparently. If a full-scale disaster hits and staff have to deal with sluggish file retrieval, slower than normal databases, or hold off on a deep BI analysis run until the high-performance production systems are restored, few people will complain. Most workers, and certainly most business decision makers, can be very understanding that a system is in a failed state and that they may need to carry on as best they can until full capacity is restored.
With this approach in mind, it can be an effective strategy to repurpose older platforms for use at disaster recovery sites whenever new platforms are purchased and implemented for primary production use. This creates a low-cost, easily planned “DR pipeline” where the DR site always has the capacity of your last refresh, which, in most DR scenarios, is more than adequate. It is also a great way to make use of equipment that might otherwise either be scrapped outright or tempt its way back into production deployment by invoking the “sunk cost” emotional response that, in general, we want to avoid.
The sunk cost fallacy is a difficult one to avoid. Already owning equipment makes it easy to feel that redeploying it, even outside the designs and specifications of a newly implemented system, must be useful or good. There are cases where this is true, but most often it is not. Yet just as we don’t want to become emotionally attached to equipment simply because we have already paid for it, we also don’t want to ignore the value remaining in equipment we already own. This is where a planned pipeline from production into disaster recovery can, in many cases, leverage our existing investment very effectively. This is likely very useful equipment with a lot of value left in it, if we know how to apply it properly to our existing needs.
A strong production-to-disaster-recovery platform migration planning process can be a great way to lower budgetary spending while getting excellent disaster recovery results.