
You Can’t Virtualize That!

We get this all of the time in IT: a vendor tells us that a system cannot be virtualized.  The reasons are numerous.  On the IT side, we are always shocked that a vendor would make such an outrageous claim; and often we are just as shocked that a customer (or manager) believes them.  Vendors have worked hard to perfect this sales pitch over the years and I think that it is important to dissect it.

The root cause of these problems is that vendors are almost always seeking ways to lower their own costs while increasing profits from customers.  This drives a lot of what would otherwise be seen as odd behaviour.

One thing that many, many vendors attempt to do is limit the scenarios under which their product will be supported.  By doing this, they set themselves up to be able to simply not provide support – support is expensive and unreliable.  This is a common strategy.  In some cases, this is so aggressive that no acceptable, production deployment scenario exists at all.

A very common means of doing this is to fail to support any currently supported operating system, de facto deprecating the vendor’s own software (for example, today this would mean only supporting Windows XP and earlier.)  Another example is only supporting products that are not licensed for the use case (such as requiring that a product like Windows 10 be used as a server.)  And one of the most common cases is forbidding virtualization.

These scenarios put customers into difficult positions because on one hand they have industry best practices, standard deployment guidelines, in house tooling and policies to adhere to; and on the other hand they have vendors often forbidding proper system design, planning and management.  These needs are at odds with one another.

Of course, no one expects every vendor to support every potential scenario.   Limits must be applied.  But there is a giant chasm between supporting reasonable, well deployed systems and actively requiring unacceptably bad deployments.  We hope that our vendors will behave as business partners and share a common interest in our success or, at the very least, the success of their product, and not directly seek to undermine both of these causes.  We would hope that, at a very minimum, best effort support would be provided for any reasonable deployment scenario and that guaranteed support would likely be offered for properly engineered, best practice scenarios.

Imagine a world where driving the speed limit and wearing a seatbelt would violate your car warranty and that you would only get support if you drove recklessly and unprotected!

Some important things need to be understood about virtualization.  The first is that virtualization is a long standing industry best practice and is expected to be used in any production deployment scenario for services.  Virtualization is in no way new, even in the small business market it has been in the best practice category for well over a decade now and for many decades in the enterprise space.  We are long past the point where running systems non-virtualized is considered acceptable, and that includes legacy deployments that have been in place for a long time.

There are, of course, always rare exceptions to nearly any rule.  Some systems need access to very special case hardware and virtualization may not be possible, although with modern hardware passthrough this is almost unheard of today.  And some super low latency systems cannot be virtualized, but these are normally limited to only the biggest international investment banks and most aggressive hedge funds, and even the majority of those traditional use cases have been eliminated by improvements in virtualization, making even those situations rare.  But the bottom line is, if you can’t virtualize, you should be sad that you cannot, and you will know clearly why it is impossible in your situation.  In all other cases, your server needs to be virtual.
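For the rare special-case hardware scenario, passthrough is what usually removes the blocker.  Purely as a minimal sketch, assuming a KVM host with the libvirt Python bindings installed (the guest name and PCI address here are hypothetical placeholders), handing a host PCI device through to a guest can look something like this:

```python
# Minimal sketch: attach a host PCI device to a KVM guest via libvirt.
# The guest name and PCI address are hypothetical placeholders; adjust
# them for the actual host and device in question.
import libvirt

# XML describing the host PCI device to hand through to the guest.
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open('qemu:///system')          # connect to the local hypervisor
dom = conn.lookupByName('vendor-appliance')    # hypothetical guest name
# Persist the device in the guest definition; it takes effect on the next boot.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```

Comparable passthrough facilities exist in the other major hypervisors as well, which is why "we need direct hardware access" is rarely a real reason to skip virtualization anymore.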

Is It Not Important?

If a vendor does not allow you to follow standard best practices for healthy deployments, what does this say about the vendor’s opinion of their own product?  If we were talking about any other deployment, we would immediately question why we were deploying a system so poorly if we plan to depend on it.  If our vendor forces us to behave this way, we should react in the same manner – if the vendor doesn’t take the product as seriously as we take the least of our IT services, why should we?

This is an “impedance mismatch”, as we say in engineering circles, between our needs (production systems) and how the vendor making that system appears to treat them (hobby or entertainment systems.)  If we need to depend on this product for our businesses, we need a vendor that is on board and understands business needs – one that has a production mindset.  If the product is not business targeted or business ready, we need to be aware of that.  We need to question why we feel we should be using a service in production, on which we depend and require support, that is not intended to be used in that manner.

Is It Supported?  Is It Being Tested?

Something that is often overlooked from the perspective of customers is whether or not the necessary support resources for a product are actually in place.  It’s not uncommon for the team that supports a product to become lean, or even disappear, while the company keeps selling the product in the hopes of milking it for as much as it can, banking on either muddling through any problem or simply returning the customer’s money should the vendor be caught in a situation where it is simply unable to provide support.

Most software contracts state that the maximum damages that can be extracted from the vendor are the cost of the product, that is, the amount spent to purchase it.  In a case such as this, the vendor has no risk from offering a product that it cannot support – even when charging a premium for support.  If the customer manages to use the product, great: the vendor gets paid.  If the customer cannot, and the vendor cannot support it, the vendor only loses money that it would never have gotten otherwise.  The customer takes on all of the risk, not the vendor.

This suggests, of course, that there is little or no continuing testing of the product either, and that should be an additional concern.  Just because the product runs today does not mean that it will continue to run.  Getting up and running with an unsupported, or worse unsupportable, product means depending more and more on a product with a likely decreasing level of potential support – one that slowly gets worse over time even as the need for support, and the dependency on the software, would be expected to increase.

If a proprietary product is deployed in production, and the decision is made to forgo best practice deployments in order to accommodate support demands, how can this fit into a decision matrix? Should this imply that proper support does not exist? Again, as before, this implies a mismatch with our needs.


Is It Still Being Developed?

If the deployment needs of the software follow old, out of date practices, or require out of date (or not reasonably current) software or designs, then we have to question the likelihood that the product is currently being developed.  In some cases we can determine this by watching the software release cycle for some time, but not in all cases.  There is a reasonable fear that the product may be dead, with no remaining development team working on it.  The code may simply be old technical debt that is being sold in the hopes of making a last few dollars off of a code base that has been abandoned.  This is actually far more common than is often believed.

Smaller software shops often manage to develop an initial software package and get it on the market and available for sale, but cannot afford to retain or restaff their development team after the initial release(s).  This is, in fact, a very common scenario.  It leaves customers with a product that can be expected to become less and less viable over time, with deployment scenarios becoming increasingly risky and data increasingly hard to extricate.


How Can It Be Supported If the Platform Is Not Supported?

A common paradox of some of the more extreme situations is software that, in order to qualify as “supported”, requires other software that is either out of support or was never supported for the intended use case.  Common examples of this are requiring that a server system be run on top of a desktop operating system, or requiring versions of operating systems, databases or other components that are no longer supported at all.  This last scenario is scarily common.  In a situation like this, one has to ask whether there can ever be a deployment where the software can be considered “supported” at all.  If part of the stack is always out of support, then the whole stack is unsupported; there would always be a reason that support could be denied, no matter what.   The very justification that drives us to abandon best practices – staying supported – would equally rule out choosing the software itself in the first place.

Are Industry Skills and Knowledge Lacking?

Perhaps the issue that we face with software support problems of this nature is that the team(s) creating the software simply do not know how good software is made and/or how good systems are deployed.  This is among the most reasonable and valid explanations for what would drive us to this situation.  But, like the other hypothesized reasons, it leaves us concerned about the quality of the software and about whether support is truly available.  If we can’t trust the vendor to properly handle the most visible parts of the system, why would we turn to them as our experts for the parts that we cannot verify?

The Big Problem

The big, overarching problem with software that demands questionable deployment and maintenance practices in exchange for unlocking otherwise withheld support is not, as we typically assume, a question of overall software quality, but one of viable support and development practices.  That these issues suggest a significant concern for long term support should make us strongly question why we are choosing these packages in the first place, expecting strong support from them when, from the outset, we have very visible and very serious concerns.

There are, of course, cases where no other software products exist to fill a need, or none with any more reasonable viability.  This situation should be extremely rare and, where it does exist, should be seen as a major market opportunity for a vendor looking to enter that particular space.

From a business perspective, it is imperative that technical infrastructure best practices not be completely ignored in exchange for blind, or nearly blind, following of vendor requirements that, in any other instance, would be considered reckless or unprofessional. Why do we so often neglect to require excellence from core products on which our businesses depend in this way?  It puts our businesses at risk, not just from the action itself, but vastly more so from the risks implied by the existence of such a requirement.

Choosing Software Versions for Deployment

Something that I see discussed very often in IT circles is “which version of software should I install?” This could apply to a database, an application, firmware or, probably most often, operating systems, and with the upcoming end of support life for Windows XP the topic has reached a fever pitch.

There are effectively two sides to this discussion. One side believes that the latest and, presumably, greatest software should always be used. The other believes that software needs to mature and take a “wait and see” approach or even considers each version to be a different product and not a continuum of development.

Both approaches have their merits and neither should exist completely without the other. Blindly updating software willy-nilly is not wise, and avoiding patches and updates without reason is not wise either. Careful consideration of the factors and empathy for the software development process are important to keep in mind when making these decisions.

First, there are two completely different scenarios to consider. One is the updating of current, existing software. The assumption being that the current state of things is “working”, with the accepted possibility that “working” might include a security exposure that has been discovered and requires updating in order to close. The other scenario is a new deployment where there is nothing currently and we are starting from scratch.

Let’s start with the second case, as it is far easier to provide guidance on.

In the case of new software deployments (or new operating systems), always use the current, most recent version of the software unless there is a clearly known technology limitation preventing it, such as known bugs or software incompatibilities.

Software is not like other types of products, especially not in today’s world of online patch releases and updates. I assume that the mentality that old versions of software might be preferable to current ones comes from a combination of physical products (watches, cars, dishes, furniture, wine) where a specific year or model might be superior to a newer model for various reasons and from legacy software delivery modes where finished software products were just “thrown over the wall” and the final state was, quite simply, the final state without any reasonable opportunities for updates, patches or fixes. Neither of these cases applies to modern business software (with only the rarest of exceptions.)

Software development is roughly a continuum. Normal development processes have new software being built on top of old software either directly (by creating updates to an existing code base) or indirectly (by rebuilding based on knowledge gained from having built a previous version of the software.) The idea is that each subsequent version of software is superior to the one preceding it. This is not guaranteed, of course – there are such things as regression errors and just plain bad development – but by and large, software improves over time, especially when we are talking about enterprise class software used in businesses and under active development. New software is not just the next phase of the old software; it also represents, in nearly all cases, the current state of patches, bug fixes, updates and, when necessary, changes in approach or technique. New software, coming from quality shops, is almost exclusively better than old software. Software evolves and matures.

Beyond the quality of software itself, there is the concept of investing in the future. Software is not something that can sit on the shelf forever. It needs to stay, to some degree, up to date or it stops functioning because the platform that it runs on changes, some new artifact comes to light, security holes are discovered or needs change. Installing old software means that there is an investment in the past, an investment in installing, learning, using and supporting old technology. This is called “technical debt.” This old technology might last for years or even decades, but old software loses value over time and becomes increasingly expensive to support both for the vendors, if they continue to support it, and for the end users, who have to support it.

The same concept of technical debt applies to the software vendors in question. There is a very large cost in creating software and especially in maintaining multiple versions of that software. Software vendors have a lot of incentive to reduce support for older versions to focus resources on current software releases (this is a major reason why SaaS deployments are so popular, the vendor controls the available versions and can eliminate legacy versions through updates.) If customers require support for old versions, the cost must be absorbed somewhere and often it is absorbed both in monetary impact to all customers as well as a decrease in focus on the new product as development teams must be split to support patching old versions as well as developing the new. The more effort that must go into old versions, the less effort that can be put into new improvements.

Within the framework of what I have already said, it is important to talk about code maturity. Often code maturity is stated as a reason for deploying “old code”, but I think that this is an IT misunderstanding of software development processes. If we think about a released line of code, just because it is released and in use does not really make it more mature. Code does not change in the wild, it just sits there. Its maturity is “locked” on the day that it is released. If it is patched, then yes, it would “mature” post release. Later versions of the same software, based on the same code base but more up to date, are truly the more “mature” code, as they have been reviewed, updated, tested, etc. to a greater degree than the early release of the same code.

This is counterintuitive to, say, a car where each release is a fresh thing with new opportunities for mechanical problems and different reliability concerns – where waiting a few years gives you a chance to see what reliability issues get uncovered. Software is not like this. So the concept of wanting more mature software would push you to deploy the “latest and greatest” rather than the “tried and true.”

If we think of software version numbers rather like ages, this comes through.  Linux 3.1 is, in terms of code maturity, far older than Linux 2.4 – it has a decade of additional development behind it.

Let’s use a real world example that is very relevant today. You are in a shop about to install your first server(s), and Windows Server 2012 R2 has just been released. Should you install Windows Server 2008, Server 2008 R2 (2009), Server 2012 or Server 2012 R2 (late 2013)?

To many shops, this sounds like we are talking about somewhere between two and four different products entirely which probably have different reasons for choosing each. This, by and large, is untrue. Each newer version is simply an upgrade, update, patch and feature increase on the previous one. Each one, in turn, is more advanced and mature than the one preceding it. Each new version benefits from the work done on the original release of its predecessor as well as bug fixes, patches and feature additions done in the interim between the original release and the successor release. Each new release is, in reality, a “minor release” of the one before it. If we look at the kernel revision numbers, instead of the marketing names of the releases, it might make more sense.

Windows Server 2008 was Windows NT 6.0. Windows Server 2008 R2 was Windows NT 6.1, obviously a minor revision or even a “patch” of the previous release. Windows Server 2012 was Windows NT 6.2 and our current Windows Server 2012 R2 is Windows NT 6.3. If we were to use the revision numbers instead of the marketing names, it sounds almost crazy to intentionally install an old, less mature, less updated and less patched version. We want the latest updates, the latest bug fixes and the latest security issues to have been addressed.
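To make that mapping concrete, here is a small illustrative sketch (Python is used purely for demonstration) of the marketing names against the kernel revisions listed above; it simply shows that each release is a minor step up from its predecessor:

```python
# Marketing names versus the underlying NT kernel revisions, as described above.
releases = [
    ("Windows Server 2008",    (6, 0)),
    ("Windows Server 2008 R2", (6, 1)),
    ("Windows Server 2012",    (6, 2)),
    ("Windows Server 2012 R2", (6, 3)),
]

# Walk each pair of consecutive releases and show the size of the step.
for (old_name, old), (new_name, new) in zip(releases, releases[1:]):
    print(f"{old_name} (NT {old[0]}.{old[1]}) -> {new_name} "
          f"(NT {new[0]}.{new[1]}): a 0.{new[1] - old[1]} revision step")
```

Seen this way, choosing an earlier release is not choosing a different product; it is choosing an older revision of the same one.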

For new software deployments, the newer the software installed, the better opportunity to leverage the latest features and the most time before inevitable obsolescence takes its toll. All software ages so installing newer software gives the best chance that that software will last for the longest time. It provides the best flexibility for the unknown future.

Following this line of thinking might lead us to feel that deploying pre-release or even beta software would make sense as well. And while there might be specific cases where this does make sense, such as in “test groups” to check out software before releasing it to the company at large, in general it does not. The nature of pre-release software is that it is not supported and may contain code which never will be supported. Using such code in isolation can be beneficial, but for general use it is not advised.  There are important processes that are followed between preview or beta releases and final releases of code no matter what maturity level the overall product is at.

That brings us to the other situation, the one in which we are updating existing software. This, of course, is a completely different scenario to a fresh install and there are many, many more factors involved.

One of the biggest factors for most situations is that of licensing. Updating software regularly may incur licensing fees that need to be factored into the benefits and cost equation. Some products, like most open source software, do not have this cost and can be updated as soon as new versions are available.

The other really large factor in updating software is the human effort cost of updating – unlike in a fresh installation, where the effort of installing is effectively a break-even between old software and new.  In reality, new software tends to be easier to install than old software, simply due to improvements and advancements.  Maintaining a single version of software for a decade means that resources were not dedicated, during that time, to upgrade processes. Upgrading annually during that time means that resources were used ten times to carry out separate upgrades. That makes updating much harder to cost justify. But there is more than just the effort of the update process itself; there is also the continuous training needed for end users, who will be forced to experience more changes, more often, through constant upgrades.

This might make updating software sound like a negative, but it is not. It is simply an equation where each side needs to be weighed. Regular updates often mean small, incremental changes rather than large leaps, allowing end users to adapt more naturally. Regular updates mean that update processes are often easier and more predictable. Regular updates mean that technical debt is always managed and that the benefits of the newer versions – which may be features, efficiencies or security improvements – are available sooner, allowing them to be leveraged for a longer period of time.
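As a minimal sketch of how the two sides of that equation might be laid out, the snippet below compares two hypothetical cadences over a ten year horizon.  Every figure in it is an invented placeholder, not a measurement; the point is only that each side of the equation has concrete, estimable inputs:

```python
# Illustrative only: all figures below are hypothetical placeholders.
# Substitute your own estimates for labour, licensing, retraining and the
# cost you assign to carrying old versions (technical debt).

def upgrade_cost(upgrades, effort, licence, retraining):
    """Direct cost of performing a number of upgrades over the horizon."""
    return upgrades * (effort + licence + retraining)

def debt_cost(average_version_age_years, penalty_per_year):
    """Rough proxy for the ongoing cost of carrying technical debt."""
    return average_version_age_years * penalty_per_year

horizon = 10  # planning horizon, in years

# Strategy A: small annual upgrades, so versions stay young.
annual = (upgrade_cost(upgrades=horizon, effort=5, licence=3, retraining=1)
          + debt_cost(average_version_age_years=0.5, penalty_per_year=2))

# Strategy B: run one version for the decade, then one large migration.
big_bang = (upgrade_cost(upgrades=1, effort=25, licence=3, retraining=8)
            + debt_cost(average_version_age_years=5, penalty_per_year=2))

print(f"Annual upgrades  : {annual} cost units")
print(f"Single migration : {big_bang} cost units")
```

Which strategy wins depends entirely on the numbers that an individual organization plugs in, which is exactly the point: it is a business equation to be worked, not a rule to be assumed.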

Taking what we have learned from the two scenarios above, however, there is another important takeaway to be found here. Once the decision to perform an update has been made, the question is often “to what version do we update?” In reality, however, every update that is more than a standard patching process is really like a miniature “new software” buying decision and the logic as to why we “always” install the newest available version when doing a fresh install also applies here. So when performing an update, we almost always should be updating as far as we can – hopefully to the current version.

To apply the Microsoft example again, we can take an organization that has Windows XP deployed today. The business decides to invest in an update cycle to a newer version, not just continued patching. There are several versions of the Windows desktop platform that are still under active support from Microsoft: Windows Vista, Windows 7, Windows 8 and Windows 8.1. Updating to one of the less current versions results in less time before that version’s end of life, which increases organizational risk; using older versions means continued investment in already old technologies, which means an increase in technical debt and less access to new features which may prove to be beneficial once available. In this particular example, the newer versions are also considered to be more secure and require fewer hardware resources.

Every business needs to find the right balance for its existing software update cycles. Every business and every software package is different. Enterprise software like Microsoft Windows, Microsoft Office or an Oracle Database follows these models very well. Small software projects, and those falling near the bespoke range, may have a more dynamic and unpredictable release cycle but will generally still follow most of these rules. Consider applying empathy to the software development process to understand how you and your software vendor can best partner to deliver the greatest value, and combine that with your need to reduce technical debt, to leverage your software investment in the best possible way for your organization.

But the rules of thumb are relatively easy:

When deploying new or updating, shoot for the latest reasonable version of software.  Use any deployment opportunity to eliminate technical debt as much as possible.

When software already exists, weigh factors such as human effort, licensing costs, environmental consistency and compatibility testing against benefits in features, performance and technical debt.