As IT professionals, we often have to evaluate several different approaches, products or techniques. The IT field is vast, and we are faced with so many options that it can become difficult to filter out the noise and find just the options that truly make sense in our environment.
One thing that I have repeatedly found to be a stumbling block for IT professionals is that they come from a stance of traditional, legacy knowledge (a natural situation, since all of our knowledge has to have come from sometime in the past) and attempt to justify new techniques or technologies in relation to the existing, established assumptions of “normal.” This is to be expected.
IT is a field of change, however, and it is critical that IT professionals accept change as normal and not react to it as an undermining of traditional values. It is not uncommon for people to feel that decisions they have made in the past will be judged by the standards of today. They feel that because there is a better option now, their old decision is somehow invalid or inadequate. This is not the case. The feeling is exacerbated in IT because decisions made in the past that have been dramatically overturned in favour of new knowledge might only be a few years old, and the people who made them are often still doing the same job. Change in IT is much more rapid than in most fields, and we can easily feel betrayed by good decisions that we made only a short time ago.
This reaction puts us into a natural, defensive position that we must rationally overcome in order to make objective decisions about our systems.
One trick that I have found is to reverse questions involving assumed norms. That is to say, if you believe that you must justify a new technique against an old one and find that, while convincing, the case does not totally sway you, perhaps you should try the opposite – justify the old, accepted approach against the new one. I will give some examples that I see in the real world regularly.
Example one, in which we consider virtualization where none existed before. Typically someone looking to do this will look for virtualization to provide some benefit that they consider to be significant. Generally this results in the person feeling either that virtualization doesn’t offer adequate benefits or that they must incorporate other changes, ending up dramatically overboard for what should have been a smaller decision. Instead, attempt to justify not using virtualization. Treat virtualization as the accepted pattern (actually, it long has been, just not in the SMB space) and try to justify going with physical servers instead.
What we find is that, normally, our minds accept that the physical machine only has to be “nearly as good” or “acceptable” in order to be chosen, even though virtualization is, in nearly all cases, “better”. Why would we decide to use something that is not “better”? Because we approached one as change and one as not change. Our minds play tricks on us.
Example two, in which traditional server storage is two arrays, with the operating system on a RAID 1 array and the data partition on a second RAID 5 array, versus the new standard of a single RAID 10 array holding both operating system and data. If we argue from the aspect of the traditional approach we can make decent arguments, at times, that we can make the old system adequate for our needs. Adequate seems good enough to not change our approach. But argue from the other direction. If we assume RAID 10 is the established, accepted norm (again, today it is), then it is clear that it comes out as dramatically superior in nearly all scenarios. If we try to justify why we would choose a split array with RAID 1 and RAID 5, we quickly see that it never provides a compelling value, as the rough comparison below illustrates. So sticking with RAID 10 is a clear win.
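To make the reversal concrete, here is a minimal sketch comparing the two layouts on a hypothetical six-drive server. The drive count, per-drive capacity and the simplified write-penalty figures are assumptions chosen purely for illustration, not measurements from any real system.

```python
# Rough, illustrative comparison of two storage layouts on a hypothetical
# six-drive server. Drive count, per-drive capacity and the simplified
# write-penalty figures are assumptions for illustration, not measurements.

DRIVES = 6        # assumed number of identical drives
DRIVE_TB = 1.0    # assumed capacity per drive, in TB


def split_raid1_raid5():
    """Traditional layout: 2-drive RAID 1 for the OS, 4-drive RAID 5 for data."""
    usable_tb = 1 * DRIVE_TB + 3 * DRIVE_TB  # mirror keeps 1 of 2; RAID 5 keeps n-1 of 4
    write_penalty = 4                        # classic RAID 5 read-modify-write penalty on the data array
    max_failures = 2                         # at most one drive per array, and only if the failures land that way
    return usable_tb, write_penalty, max_failures


def single_raid10():
    """Newer layout: one 6-drive RAID 10 array holding both OS and data."""
    usable_tb = (DRIVES / 2) * DRIVE_TB      # mirrored pairs keep half of the raw capacity
    write_penalty = 2                        # every write simply lands on both halves of a mirror pair
    max_failures = DRIVES // 2               # up to one drive per mirror pair, if the failures land that way
    return usable_tb, write_penalty, max_failures


for name, layout in (("RAID 1 + RAID 5", split_raid1_raid5), ("RAID 10", single_raid10)):
    usable, penalty, failures = layout()
    print(f"{name:15}  usable: {usable:.1f} TB   write penalty: {penalty}x   best-case failures survived: {failures}")
```

Under these assumptions the split layout wins only on raw usable capacity; on write overhead and on how many failures it can potentially absorb, the single RAID 10 array comes out ahead. That is precisely the kind of comparison that never gets made when the split array is treated as the default and only has to be “adequate”.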
This reversal of thinking can have a dramatic, eye-opening effect on decision making. Making assumptions about starting points and forcing new ideas to significantly “unseat” incumbent thinking is dangerous. It keeps us from moving forward. In reality, most approaches should start from equal ground and the “best” option should win. Far too often a solution is considered “adequate” when it is not the best. Yes, a solution may very well work in a given situation, but why would we ever intentionally choose a less than superior solution (assuming that cost is factored into the definition of “best”)?
As IT professionals attempting to solve problems for a business, we should be striving to recommend and implement the best possible solutions, but we often end up making do with less than ideal ones simply because we forget to consider the reasonable options equally against one another. And it is important to remember that cost is part of deciding whether a solution is best or merely adequate. The best solution is not a perfect solution but the best one for the company, for the money. Yet very often solutions are chosen that cost more and do less, simply because they are considered the de facto starting point and the alternatives are expected to dramatically outperform them rather than simply being “better”.
Taking a fresh look at decision making can help us become better professionals.