
No One Ever Got Fired For Buying…

It was the 1980s when I first heard this phrase in IT, and then it was “no one ever got fired for buying IBM.”  The idea was that IBM was so well known, trusted and reliable that it was the safe vendor for a technology decision maker to select.  As long as you chose IBM, you were not going to get in trouble, no matter how costly or ineffective the resulting solution turned out to be.

The statement on its own feels like a simple one.  It makes for an excellent marketing message and IBM, understandably, loved it.  But it is what is implied by the message that causes so much concern.

First, we need to understand what the role of the IT decision maker in question is.  This might sound simple, but it is surprising how easily it can be overlooked.  Once we delve into the ramifications of the statement itself, it is far too easy to lose track of the real goals.  In the role of a decision maker, the IT professional is tasked with selecting the best solution for their organization based on its ability to meet organizational goals (normally profits).  This means evaluating options, shielding non-technical management from salespeople and marketing, understanding the marketplace, and doing research and careful evaluation.  These things seem obvious, until we begin to put them into practice.

What we have to then analyze is not that “no one ever got fired for choosing product X”, but what the ramifications of such a statement actually are.

First, the statement implies an organization that is going to judge IT decision making not on its merits or applicability but on the brand name recognition of the product maker.  For a statement like this to have any truth behind it, the organization must not only lack the ability or the desire to evaluate decisions, but must also prefer large, expensive brand names (the statement is always made in conjunction with items that cost far more than the alternatives) over other options.  An organizational preference towards expensive, harder to justify spends is a dangerous one at best.  The assumption is not only that buying the most expensive, most famous products will be judged well compared to buying less expensive or less well known ones, but that buying products at all is seen as better than not buying products; even though often the best IT decision is to buy nothing when no need exists.  Prioritizing spending over savings for its own sake, without consideration for the business need, is very bad indeed.

Second, once we recognize the organizational reality that this implies, and that the IT decision maker is willing to seize the opportunity to leverage corporate politics as a means of skipping, possibly entirely, the time and effort needed to make a true assessment of the needs of the business, we have a strong question of ethics.  Whether out of fear that the organization will not properly evaluate the results, or will blame the decision maker after the fact for unforeseeable events, or out of a desire to take advantage of the situation and be paid for a job that was not done, we have a significant problem: individually, organizationally, or both.

For any IT decision maker to use this mindset, one in which there is safety in a given decision regardless of its suitability, there has to be a fundamental distrust of the organization.  Whether that distrust is justified is not known, but the IT decision maker must believe it for such a thought to even exist.  In many organizations it is true that politics trump good decision making and it is far more important to make decisions for which you cannot be blamed than to honestly try to do a good job.  That is sad enough on its own, but too often this is simply an opportunity to skip the very job for which the IT decision maker is hired and paid: instead of doing a difficult job that requires deep business and technical knowledge, market research, cost analysis and more – simply allowing a vendor to sell whatever they want to the business.

At best, it would seem, we have an IT decision maker with little to no faith in the ethics or capabilities of those above them in the organization.  At worst we have someone actively attempting to take advantage of a business: being paid to be a key decision maker while, instead of doing the job for which they were hired, or even doing nothing at all, actively putting their weight behind a vendor that was not properly evaluated, possibly solely to avoid doing any of the work themselves.

What should worry an organization is not that vendors that could often be considered “safe” get recommended or selected, but rather why they were selected.  Vendors that fall into this category often offer many great products and solutions or they would not earn this reputation in the first place.  But likewise, after gaining such a reputation those same vendors have a strong financial incentive to take advantage of this culture and charge more while delivering less as they are not being selected, in many cases, on their merits but instead on their name, reputation or marketing prowess.

How does an organization address this effect?  There are two ways.  One is to evaluate all decisions carefully in a post mortem structure to understand what good decisions look like, and not to limit post mortems to obviously failed projects.  The second is to look more critically, rather than less critically, at popular product and solution decisions, as these are red flags that decision making may have been skipped or undertaken with less than the appropriate rigor.  Popular companies, assumed standard approaches, and solutions commonly found in advertising or commonly recommended by salespeople, resellers, and vendors should be looked at with a discerning eye, more so than less common, more politically “risky” choices.


Buyers and Sellers Agents in IT

When dealing with real estate purchases, we have discrete roles defined legally as to when a real estate agent represents the seller or when they represent the buyer.  Each party gets clear documentation as to how they are being represented.  In both cases, the agent is bound by honesty and ethical limitations, but beyond that their obligations are to their represented party.

Outside of the real estate world, most of us do not deal with buyer’s agents very often.  Seller’s agents are everywhere, we just call them salespeople.  We deal with them at many stores and they are especially evident when we go to buy something large, like a car.

In business, buyer’s agents are actually pretty common and come in some interesting and unspoken forms.  Rarely does anyone talk about buyer’s agents in business terms, mostly because we are not talking about buying objects but about buying solutions, services or designs.  Identifying buyer’s and seller’s agents alone can become confusing and, often, companies may not even recognize when a transaction of this nature is taking place.

We mostly see the engagement of sellers – they are the vendors with products and services that they want us to purchase.  We can pretty readily identify the seller’s agents that are involved.  These include primarily the staff of the vendor itself and the sales people (which includes pre-sales engineering and any “technical” resource that gets compensation by means of the sale rather than being explicitly engaged and remunerated to represent your own interests) of the resellers (resellers being a blanket term for any company that is compensated for selling products, services or ideas that they themselves do not produce; this commonly includes value added resellers and stores.)  The seller’s side is easy.  Are they making money by somehow getting me to buy something?  If so… seller’s agent.

Buyer’s agents are more difficult to recognize.  So much so that it is common for businesses to forget to engage them, overlook them, or confuse seller’s agents for them.  Sadly, outside of real estate, the strict codes of conduct and legal oversight do not exist, and ensuring that a seller’s agent is not mistakenly engaged where a buyer’s agent should be is purely up to the organization engaging said parties.

Buyer’s agents come in many forms but the most common, yet hardest to recognize, is the IT department or staff itself.  This may seem like a strange thought, but the IT department acts as a technical representative of the business and, because it is not directly the business itself, an emotional stopgap that can aid in reducing the effects of marketing and sales tactics while helping to ensure that technical needs are met.  The IT team is the most important buyer’s agent in the IT supply chain and the last line of defense for companies to ensure that they are engaging well and getting the services, products and advice that they need.

Commonly, IT departments will engage consulting services to aid in decision making.  The paid consulting firm is the most identifiable buyer’s agent in the process and the one that is most often skipped (or a seller’s agent is mistaken for the consultant.)  A consultant is hired by, paid by, and ethically bound to represent the buyer.  Consultants also have an additional air gap that helps to separate them from the emotional responses common to the business itself.  The business and its internal IT staff are easily motivated by having “cool solutions” or expensive “toys,” or can easily be caused to panic through good marketing, but consultants have many advantages.

Consultants have the advantage that they are often specialists in the area in question or at least spend their time dealing with many vendors, resellers, products, ideas and customer needs.  They can more easily take a broad view of needs and bring a different type of experience to the decision table.

Consultants are not the ones who, at the end of the day, get to “own” the products, services or solutions in question and are generally judged on their ability to aid the business effectively.  Because of this they have a distinct advantage in being more emotionally distant and therefore more objective in deciding on recommendations.  The coolest, newest solutions have little effect on them while cost effectiveness and business viability do.  More importantly, consultants and internal IT working together provide an important balancing of biases, experience and business understandings that combine the broad experience across many vendors and customers of the one, and the deep understanding of the individual business of the other.

One can actually think of the Buyer’s and Seller’s Agent system as a “stack”.  When a business needs to acquire new services, products or advice, the ideal and full stack would look something like this: Business > IT Department > ITSP/Consultants <> Value Added Reseller < Distributor < Vendor.  The <> denotes the reflection point between the buyer’s side and the seller’s side.  Of course, many transactions will not and should not involve the entire stack.  But this visualization can be effective in understanding how these pieces are “designed” to interface with each other.  The business should ideally get the final options from IT (IT can be outsourced, of course), IT should interface through an ITSP consultant in many cases, and so forth.  An important part of the process is keeping actors on the left side of the stack (or the bottom) from having direct contact with those high up in the stack (or on the right), because this can short circuit the protections that the system provides, allowing vendors or sales staff to influence the business without the buyer’s agents being able to vet the information.

Identifying, understanding and leveraging the buyer’s and seller’s agent system is important to getting good, solid advice and sales for any business and is widely applicable far outside of IT.

Understanding Bias

I often write about the importance of alignment in goals between IT and vendors and how critical it is to avoid getting advice from those whom you are not paying for that advice, because that makes them salespeople – basically, the importance of getting advice and guidance from a buyer’s agent rather than directly from the seller’s agent.  This leads to questions about bias; clearly the idea is that a salesperson is biased in a way that is likely unfavourable to you.  But it should be obvious that all people are biased.

This is true, all people have bias.  We cannot seek to escape or remove all bias, that is simply impossible.  In fact, in many ways, when we see advice whether it be from a paid consultant whose job it is to present us with a good option, from IT itself doing the same or getting feedback from a friend on products that they have tested – it is actually their biases that we are seeking!

What we need to do is strive to understand the biases and motivations of the people from whom we receive advice, be self-reflective enough to understand our own biases, have a good knowledge of which biases are good for us, and attempt to get advice from people who have a general bias-alignment with us.

Biases come in many forms.  We can have good and bad biases, strong and weak ones.

The biggest biases typically come externally, in the form of monetary or near-monetary compensation for bias.  This might be someone being paid as a salesperson to promote the products that they are available to sell; commission structures take this to an even more acute level.  Someone paid to do sales might face two of the strongest biases: monetary (they get money if they make the sale) and ethical (they made an agreement to sell this product if possible and they are ethically bound to try to do so.)  These are the standard biases of the “seller’s agent” or salesperson.

On the other hand, a consultant is paid by the buyer or customer and is a buyer’s agent, with the same monetary and ethical biases, but in favour of the buyer rather than against them.  (I use the terms buyer and customer here mostly interchangeably to represent the business or IT department, the ones receiving advice or guidance on what to do or buy.)  These biases are pretty evident and easy to control, and I have covered them before – never get advice from the seller’s agent, always get your advice from the buyer’s agent.

If we assume that these big biases, those of alignment, are covered we still have a large degree of bias from our buyer’s agent that we need to uncover and understand.

One of the most common biases is the bias towards familiarity.  This is not a bad bias, but we must be aware of it and how it colours recommendations.  This bias can run very deep and affect decision making in ways that we may not understand without investigation.  At the highest level, the idea is simply that almost anyone is going to favour, possibly unintentionally, solutions and products with which they have familiarity, and the stronger that familiarity, the stronger the bias towards those products will often be.

This may seem obvious but it is a bias that is commonly overlooked.  People turning to consultants will often seek advice from someone with a very small set of experiences, which strongly shapes where the resulting recommendations are likely to be drawn from.  In a way, this is effectively the buyer preselecting the desired outcome and choosing a consultant that will deliver it.  An example of this would be choosing a network engineer to design a solution when that engineer only knows one product line; naturally the engineer will almost certainly design a solution from that product line.  In choosing someone with limited experience in that area we are, for all intents and purposes, directing the results by picking based on a strong bias.  This happens extremely often in IT, presumably because those hiring consultants base the decision on what they think are foregone conclusions about what the resulting advice will be, forgetting to step back and get advice at a higher level.

Of course, like with many things, there is also an offset bias to the familiarity bias, the exploration bias.  While we tend to be strongly biased towards things that we know, there is also a bias towards the unknown and the opportunity to explore and learn.  This bias tends to be extremely weak compared to the familiarity bias, but far from trivial in many IT practitioners.  It is a bias that should not be ignored and is important for helping broaden the potential scope of advice from a single consultant.

Of course there are more biases that stem from familiarity.  There is a natural, strong bias towards companies whose products we have found to be good, whose support has been good, or with whom we interact well.  Likewise, we tend to be strongly biased against companies with whom we have experienced product, support or interaction issues.  These, of course, are highly valuable biases that we specifically want consultants to bring with them.

One of the worst biases, however, and one that affects everyone, is marketing bias.  Companies with large or well made marketing campaigns, or that align with industry marketing campaigns, can induce a large amount of bias that is not based on anything valuable to the end user.  Similarly, market share is an almost valueless and often negative factor (large companies often charge more for equal products – e.g. you “pay for the name”) but can be a strong bias, one often brought to the table by the customer.  Customers commonly either feed this bias directly, by demanding that only well marketed, seemingly popular or heavily promoted recommendations be made, or react poorly to seemingly alternative solutions: both reactions heavily influence what a consultant is willing to recommend.  This is the “no one ever got fired for buying IBM” effect from the 1980s, and it is often an amazingly costly bias and a difficult one to overcome.  Of course it applies much more broadly than only to IBM and does not primarily pertain to them today, but the term became famous during IBM’s heyday in IT.

Of course the main bias that we seek is the bias of “what is the best option for the customer.”  This is, itself, a bias.  One that we hope, when combined with other positive biases, overpowers the influence of negative biases.  And likewise there is a prestige bias, a desire to produce advice that is so good that it increases the respect for the consultant.

Biases come in many different types and are both the value in advice and the danger in it.  Leveraging bias requires an understanding of the major biases that are, or are likely to be, at play in any specific instance, as well as having empathy for the people who give advice.  If you take the time to learn what their financial, ethical, experiential and objective biases are, you can understand their role far better and you can better filter their advice based on that knowledge.

Take the time to consider the biases of the people from whom you get advice.  You likely already know many of the biases that affect them significantly and may be able to guess at more of them.  Everyone has different biases and all people react to them differently.  What is a strong bias for one person is a weak one for someone else.  Consider talking to your consultants about their biases; they should be open to this conversation (and if not, be extra cautious) and hopefully have thought about it themselves, even if not in depth or in the same terms.

The people from whom you get advice should have biases that strongly align favourably towards you and your goals.


Explaining the Lack of Large Scale Studies in IT

IT practitioners ask for these every day and yet none exist – large scale risk and performance studies for IT hardware and software.  This covers a wide array of possibilities, but common examples are failure rates between different server models, hard drives, operating systems, RAID array types, desktops, laptops, you name it.  And yet, regardless of the high demand for such data, there is none available.  How can this be?

Not all cases are the same, of course, but by and large there are three really significant factors that come into play keeping this type of data from entering the field.  These are the high cost of conducting a study, the long time scale necessary for a study and a lack of incentive to produce and/or share this data with other companies.

Cost is by far the largest factor.  If the cost of large scale studies could be overcome, all other factors could have solutions found for them.  But sadly the nature of a large scale study is that it will be costly.  As an example we can look at server reliability rates.

In order to determine failure rates on a server we need a large number of servers from which to collect this data.  This may seem like an extreme example, but server failure rate is one of the most commonly requested large scale study figures, so the example is an important one.  We would need perhaps a few hundred servers for a very small study, but to get statistically significant data we would likely need thousands of servers.  If we assume that a single server is five thousand dollars, which would be a relatively entry level server, we are looking at easily twenty five million dollars of equipment!  And that is just enough to do a somewhat small scale test (just five thousand servers) of a rather low cost device.  If we were to talk about enterprise servers we would easily jump to thirty or even fifty thousand dollars per server, taking the cost to as much as a quarter of a billion dollars.
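
To make that arithmetic concrete, here is a quick back of the envelope sketch in Python.  It uses only the sample size and price points assumed above as illustration; none of these figures are real quotes:

    # Back-of-the-envelope equipment cost for a hypothetical failure rate study.
    # All figures are the illustrative assumptions from the text, not vendor pricing.
    sample_size = 5_000  # servers needed for even a modest study

    for unit_cost in (5_000, 30_000, 50_000):  # entry level through enterprise class, USD
        total = sample_size * unit_cost
        print(f"{sample_size:,} servers at ${unit_cost:,} each: ${total:,}")

    # 5,000 x $5,000  = $25,000,000   (twenty five million dollars)
    # 5,000 x $50,000 = $250,000,000  (a quarter of a billion dollars)

And that is the equipment alone, before facilities, sensors or staff are considered.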

Now that cost, of course, is for testing a single configuration of a single model server.  Presumably for a study to be meaningful we would need many different models of servers.  Perhaps several from each vendor to compare different lines and features.  Perhaps many different vendors.  It is easy to see how quickly the cost of a study becomes impossibly large.

This is just the beginning of the cost, however.  To do a good study is going to require carefully controlled environments on par with the best datacenters to isolate environmental issues as much as possible.  This means highly reliable power, cooling, airflow, humidity control, and vibration and dust control.  Good facilities like this are very expensive, which is why many companies do not pay for them even for valuable production workloads.  In a large study this cost could easily exceed the cost of the equipment itself over the course of the study.

Then, of course, we must address the needs for special sensors and testing.  What exactly constitutes a failure?  Even in production systems there is often dispute on this.  Is a hard drive failing in an array a failure, even if the array does not fail?  Is predictive failure a failure? If dealing with drive failure in a study, how do you factor in human components such as drive replacement which may not be done in a uniform way?  There are ways to handle this, but they add complication and make the studies skew away from real world data to contrived data for a study.  Establishing study guidelines that are applicable and useful to end users is much harder than it seems.

And the biggest cost: manual labor.  Maintaining an environment for a large study takes human capital which may equal the cost of the study itself.  It takes a large number of people to maintain a study environment, run the study itself, monitor it and collect the data.  All in all, the costs are, quite simply, impossible to cover.

Of course we could greatly scale back the test, run only a handful of servers and only two or three models, but the value of the test rapidly drops and we risk ending up with results that no one can use while still having spent a large sum of money.

The second insurmountable problem is time.  Most things need to be tested for failure rates over time and, as equipment in IT is generally designed to work reliably for decades, collecting data on failure rates requires many years.  Mean Time to Failure numbers are only so valuable; Mean Time Between Failures, along with failure types, modes and statistics on those failures, is very important in order for a study to be useful.  What this means is that for a study to be truly useful it must run for a very long time, creating greater and greater cost.
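
To make the time factor concrete, a fleet MTBF estimate is typically computed as total observed operating time divided by the number of failures seen.  A minimal sketch follows; the helper function and every number in it are invented purely for illustration:

    # Simple fleet MTBF estimate: total operating hours / observed failures.
    # The device counts and failure counts below are invented for illustration only.
    def estimate_mtbf_hours(devices, years_observed, failures):
        hours = devices * years_observed * 24 * 365
        if failures == 0:
            return None  # with zero failures observed, no MTBF can be computed this way
        return hours / failures

    # A short observation window sees few (or zero) failures and yields a noisy,
    # unstable estimate; only long windows produce a number worth relying on.
    print(estimate_mtbf_hours(devices=1_000, years_observed=1, failures=2))
    print(estimate_mtbf_hours(devices=1_000, years_observed=5, failures=11))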

But that is not the biggest problem.  The far larger issue is that for a study to have enough time to generate useful failure numbers, even if those numbers were coming out “live” as they happened, it would already be too late.  The equipment in question would already be aging and nearing time for replacement in the production marketplace by the time the study was producing truly useful early results.  Often production equipment is only purchased for a three to five year total lifespan.  Getting results even one year into this span would have little value.  And new products may replace those in the study even more rapidly than the products age naturally, making the study valuable only as history, with no use in guiding a production purchasing decision – the results would be too old to be useful by the time that they were available.

The final major factor is a lack of incentive to provide existing data to those who need it.  A few sources of data do exist, but nearly all are incomplete and exist so that large vendors can measure their own equipment quality, failure rates and the like.  These are rarely collected in controlled environments and often involve data gathered from the field.  In many cases this data may even be private to customers and not legally able to be shared regardless.

But vendors who collect data do not collect it in an even, monitored way, so sharing that data could be very detrimental to them because there is no assurance that equal data from their competitors would exist.  Uncontrolled statistics like that would offer no true benefit to the market nor to the vendors who hold them, so vendors are heavily incentivized to keep such data under tight wraps.

The rare exceptions are some hardware studies from companies such as Google and Backblaze, who have large numbers of consumer class hard drives in relatively controlled environments and collect failure rates for their own purposes.  They face little or no risk from competitors leveraging that data, and they gain public relations value from releasing it, so occasionally they will publish a study of hardware reliability on a limited scale.  These studies are hungrily devoured by the industry even though they generally contain relatively little value: their data is old and gathered under unknown conditions and thresholds, and often does not contain statistically meaningful data for product comparison.  At best, they contain general industry wide statistical trends that are somewhat useful for predicting future reliability paths.

Most other companies large enough to have internal reliability statistics have them on a narrow range of equipment and consider that information to be proprietary, a potential risk if divulged (it would give out important details of architectural implementations) and a competitive advantage.  So for these reasons they are not shared.

I have actually been fortunate enough to have been involved in and to have run a large scale storage reliability test, conducted somewhat informally but very valuably, on over ten thousand enterprise servers over eight years, resulting in eighty thousand server years of study – a rare opportunity.  What that study primarily showed is that on a set so large we were still unable to observe a single failure!  The lack of failures was, itself, very valuable.  But we were unable to produce any standard statistic like Mean Time to Failure.  To produce the kind of data that people expect, we know that we would have needed hundreds of thousands of server years, at a minimum, to get any kind of statistical significance, and we cannot reliably state that even that would have been enough.  Perhaps millions of server years would have been necessary.  There is no way to truly know.
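
As an aside, there is a standard statistical way to express what zero failures in a large sample can, and cannot, tell you: the “rule of three.”  With no failures observed in N unit-years, the 95% upper confidence bound on the annual failure rate is roughly 3/N.  A minimal sketch applying it to the eighty thousand server years above (this illustrates the statistics only, not how that study was actually analyzed):

    # "Rule of three": with zero failures in n observations, the 95% upper
    # confidence bound on the per-observation failure probability is about 3/n.
    server_years = 80_000
    upper_bound_annual_rate = 3 / server_years
    print(f"Annual failure rate is likely below {upper_bound_annual_rate:.7f}")
    print(f"That is under about {upper_bound_annual_rate * 100:.3f}% per server-year")
    # ~0.0000375 per server-year, but that is only an upper bound, not an estimate;
    # Mean Time to Failure itself still cannot be computed from zero failures.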

Where this leaves us is that large scale studies in IT simply do not exist and likely never will.  When they do appear they will be isolated and almost certainly crippled by the necessities of reality.  There is no means of monetizing studies on the scale necessary to be useful, mostly because failure rates of enterprise gear are so low while the equipment is so expensive, so third party firms can never cover the cost of providing this research.  As an industry we must accept that this type of data does not exist and actively pursue alternatives to having access to such data.  It is surprising that so many people in the field expect this type of data to be available when it never has been historically.

Our only real options, considering this vacuum, are to collect what anecdotal evidence exists (a very dangerous thing to do which requires careful consideration of context) and the application of logic to assess reliability approaches and techniques.  This is a broad situation where observation necessarily fails us and only logic and intuition can be used to fill the resulting gap in knowledge.