
Hiring IT: Speed Matters

After decades of IT hiring, one thing I have learned is that companies serious about hiring top talent always make hiring decisions very quickly.  They may spend months or even years looking for someone who is the right fit for the organization, but once they have found that person they act immediately.

This happens for many reasons, but mostly it comes down to wanting to secure resources once they have been identified.  Finding good people is an expensive and time consuming process.  Once you have found someone who is the right fit for the need and the organization, there is a strong incentive to reduce risk by securing them as quickly as possible.  A delay in making an offer gives that candidate time to receive another offer or decide to go in a different direction.  Spending months seeking a good candidate only to lose them over a delay of a few hours or days in making an offer is a ridiculous way to lose money.

Delays in hiring suggest either that the decision has not yet been made or that the process has not been given priority and that other decisions or actions inside the company are seen as more important than staffing.  And, of course, it may be true that other things are more important.

Other factors being more important is exactly the kind of thing that potential candidates worry about.  A legitimate competing priority might be a major crisis in the company, which is not a good sign in general.  Or worse, maybe the company simply does not see acquiring the best talent as important, and delays are caused by vacations, parties, normal work, or not even being sure that it wants to hire anyone at all.

It is extremely common for companies to go through hiring rounds just to “see what is out there.”  This does not necessarily mean that they will not hire if the right person comes along, but it can easily mean that the position is not fully approved or funded and might not even exist.  Candidates go through this regularly; a great interview may result in no further action, so they know better than to sit around waiting on positions, even ones that seem very likely.  The risks are too high, and if another good opportunity comes along they will normally move ahead with it.  Few things signal that an offer is not forthcoming, or that a job is not an ideal one, more clearly than delays in the hiring process.

Candidates, especially senior ones, know that good jobs hire quickly.  So if an offer has not arrived promptly, they often assume that offers are being made to other candidates or that something else is wrong.  In either case, candidates know to move on.

If hiring is to be a true priority in an organization, it must be treated as one.  This should go without saying, but good hiring slips through the cracks more often than not.  It is far too often seen as a background activity, one that is approached casually and haphazardly.  It is no wonder that so many organizations waste countless hours on unnecessary candidate searches and interviews, and untold time attempting to fill positions, when, for all intents and purposes, they are turning away their best options all the while.

Using Certifications to Drive Education

There is little denying that Information Technology is a field driven by certification.  What other field is filled so completely with certifications for every approach, niche, vendor, product, revision and career path?

Some industries, but relatively few, have broad certifications.  Automotive repair is a key one to which we are often exposed: it is possible, and probably likely, that repair specialists will be certified by some combination of industry certifying bodies as well as individual vendor or even product certification authorities.  But even in industries such as automotive repair it would be uncommon to find an individual technician with more than a handful of certifications.  In IT we often expect that even relatively junior staff may have a dozen or more certifications in addition to their experience and formal collegiate education.

Certifications are simply a way of life in IT, and probably for good reason.  Few fields require the same style of knowledge, blending vendor specifics and general best practices, and very few are as broad; IT is exceptional in what it covers and is arguably not a single field at all.  Certificates are one of the few ways of easily demonstrating areas of focus, level of skill, devotion and passion.

In my own career I have found another great use for certificates: as a driver for education.  We all know that in IT we need to learn new things all the time.  Continuous self-education is just part of what we do.  One of the challenges of this process, however, is remaining focused and setting goals.  It is easy to pick a broad, sweeping goal like “learning Windows Administration” but be left with little idea of where to begin, what avenues to head down, what knowledge is deemed foundational, and where to end up when all is said and done.  And when finished, how do we convey this newfound knowledge and experience to current or potential employers?

There are, of course, many ways to tackle this problem.  Finding good books on Windows Administration would be an excellent start.  Many great books exist and I think that most people would find themselves coming to this conclusion if asked to consider how to best go about learning a large scope topic such as this.  And for those that prefer, there are many free and paid video and computer based training options.  These can be wonderful as well.  Much of this would depend on the type of learner that you are.

One of the things that I like most about certifications is that they define a very clear goal and nearly always have a natural selection of training and educational materials associated with them either officially or unofficially.  Windows Administration is an excellent example, again.

If I am pursuing an education in Windows Administration, what better starting point could I have than the recommended educational path from the vendor in question?  This does not mean that vendor education is the be-all and end-all of the process, but it is almost always the best starting point for developing a firm foundation in the basics, establishing a good, rounded knowledge base from which to grow, and building a common base of knowledge shared with other professionals in a similar arena.  In this example, Microsoft itself provides clear starting points into Windows Administration, guidance on the knowledge expected before beginning, ideas for hands-on experiential learning, paths for focusing on specific areas of interest, and straightforward advancement from entry level certs to mid level to advanced.

Microsoft continues to be an excellent example because they publish widely available, high quality educational materials, primarily in book and online video form, and they even provide online labs and downloadable products for trial and education.  Using the official curriculum as a framework for learning is a great way to stay on target and drive forward.  Digging into hands-on experimentation with concepts as they are learned, treating the certification as mandatory so that less interesting areas and concepts still get attention, and learning the background and reasoning behind designs and practices are just some of the benefits of this approach.

Certifications are helpful from a purely career advancement standpoint, but I think looking at them in this manner does a disservice to the certification process.  Certifications are so much more valuable to you as an individual, and to any potential employer current or future, when they are not about the certification at the end, but rather about the education along the way.

Much of the best learning that I did in IT was done during my serious certification days.  Because I used books heavily, did many hands-on labs, and worked through one certification after another, taking the time to practice and learn rather than pushing to get the certificate as quickly as possible and move on, I gained a firm foundation in the basics, learned many product specifics, became aware of published best practices and learned how specific vendors saw, and sought to have me see, the IT world.

Certificates themselves are not enough, of course; great dedication and a desire to learn are still required.  The value of certifications, beyond getting to show them off once completed, is in acting as a guiding hand that keeps the ship on course as you gain the fundamental knowledge of a topic: on course within the narrow topic of a single certification, but also, in many cases, while charting a more complex route through a series of certifications or providing a logical progression over time.

Certifications also provide insight into the skills and areas of study that vendors and certification authorities deem to be most important.  Once upon a time Microsoft offered foci in Exchange, Proxy Server, SQL Server, IIS and even Internet Explorer.  Today foci still include Exchange and SQL Server but other concepts like application platforms have moved into “trivial, basic skillsets” and new areas of note include virtualization, cloud platforms and hosted software-as-a-service management.

Certifications also tend to force a coordinated effort on learning the latest features and trends within products, not just new product areas themselves.  This is especially important as vendors are likely to use certifications to begin to guide their expert base towards mindsets that will be useful in the future.  If a vendor knows what is coming down the pike in a revision or two they will sometimes begin to set up foundational knowledge or start to encourage different types of designs via the certification process early so that their supporters in the field are as prepared as possible when new products or features are released.

Remember when looking at certifications, like any education, they are a means and not an end.  By changing our perspective on certifications from one of proving what we know to being a tool for learning we can make the ends that much greater and obtain far better value from the process.


The High Cost of On Premises Infrastructure

IT Infrastructure is a challenge for any company and especially companies that are not large enough to implement their own, full scale datacenters.  Like many things in IT, major challenges come in the form of lacking specific, seldom used expertise as well as lacking the scale to utilize singular resources effectively.

This lack of scale can come in many forms.  The obvious one is manpower.  Managing a physical computing infrastructure requires unique skills, separate from IT itself, that often need to be available “around the clock.”  This can vary from security to electrical to cooling and facilities to “datacenter technician” style staff.  Of course, smaller businesses simply do without these roles, but this raises the cost incurred on a “per server” basis to maintain the infrastructure.  Large businesses and dedicated datacenters leverage economies of scale to make the cost of physically housing an IT infrastructure lower, either by actually lowering the cost directly or by raising the quality and reliability of the equipment.

The cost effectiveness of delivering power, cooling and datacenter services is only one aspect of the cost of IT infrastructure in a business.  The way many businesses attack this problem, by reducing infrastructure investment and staff, may counteract some of the up front costs of the infrastructure, but generally does so to the detriment of availability and equipment longevity.  Whether it is a lack of ISP redundancy, the absence of diesel electric generators, or shaving a year or two off of a server’s service life, these costs add up, often in ways that are difficult to identify and track.

We often see the effects of low quality infrastructure in the behavior and expectations of smaller businesses.  For example, in the enterprise datacenter an average server lifespan may be ten years or more, but smaller businesses often assume that a server is worn out and unreliable after seven or eight years.  This higher failure rate also leads to more concern about system failure.  Smaller businesses often see a higher, rather than a lower, need for redundant systems even when lower revenue would normally suggest otherwise.  They are prone to investing heavily in high availability mechanisms, often at great expense, to mitigate a perceived risk of high system failure rates that larger businesses may be less likely to see.  These factors combine to create high costs through more rapid system replacement and a tendency toward overbuying hardware, sometimes even doubling the otherwise necessary investment to protect against risks created by lower quality facilities management.
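To make the compounding effect concrete, here is a small back-of-the-envelope sketch.  All dollar figures and the seven-versus-ten-year lifespans are hypothetical assumptions chosen only to illustrate the shape of the math, not figures from any real deployment.

```python
# Illustrative annualized server hardware cost. All figures are
# hypothetical assumptions used only to show how shorter lifespans
# and redundancy overbuying compound.

def annual_hardware_cost(unit_price, lifespan_years, units):
    """Straight-line annualized cost of a fleet of servers."""
    return unit_price * units / lifespan_years

# Enterprise datacenter: ten-year lifespan, one server per workload.
enterprise = annual_hardware_cost(unit_price=8000, lifespan_years=10, units=1)

# Small business: seven-year assumed lifespan, hardware doubled for
# high availability to offset the perceived failure risk.
small_biz = annual_hardware_cost(unit_price=8000, lifespan_years=7, units=2)

print(f"Enterprise:     ${enterprise:,.0f}/year")   # $800/year
print(f"Small business: ${small_biz:,.0f}/year")    # ~$2,286/year
```

Under these assumed numbers, the shorter lifespan and doubled hardware together roughly triple the annualized hardware spend, even before counting the extra staff time involved.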

This concept is not unique to information infrastructure.  In the audiophile world, while huge investments in high quality audio equipment are common, it is a rule of thumb that fifty percent of audio quality comes from the equipment and fifty percent from the environment into which it is placed.  The same lesson applies to information infrastructure: lower cost gear may run longer and more reliably in a high quality physical environment than more expensive, better engineered equipment will in a lower quality one.

Of course, the most obvious components of lower reliability come from being unable to maintain redundant generators, independent power rails, adequate fuel supplies, uninterruptible power supply (UPS) units, steady temperature and humidity, air filtration and, of course, highly redundant multi-path WAN access.  These are the aspects we think of all the time, and they are almost completely out of reach of all but the largest companies.  Even simple things, like restricting server room access to essential staff only, can be an insurmountable challenge in a small environment.

These challenges create an opportunity for the SME, SMB and SOHO business markets to look for ways to leverage combined scale.  While many companies today turn to ideas such as hosted cloud computing, the costs associated with elastically expanding capacity often make this impractical, because this same market struggles the most to actually utilize that type of functionality.  Cloud computing can be an answer in some cases, but normally only for the very smallest of companies, for whom even a single server is too much scale, or for those companies large enough to have a DevOps-style automation infrastructure capable of scaling elastically with demand and workloads that suit this model.  These companies are the exception, not the norm.  More often, hosted cloud computing makes sense for only a specific subset of public-facing workloads, and only in some cases.

For the majority of companies too small to create the scale necessary to build out their own full scale IT infrastructure, the answer is likely to be found in colocation.  It must be noted that there are locational or environmental factors that can make off-premises infrastructure impossible, or at least impractical, in some cases.  Most businesses, however, will not be subject to these limitations.

Colocation tackles the cost challenges of the smaller business market by generating the scale necessary to make high quality, dedicated information infrastructure facilities possible.  This includes staff, WAN connectivity, environmental controls, power, and expertise.  Cost savings can often come from surprising places including lower power cost per kilowatt hour, lower cost of cooling and power conditioning and higher real estate density.

It is often believed that colocation is a premium service for businesses with needs above and beyond the average, but in reality colocation often can, and should, be chosen because it represents an opportunity to lower costs while also improving reliability.  In most cases colocation will actually bring a month-by-month cost savings, providing an impressive potential return on investment over time: the initial cost may be similar to other options, but the ongoing monthly cost can be lower and, perhaps more importantly, far more predictable, with fewer risks and unexpected expenditures.

Because the cost of services can be very granular, it is actually far easier for colocation to lower overall expenditure than is generally believed.  For example, a small business with just one or two servers would still need certain basics such as air conditioning, UPS support, footprint space and security, all dedicated to only a very small amount of equipment.  In a colocation facility those servers may represent less than one percent of the load on a large, high efficiency cooling system, may use just a small fraction of a large UPS, and so forth.

Colocation also frees IT staff from performing datacenter functions, for which they are generally untrained and poorly qualified, to focus on the tasks where they are more valuable and better utilized.  The datacenter tasks can then be performed by experienced, dedicated datacenter staff.

Calculating exact ROI can be challenging because each case is unique and depends heavily on the workloads, use cases, independent needs and environmental factors of the individual business and the colocation options being considered.  But it should be approached with the mindset that colocation does not present only an opportunity to improve the quality and reliability of IT infrastructure services, nor only a potential return on investment, but that it may, in fact, do both of these things on top of fundamentally lowering costs overall.
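One simple way to frame such an analysis is as a payback calculation: how many months of savings does it take to recover the one-time cost of moving?  The sketch below uses entirely hypothetical figures; a real analysis must substitute the business's own migration costs and monthly numbers.

```python
# Rough colocation payback sketch. The dollar figures below are
# hypothetical assumptions for illustration only.

def payback_months(migration_cost, onprem_monthly, colo_monthly):
    """Months until colocation's monthly savings repay the one-time move."""
    savings = onprem_monthly - colo_monthly
    if savings <= 0:
        return None  # colocation never pays back at these rates
    return migration_cost / savings

# Assumed figures: $12,000 one-time migration cost; $2,500/month on
# premises (power, cooling, UPS maintenance, space, ad hoc staff time)
# versus $1,900/month for a colocation contract.
months = payback_months(12000, 2500, 1900)
print(f"Payback in {months:.0f} months")  # Payback in 20 months
```

Even this toy model shows why predictability matters: the monthly delta, not the headline contract price, drives whether and when the move pays for itself.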

The Scale HC3 Advantage

For years I have been pitching ideas about how a performant, highly reliable architecture for the small and medium business market should be approached.  Concepts like maintaining high reliability across all layers of the architecture, reducing the total number of points of failure whenever possible, reducing complexity, leveraging mirroring instead of parity for storage, keeping storage close to compute and so forth.

What I like about the Scale HC3 approach to hyperconvergence is that it addresses all of these items and more.  Scale takes all of the components of a complete platform and reduces them into a single “package” that is deployed to every node in the system, making every layer of the system as redundant as all of the others.  Even the management console itself is automatically replicated and available from every node in the cluster, independently!

Compute and storage are integrated into the same package sitting together on each node.  This allows for the lowest latency in communications and the highest reliability, both per node, as there is less to fail by way of having fewer moving parts, and across the entire architecture, by replicating each stable building block so that nothing depends on any single node for compute, storage, management or orchestration.

Instead of the “traditional” approach so often associated with virtualization, where several layers of services are separate yet totally dependent upon each other, the Scale hyperconverged approach flattens these layers, taking us from the infamous “inverted pyramid of doom” to the opposite end of the spectrum: a flat, broad “brick.”  No single points of failure, no highly fragile components.  Keep each component reliable on its own, then make them redundant anyway.

Digging down to the storage layer, the Scale HC3 uses a RAIN approach that can most easily be described as a “network RAID 1e” design.  RAID 1e is an interesting twist on RAID 10 that adds a striping mechanism inside the mirrors, so that you never have anything but mirroring, but the mirrors are at the block level rather than the disk level and are spread out among many disks.  RAID 1e is more advanced and more flexible than RAID 10 and brings some minor performance and reliability advantages but, more importantly, allows a RAID 10 style approach to be used effectively over a network without resorting to the fragility of RAID 01.  Scale does not use actual RAID but true RAIN, which does intelligent block mirroring in the cluster with logic not only about the drives but also about the nodes on which the drives sit; RAID is used here only as an example for comparison.
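The node-aware part of that placement logic can be illustrated with a toy sketch: every block gets two replicas, always on drives in different nodes, so that losing an entire node can never destroy both copies of a block.  This is only an illustration of the general RAIN idea; Scale's actual placement algorithm is their own and is certainly far more sophisticated.

```python
# Toy sketch of RAIN-style block placement: each block is mirrored on
# two drives that sit on two *different* nodes. Hypothetical cluster
# layout; illustrates the node-aware invariant only.
import random

# Hypothetical cluster: node name -> list of drive IDs on that node.
cluster = {
    "node1": ["n1d1", "n1d2"],
    "node2": ["n2d1", "n2d2"],
    "node3": ["n3d1", "n3d2"],
}

def place_block(cluster):
    """Pick two drives on two distinct nodes for a block's mirror pair."""
    node_a, node_b = random.sample(list(cluster), 2)  # distinct nodes
    return ((node_a, random.choice(cluster[node_a])),
            (node_b, random.choice(cluster[node_b])))

# Place many blocks and check the invariant: no mirror pair shares a node.
placements = [place_block(cluster) for _ in range(1000)]
assert all(a[0] != b[0] for a, b in placements)
```

Contrast this with disk-level RAID 10, where a mirror pair is fixed to two specific disks; here each block independently picks its pair, which is what spreads rebuild load across the whole cluster after a failure.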

The Scale HC3 also adds powerful hybrid tiering of spinning disk and solid state drives, using a heat map approach to determine which blocks are being used regularly and which are predominantly idle.  This allows the system to intelligently move blocks from slow spinning storage to high performance SSD without needing all storage to sit on SSDs, keeping cost down while allowing for large capacity.  The heat map approach, coupled with a simple priority management system, makes this nearly transparent to end users.
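The core of a heat map is simple: count accesses per block and keep the hottest blocks on the fast tier.  The following is a greatly simplified, hypothetical sketch of that idea and not a description of Scale's implementation, which also factors in its priority management system.

```python
# Minimal sketch of heat-map tiering: count accesses per block and
# keep the hottest blocks on the SSD tier, the rest on spinning disk.
# Hypothetical and greatly simplified versus any real implementation.
from collections import Counter

class TieredStore:
    def __init__(self, ssd_slots):
        self.heat = Counter()       # block id -> access count (the "heat map")
        self.ssd_slots = ssd_slots  # how many blocks fit on the SSD tier

    def access(self, block):
        """Record an I/O against a block, warming it up."""
        self.heat[block] += 1

    def ssd_resident(self):
        """The blocks currently promoted to the SSD tier."""
        return {b for b, _ in self.heat.most_common(self.ssd_slots)}

store = TieredStore(ssd_slots=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    store.access(block)

print(store.ssd_resident())  # the hottest two blocks: 'a' and 'b'
```

A production system would also decay old counts over time so that a block hot last month does not squat on SSD forever, which is part of what makes real tiering engines more involved than this sketch.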

Scale also takes the management of a complicated beast like a full virtualization stack and makes it simple and easy for a small business IT department to manage.  SMB IT departments are very busy places, pulled in many directions at once.  It is very important that their solutions, especially the critical ones on which the most depends, are elegant, well supported, effective and easy to manage.  Scale provides a solution that allows for growth, performance and high reliability while maintaining an astonishing degree of ease of use.  Even an IT department with all of the resources and experience to manage the most complex and demanding infrastructure can benefit by not wasting time where there is no competitive advantage and, instead, putting its efforts to use where they can most benefit the business.

The Scale HC3 allows small and medium business IT departments to stop worrying about how to design a system to meet their needs and instead focus on what is most important to their business and what provides them a unique, competitive advantage.


[Commissioned for the Scale Blog]