Using Certifications to Drive Education

There is little denying that Information Technology is a field driven by certification.  What other field is filled so completely with certifications for every approach, niche, vendor, product, revision and career path?

Some industries, but relatively few, have broad certifications.  Automotive repair is a key one to which we are often exposed, where it is possible, and probably likely, that repair specialists will be certified by some combination of an industry certifying body as well as an individual vendor or even a product certification authority.  But even in industries such as automotive repair it would be uncommon to find an individual technician with more than a handful of certifications.  In IT we often expect that even relatively junior staff may hold a dozen or more certifications in addition to their experience and formal collegiate education.

Certifications are simply a way of life in IT, and probably for good reason.  Few fields require the same style of knowledge, blending vendor specifics and general best practices, and very few industries are as broad; IT is exceptional in what it covers and is arguably not a single field at all.  Certifications are one of the few easy ways of displaying areas of focus, level of skill, devotion and passion.

In my own career I have found another great use for certifications: as a driver for education.  We all know that in IT we need to learn new things all of the time.  Continuous self-education is just part of what we do.  One of the challenges to this process, however, is remaining focused and setting goals.  It is easy to pick a broad, sweeping goal, like “learning Windows Administration”, but be left with little idea where to begin, which avenues to head down, what knowledge is deemed foundational and where to end up when all is said and done.  And when finished, how do we convey this newfound knowledge and experience to current or potential employers?

There are, of course, many ways to tackle this problem.  Finding good books on Windows Administration would be an excellent start.  Many great books exist and I think that most people would come to this conclusion if asked to consider how to best go about learning a large-scope topic such as this.  And for those who prefer, there are many free and paid video and computer-based training options.  These can be wonderful as well.  Much of this would depend on the type of learner that you are.

One of the things that I like most about certifications is that they define a very clear goal and nearly always have a natural selection of training and educational materials associated with them either officially or unofficially.  Windows Administration is an excellent example, again.

If I am pursuing an education around Windows Administration, what better starting point could I have than the recommended educational path from the vendor in question?  This does not mean that vendor education is the end-all, be-all of this process, but it is almost always the best starting point for developing a firm foundation in the basics, establishing a good, rounded knowledge base from which to grow and having a common base of knowledge shared with other professionals in a similar arena.  In this example, Microsoft itself provides clear starting points into Windows Administration, guidance on the common knowledge expected before beginning the process, ideas on how to do hands-on experiential learning, paths for focusing on specific areas of interest and straightforward advancement from entry-level certs to mid-level certs to advanced certs.

Microsoft continues to be an excellent example because they publish widely available, high-quality educational materials, primarily in book and online video form; they even provide online labs and downloadable products for trial and education.  Using the official curriculum as a framework for learning is a great way to stay on target and drive forward.  Digging into hands-on experimentation with concepts as they are learned, treating the certification as mandatory so that time is spent even on the less interesting areas and concepts, and learning the background and reasoning behind designs and practices are just some of the benefits of this approach.

Certifications are helpful from a purely career advancement standpoint, but I think looking at them in this manner does a disservice to the certification process.  Certifications are so much more valuable to you as an individual, and to any employer, current or potential, when they are not about the certification at the end, but rather about the education along the way.

Much of the best learning that I did in IT was done during my serious certification days.  Because I used books heavily, did many hands-on labs and worked through one certification after another, taking the time to practice and learn rather than pushing to get the certificate as quickly as possible and move on, I gained a firm foundation in the basics, learned many product specifics, became aware of published best practices and learned how specific vendors saw, and sought to have me see, the IT world.

Certifications themselves are not enough, of course; great dedication and a desire to learn are required.  The value of certifications, beyond getting to show them off once completed, is in being a guiding hand that keeps the ship on course as you gain the fundamental knowledge of a topic: on course within the narrow scope of a single certification, but also, in many cases, when charting a more complex route through a series of certifications or providing a logical progression over time.

Certifications also provide insight into the skills and areas of study that vendors and certification authorities deem to be most important.  Once upon a time Microsoft offered foci in Exchange, Proxy Server, SQL Server, IIS and even Internet Explorer.  Today foci still include Exchange and SQL Server but other concepts like application platforms have moved into “trivial, basic skillsets” and new areas of note include virtualization, cloud platforms and hosted software-as-a-service management.

Certifications also tend to force a coordinated effort on learning the latest features and trends within products, not just new product areas themselves.  This is especially important as vendors are likely to use certifications to begin to guide their expert base towards mindsets that will be useful in the future.  If a vendor knows what is coming down the pike in a revision or two they will sometimes begin to set up foundational knowledge or start to encourage different types of designs via the certification process early so that their supporters in the field are as prepared as possible when new products or features are released.

Remember when looking at certifications, like any education, they are a means and not an end.  By changing our perspective on certifications from one of proving what we know to being a tool for learning we can make the ends that much greater and obtain far better value from the process.


The High Cost of On-Premises Infrastructure

IT infrastructure is a challenge for any company, especially companies that are not large enough to implement their own full-scale datacenters.  Like many things in IT, the major challenges come in the form of lacking specific, seldom-used expertise as well as lacking the scale to utilize singular resources effectively.

This lack of scale can come in many forms.  The obvious one is manpower.  Managing a physical computing infrastructure requires unique skills that are separate from IT itself and are often needed “around the clock.”  These can vary from security to electrical to cooling and facilities to “datacenter technician” style staff.  Of course, smaller businesses simply do without these roles, but this raises the cost incurred on a “per server” basis to maintain the infrastructure.  Large businesses and dedicated datacenters leverage an efficiency of scale to make the cost of physically housing an IT infrastructure lower – either by actually lowering the cost directly or by raising the quality and reliability of the equipment.

The cost effectiveness of delivering power, cooling and datacenter services is only one aspect of the cost of IT infrastructure in a business.  The way many businesses attack this problem, by reducing infrastructure investment and staff, may counteract some of the up-front costs of the infrastructure, but it generally does so to the detriment of availability and equipment longevity.  Whether it is a lack of ISP redundancy, an absence of diesel electric generators or a year or two shaved off of a server’s service life, these costs generally add up, often in ways that are difficult to identify and track.

We often see the effects of low quality infrastructure in the behaviour and expectations of smaller businesses.  For example, in the enterprise datacenter an average server lifespan may be ten years or more, but smaller businesses often assume that a server is worn out and unreliable in seven or eight years.  This increase in failure rate also leads to more concern about system failure.  Smaller businesses often see a higher, rather than a lower, need for redundant systems even when lower revenue would normally suggest otherwise.  Small businesses are prone to investing heavily in high availability mechanisms, often at great expense, to mitigate a perceived risk of high system failure rates that larger businesses may be less likely to see.  These factors can combine to create a high cost through more rapid system replacement and a tendency towards overbuying hardware – sometimes even doubling the otherwise necessary investment to protect against risks created by lower quality facilities management.
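
To make the effect concrete, consider a rough back-of-the-envelope comparison.  Every figure below is a made-up assumption, not a measurement from any real environment, but it shows how a shorter assumed lifespan combined with defensive overbuying compounds into a much higher annualized hardware cost.

```python
# Illustrative only: how shorter lifespans and defensive overbuying compound.
# All prices and lifespans are hypothetical assumptions, not real data.

def annualized_hardware_cost(server_price, lifespan_years, redundancy_factor):
    """Rough annual hardware spend for one workload."""
    return server_price * redundancy_factor / lifespan_years

# Enterprise-grade facility: ten-year lifespan, one server per workload.
enterprise = annualized_hardware_cost(server_price=8000, lifespan_years=10,
                                      redundancy_factor=1)

# Small business server closet: shorter lifespan, hardware doubled "just in case".
small_business = annualized_hardware_cost(server_price=8000, lifespan_years=7,
                                          redundancy_factor=2)

print(f"Enterprise facility:   ${enterprise:,.0f} per year")
print(f"Small business closet: ${small_business:,.0f} per year")
print(f"Cost multiplier:       {small_business / enterprise:.1f}x")
```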

This concept is not unique to information infrastructure.  In the audiophile world, while huge investments in high quality audio equipment are common, it is a rule of thumb that fifty percent of audio quality comes from the equipment and fifty percent comes from the environment into which it is placed.  The same lesson applies to information infrastructure: lower cost gear may run longer and more reliably in a high quality physical environment than more expensive, better engineered equipment will in a lower quality one.

Of course the most obvious components of lower reliability come from being unable to maintain redundant generators, independent power rails, adequate fuel supplies, uninterruptible power supply units, steady temperature and humidity, air filtration and, of course, highly redundant multi-path WAN access.  These are the aspects we think of all the time, and they are almost completely out of reach of all but the largest companies.  Even simple things like restricting access to only essential server room staff can be an insurmountable challenge in a small environment.

These challenges create an opportunity for the SME, SMB and SOHO markets to look for alternatives that leverage combined scale.  While many companies today turn to ideas such as hosted cloud computing, the costs associated with elastically expanding capacity often make this impractical, as this same market is the one that struggles most to make use of that type of functionality.  Cloud computing can be an answer in some cases, but normally only for the very smallest of companies for whom a single server is too much scale, or for those companies so large that they have a DevOps-style automation infrastructure capable of scaling elastically with load demands and workloads that make sense for this process.  But these companies are the exception, not the norm.  More often, hosted cloud computing makes sense for only a specific subset of public-facing workloads and only in some cases.

For the majority of companies too small to create the scale necessary to build out their own full-scale IT infrastructure, the answer is likely going to be found in colocation.  It must be noted that there are, of course, potentially overarching locational or environmental factors that can make off-premises infrastructure impossible, or at least impractical.  Most businesses, however, will not be subject to these limitations.

Colocation tackles the cost challenges of the smaller business market by generating the scale necessary to make high quality, dedicated information infrastructure facilities possible.  This includes staff, WAN connectivity, environmental controls, power, and expertise.  Cost savings can often come from surprising places including lower power cost per kilowatt hour, lower cost of cooling and power conditioning and higher real estate density.

It is often believed that colocation is a premium service for businesses with needs above and beyond the average, but in reality colocation often is, and often should be, chosen because it represents an opportunity to lower costs while also improving reliability.  Colocation, in most cases, will actually bring a cost savings on a month-by-month basis, providing an impressive return on investment potential over time: the initial cost can be equal or similar to other approaches, but the ongoing monthly cost can be lower and, perhaps more importantly, the costs become far more predictable, with fewer risks and unexpected expenditures.

Because the costs of services are potentially very granular, it is actually far easier for colocation to lower the overall expenditure than is generally believed.  For example, a small business with just one or two servers would still need certain basics such as air conditioning and UPS support, plus footprint space and security, all dedicated to only a very small amount of equipment.  In a colocation facility those same servers may represent less than one percent of the load on a large, high efficiency cooling system, may use just a small fraction of a large UPS and so forth.

Colocation also frees IT staff from performing datacenter functions, for which they are generally untrained and poorly qualified, to focus on the tasks where they add the most value and which they were actually hired to do.  The datacenter tasks, meanwhile, can be performed by experienced, dedicated datacenter staff.

Calculating an exact ROI can be challenging because individual cases are unique and depend heavily on the workloads, use cases, independent needs and environmental factors of an individual business and the colocation options considered.  But it should be approached with the mindset that colocation is not merely an opportunity to improve the quality and reliability of IT infrastructure services, nor merely a potential return on investment, but that it may, in fact, do both of these things on top of fundamentally lowering costs overall.
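
As a rough illustration of how such a calculation might be framed, the sketch below compares a hypothetical on-premises monthly run rate against a hypothetical colocation fee plus a one-time migration cost.  Every number is an assumption invented for the example; the point is the shape of the break-even math, not real pricing.

```python
# Illustrative only: a simple break-even comparison between keeping equipment
# on premises and moving it to a colocation facility. Every figure here is a
# made-up assumption used to show the shape of the calculation.

ONPREM_MONTHLY = 1400   # power, cooling, UPS maintenance, space, WAN, etc.
COLO_MONTHLY = 900      # rack space, power, cooling and bandwidth at the facility
MIGRATION_COST = 6000   # one-time cost to move and re-cable the equipment

def months_to_break_even(onprem, colo, upfront):
    savings_per_month = onprem - colo
    if savings_per_month <= 0:
        return None  # colocation never pays for itself under these assumptions
    return upfront / savings_per_month

months = months_to_break_even(ONPREM_MONTHLY, COLO_MONTHLY, MIGRATION_COST)
print(f"Monthly savings: ${ONPREM_MONTHLY - COLO_MONTHLY}")
print(f"Break-even after roughly {months:.0f} months")
```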

The Scale HC3 Advantage

For years I have been pitching ideas about how a performant, highly reliable architecture for the small and medium business market should be approached.  Concepts like maintaining high reliability across all layers of the architecture, reducing the total number of points of failure whenever possible, reducing complexity, leveraging mirroring instead of parity for storage, keeping storage close to compute and so forth.

What I like about the Scale HC3 approach to hyperconvergence is that it addresses all of these items and more.  Scale takes all of the components of a complete platform and reduces them into a singular “package” that is deployed to every node in the system, making each layer of the system as redundant as all of the others.  Even the management console itself is automatically replicated and available from every node in the cluster, independently!

Compute and storage are integrated into the same package sitting together on each node.  This allows for the lowest latency in communications and the highest reliability, both per node, as there is less to fail by way of having fewer moving parts, and across the entire architecture, by replicating each stable building block so that nothing depends on any single node for compute, storage, management or orchestration.

Instead of the “traditional” approach so often associated with virtualization, where several layers of services are separate yet totally dependent upon each other, the Scale hyperconverged approach flattens these layers, taking us to the opposite end of the spectrum from the infamous “inverted pyramid of doom” to a flat, broad “brick.”  No single points of failure, no highly fragile components.  Keep each component reliable on its own, then make them redundant anyway.

Digging down to the storage layer, the Scale HC3 uses a RAIN approach which can most easily be described as a “network RAID 1e” design.  RAID 1e is an interesting twist on RAID 10 that adds a striping mechanism inside the mirrors: there is never anything but mirroring, but the mirrors are not at the disk level, they are at the block level and spread out among many disks.  RAID 1e is more advanced and more flexible than RAID 10 and brings some minor performance and reliability advantages but, more importantly, allows a RAID 10 style approach to be used effectively over a network without having to resort to the fragility of RAID 01.  Scale does not use actual RAID, but true RAIN, which does intelligent block mirroring across the cluster with logic not only about the drives but also about the nodes on which those drives sit.  We only use RAID as an example for comparison.
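
As a purely conceptual illustration of that idea (and only an illustration, not Scale's actual code), the sketch below writes two copies of every block and always places the mirror on a different node than the primary, so the loss of any single node never removes both copies of a block.

```python
# Conceptual sketch of "network RAID 1e"-style placement: every block is
# written twice and the two copies always land on different nodes. This is
# a generic illustration of the RAIN idea only, not any vendor's code.

from itertools import cycle

NODES = ["node-a", "node-b", "node-c"]

def place_blocks(block_ids, nodes=NODES):
    """Assign each block a primary and a mirror copy on a different node."""
    placement = {}
    node_cycle = cycle(range(len(nodes)))
    for block in block_ids:
        primary = next(node_cycle)
        mirror = (primary + 1) % len(nodes)   # always a different node
        placement[block] = (nodes[primary], nodes[mirror])
    return placement

for block, (primary, mirror) in place_blocks(range(6)).items():
    print(f"block {block}: primary on {primary}, mirror on {mirror}")
```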

The Scale HC3 also adds powerful hybrid spinning disk and solid state drive tiering, using a heat map approach to determine which blocks are being used regularly and which are predominantly idle.  This allows the system to intelligently move blocks from slow spinning storage to high performance SSD without needing all storage to sit on SSDs.  This keeps cost down while also allowing for large capacity.  The heat map approach, coupled with a simple priority management system, makes this nearly transparent to end users.
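
A generic heat-map tiering scheme, sketched below only to illustrate the technique and not to represent Scale's implementation, might simply count accesses per block and keep the hottest blocks, up to the SSD tier's capacity, on fast storage.

```python
# Conceptual sketch of heat-map tiering: track how often each block is read,
# then keep the hottest blocks on SSD and leave the rest on spinning disk.
# Capacity and access patterns below are hypothetical.

from collections import Counter

SSD_CAPACITY_BLOCKS = 3  # hypothetical: room for only three blocks on SSD

class HeatMapTier:
    def __init__(self, ssd_capacity=SSD_CAPACITY_BLOCKS):
        self.heat = Counter()
        self.ssd_capacity = ssd_capacity

    def record_read(self, block_id):
        self.heat[block_id] += 1

    def ssd_resident_blocks(self):
        """The hottest blocks, up to SSD capacity, belong on the fast tier."""
        return {block for block, _ in self.heat.most_common(self.ssd_capacity)}

tiers = HeatMapTier()
for block in [1, 1, 1, 2, 2, 3, 4, 4, 4, 4, 5]:
    tiers.record_read(block)

print("Promote to SSD:", sorted(tiers.ssd_resident_blocks()))
```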

Scale also takes the management of a complicated beast like a full virtualization stack and makes it simple and easy for a small business IT department to manage.  SMB IT departments are very busy places, pulled in many directions at once.  It is very important that their solutions, especially the critical ones on which the most depends, are elegant, well supported, effective and easy to manage.  Scale provides a solution that allows for growth, performance and high reliability while maintaining an astonishing degree of ease of use.  Even an IT department with all of the resources and experience to manage the most complex and demanding infrastructure can benefit by not wasting time where there is no competitive advantage and, instead, putting their efforts to use where they can most benefit the business.

The Scale HC3 allows small and medium business IT departments to stop worrying about how to design a system to meet their needs, and instead focus on what’s most important to their business and what provides them a unique, competitive advantage.


[Commissioned for the Scale Blog]

Using a Wiki for Quick Documentation

If your business is anything like the businesses with which I normally deal, one of the hardest items to tackle is documentation.  This can include all kinds of documentation, from human resources processes to accounting practices to core business procedures to the information technology department’s system records.  Businesses need good documentation for many reasons.

Traditionally, small businesses simply end up doing without key documentation and have to reinvent the wheel every time something comes up that the people currently working have not had a chance to memorize.  Larger businesses often place their limited documentation into Microsoft Word or Adobe PDF files and store them away on an unsearchable file server, or possibly even on paper – putting them into large, ringed binders that no one even knows exist, let alone how to find the necessary information within them.  These are not effective processes, but there is a simple solution.

The solution is a web-based application known as a Wiki.  Most people get their first introduction to a Wiki through the ubiquitous online encyclopedia Wikipedia which is built on a Wiki platform (MediaWiki, to be specific), but this is hardly the only use for a Wiki.  Wikis are simple document repositories designed to allow many editors to easily create and modify online documentation.  The whole concept of the Wiki is about being simple and easy.  The full name, Wiki Wiki, means “quick” or “fast” in Hawaiian.

Wikis have now been around for several years and have begun to become popular in many businesses.  Wikis are generally very lightweight and there are many vendors making both open source and proprietary Wiki products, in addition to several hosted services available online.  You can really pick a Wiki based on your particular needs.  Most Wiki products are free, and for the budget-conscious business there is no reason to consider a Wiki a cost center.  This is a simple product that your IT department should be able to roll out for you quickly and easily, giving you a documentation repository right away.

At first the idea of a Wiki is a bit foreign to most people.  On the Internet we often encounter Wikis used for system documentation, and this is becoming increasingly popular.  Wikis are often set up to allow anyone to log in and make documentation changes, which can be a good way to get started with your Wiki.  You can also start from the beginning with detailed user access controls, allowing only certain individuals to post documentation in the system instead of allowing a documentation free-for-all.  Your needs will depend upon your type of organization.

What makes the Wiki concept powerful is the ease with which anyone can hop on, search for the documents that they need and create or modify those documents if they cannot find the information for which they are looking.  The entire concept of the Wiki encourages staff to make use of the format.  Lowering the barrier to creating useful documentation is the best possible way to get documentation created, and because the documentation is so easy to modify it is far more likely to be kept up to date.

A common feature amongst Wiki systems is the tracking of changes to Wiki pages.  This means that if someone goes in and makes a change to a page, people using the Wiki system can view past versions of that document to see what changes have been made over time and by whom.  This feature also makes it very simple for a system administrator to roll back bad changes if someone is not posting appropriately.
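
Conceptually, the revision history behind this feature is very simple.  The sketch below is a minimal, generic illustration of the idea, storing each edit as a new revision so that an administrator can roll back a bad one; it is not modeled on any particular Wiki engine.

```python
# Minimal sketch of Wiki-style change tracking: every edit is stored as a new
# revision with its author, so older versions can be inspected or restored.
# A generic illustration only, not any specific product's code.

class WikiPage:
    def __init__(self, title):
        self.title = title
        self.revisions = []          # list of (author, text), oldest first

    def edit(self, author, text):
        self.revisions.append((author, text))

    def current(self):
        return self.revisions[-1][1] if self.revisions else ""

    def rollback(self):
        """Discard the most recent revision, restoring the one before it."""
        if len(self.revisions) > 1:
            self.revisions.pop()

page = WikiPage("Backup Procedure")
page.edit("alice", "Run backups nightly at 02:00.")
page.edit("bob", "Backups are optional.")          # a bad edit
page.rollback()                                    # administrator reverts it
print(page.current())                              # original text is back
```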

One of my personal favourite Wiki features is the idea of subscribing to a particular Wiki page either through email or an RSS feed.  The subscription model allows any staff members to be alerted to changes to documentation in which they take an interest.  These can be staff members for whom a particular Wiki page is critical to their job functions such as HR managers following changes to the corporate employment policies pages or just interested staff members who want to know when a page changes such as managers subscribing to the cafeteria’s lunch menu page or developers subscribing to a page about a particular software project’s status.

This method is a wonderful way to allow anyone to keep up with any publicly available knowledge without needing to interrupt the actual process to view status.  It is useful at every level of the organization and extremely simple.  So often organizations do a poor job of keeping everyone “in the loop” who needs to be, and with the Wiki subscription model everyone has the opportunity to take responsibility for keeping themselves informed through whatever method is most useful to them.
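
For staff who prefer automation over email or an RSS reader, most Wiki engines also expose recent changes in machine-readable form.  The sketch below assumes a MediaWiki-style api.php endpoint at a hypothetical internal address; the general approach of polling a recent-changes feed or API applies to most Wiki platforms.

```python
# Sketch of watching a Wiki for changes programmatically. MediaWiki, for
# example, exposes recent changes through its web API; the URL below points
# at a hypothetical internal Wiki and is only an illustration.

import json
from urllib.request import urlopen

API = ("https://wiki.example.com/api.php"
       "?action=query&list=recentchanges"
       "&rcprop=title%7Cuser%7Ctimestamp&rclimit=10&format=json")

def recent_changes(api_url=API):
    with urlopen(api_url) as response:
        data = json.load(response)
    return data["query"]["recentchanges"]

for change in recent_changes():
    print(f'{change["timestamp"]}  {change["title"]}  (by {change["user"]})')
```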

As I mentioned before, there are many Wiki products available on the market today.  There are enough that choosing one is actually a rather formidable task.  Some key differentiators between products include their use license, the data store architecture – typically filesystem based or database based – their platform dependence and their integration with other products and authentication systems.  Of course there is also the option of choosing a hosted Wiki service that hosts your Wiki online – mostly this is popular with companies using Wikis as a means of serving their customers rather than for private, internal documentation.  There are so many Wikis from which to choose that the site WikiMatrix is dedicated to helping you choose the Wiki that is best for you.

Before you dive into the world of exploring Wikis on your own, I will mention a few that are rather popular and worth looking into early in your Wiki decision-making process.  Popular Wiki platforms include MediaWiki, DokuWiki, TWiki and pmWiki.  These are just the tip of the Wiki iceberg but provide a good look into the features that you should expect to see throughout your search for the best Wiki for your implementation.  The Wiki choosing wizard on the WikiMatrix web site is a great place to begin as well.  Each of the Wikis that I have mentioned thus far is available for free, and rather than spending a lot of time studying their benefits you may wish to simply download one or more of them, install them on a spare server and give them a try.

In addition to stand-alone Wiki products like those we have mentioned here, there are also Wiki engines built into several enterprise content management and portal systems such as Microsoft’s SharePoint, Alfresco and Joomla.  For any business looking to make a larger investment in an enterprise content management system, having Wiki functionality built into that product can provide a single, unified Intranet web portal interface to serve many different internal documentation and document storage needs.

Wikis are powerful and affordable tools that small and medium businesses can leverage today, even in a climate of budget cuts and uncertainty, to document processes, ease documentation burdens and increase internal communication and efficiency.  It is unlikely that we will see the popularity of the Wiki concept wane; rather, Wikis are already beginning to take their place as a staple of the business documentation and communication process.