Should IT Embrace Subscription Licensing?

With big-name, traditionally boxed products like Microsoft Office and Adobe’s Creative Suite moving to subscription licensing models, we in IT have to examine this model and determine if and when it is right for our businesses.  In some cases, as with MS Office, we can choose between boxed products, volume license deals and subscription licenses, which gives us flexibility and many alternatives to consider.  With Adobe, however, non-subscription options have been dropped; if we want to use their product line, subscription pricing is our only option.  This trend will only grow as we move forward, and it is something the whole industry must face and understand because it cannot easily be avoided.

First we should understand why subscription models are good for the vendors.  Many people, especially in IT, assume that subscriptions are designed to extract higher fees from customers, and certainly any given vendor may raise prices in conjunction with changing models, but fundamentally subscription pricing is purely a licensing approach and does not imply an increase in cost.  It may, potentially, even mean a decrease.

Software vendors like subscription pricing for three key reasons.

The first is license management.  With traditional software purchases it was trivially easy for customers to install multiple copies of software, perhaps accidentally, causing a loss of revenue when software was used but not licensed.  License management was traditionally complicated and expensive for all parties involved.  Moving to subscription models makes it easy to clearly communicate licensing requirements and to enforce policies.

For customers purchasing software, this change is actually beneficial because it helps to eliminate illegitimate use of software and so lowers its overall cost.  By reducing the piracy rate, less of that cost needs to be passed on to legitimate businesses.  Whether this turns into lower prices for customers or higher margins for vendors, it is a benefit to all of the legitimate parties involved.

The second is eliminating legacy versions from support.  In traditional software and support models, customers might use old versions of software for many years, resulting in many different versions requiring support simultaneously.  Often this meant that support teams needed extensive training to serve a long tail of legacy customers, or that separate support groups were needed for different software versions.  This was extremely expensive, as support is a key cost in software development.  Likewise, development teams had to be split, with most resources focused on developing or fixing the current version while some developers spent their time patching and maintaining legacy versions that were no longer being sold.  These costs were often enormous, meaning that great energy was spent supporting customers who were not investing in new software, at the expense of resources for improving the software and supporting the best customers.  The move to subscription licensing generally eliminates support needs for legacy versions because all customers move to the latest versions all of the time.

Again, this is a move that greatly benefits both the vendor and good customers.  It is only sometimes a negative for customers who relied on being “expensive to maintain,” using old software for a long time rather than updating.  But commonly even those customers benefit from not running old software, even if this is not how they would operate if they had their druthers.  The benefits to the vendor and to “good” customers are very large; the penalty to customers that were formerly not profitable is generally very small.

The third reason, which is really a combination of the above, is that customers who previously depended on buying a single version of a product and continuing to use it for a very long time, likely many years past the end of support, are effectively eliminated.  These customers, lacking a means to buy in this traditional manner, are normally either lost as customers (which is not a financial loss, as they were not very profitable) or converted into higher profit customers, even if begrudgingly.  This makes vendors very happy: it separates the wheat from the chaff, so to speak, cutting loose customers that were not making them money and creating more customers that are.

Now that we have seen why vendors like this model, and why we are likely to see more of it in the future as large, leading vendors both demonstrate the financial value of the change and condition customers to think in terms of subscription license models, we will look at why IT departments and businesses should consider embracing this model for their own reasons.

To the business itself, subscription licensing offers significant value, especially to finance departments.  By moving to subscription licensing we generally shift from capital expenses (capex) to operational expenses (opex), which is typically seen as favorable.  But the value of subscriptions is far larger than that.  Subscription pricing gives cost predictability.  A finance department can accurately predict its costs over time and is rarely surprised.  In the old approach, software was largely forgotten until some need required an old package to be updated and suddenly a very large invoice would arrive with very little warning (often followed by large re-training expenses due to the possibly large gap in software versions.)  With subscription pricing, costs normally fluctuate fluidly with employee count.  As new employees are hired the finance department can predict exactly how much they will cost, and when employees leave subscriptions can be discontinued and costs reduced.  Only software that is truly used is purchased.  The need to overbuy to account for fluctuations or predicted growth no longer exists.  Subscription licensing also leverages the time value of money, allowing businesses to hold onto their funds for as long as possible and to pay only for what they use as they use it.
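
As a minimal illustration of that predictability, the sketch below assumes a purely hypothetical flat per-seat monthly price and simply scales it with headcount; real agreements will have their own tiers and terms.

```python
# Hypothetical figures for illustration only; not any vendor's actual pricing.
PRICE_PER_SEAT_PER_MONTH = 20.00   # assumed flat subscription price per user


def monthly_software_cost(headcount: int) -> float:
    """Predicted monthly spend: subscription cost tracks active users directly."""
    return headcount * PRICE_PER_SEAT_PER_MONTH


# Finance can forecast the effect of hiring or attrition before it happens.
for headcount in (45, 50, 55):
    print(f"{headcount} users -> ${monthly_software_cost(headcount):,.2f} per month")
```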

For IT, the benefits are even greater.  IT should benefit from a better relationship with finance and human resources as the costs and needs of incoming or outgoing users become better understood.  This eliminates some of the friction between these departments, which is always beneficial.

IT also benefits from the effective enforcement of best practices.  IT departments commonly struggle to convince businesses to invest in newer versions of software, which often results in support issues, unnecessary complexity and less than happy users.  With subscription pricing, IT is constantly supplied with the latest software for users which, in nearly all cases, is an enormous benefit both to IT and to the users of the software.  This eliminates much of the friction that IT experiences with the business and with management because updates become an external mandate rather than something that IT or the users must request.

IT benefits from easier license management on its end as well.  It is generally far easier to determine license availability and need.  Audits are unnecessary because licensing is generally handled (generally, as nothing technically requires this) via an authentication mechanism with the vendor, which means that unless specific effort is taken to violate licensing (cracking software or some other extreme measure), licensing accidents are unlikely and easy to correct.

IT may also benefit from an easier ability to handle complex licensing situations, such as providing a higher feature set for one user and not for another.  Licenses can often be purchased at a minimum level and upgraded as more needs are discovered.  The ability to easily customize per user and over time means that IT can deliver more value with less effort.

Many of the objections to subscription licensing are not actually about subscription licensing itself.  Often it is a perception of higher cost.  This is, of course, difficult to prove since any given company may choose to charge whatever it wants for different license options.  Microsoft offers both subscription and non-subscription license options for some of its key products, such as MS Office.  This gives us a chance to see how Microsoft views the cost differences and benefits and to compare the options so that we can find the most cost effective choice for our own business.  By keeping both models, Microsoft can be audited by its customers to keep the costs of each model in line.  However, by offering both, Microsoft also loses many of the benefits that pure subscription models bring, such as needing to support only a single version at a time.

Adobe, on the other hand, made the switch from traditional licensing to subscription licensing essentially all at once and appears to have decided to raise prices at the same time.  This is very misleading: it is Adobe raising the price, not the subscription model creating the price increase.  The benefits of subscription pricing are benefits of the model.  The pricing decisions of any given vendor are a separate matter and must be evaluated the same way any pricing evaluation is done.

The other common complaint that I have heard many times is an inability to “own” software.  This is a natural reaction, but one that IT and business units should not have.  In a business setting, software is not owned by people and we should have no emotional ties to it.  Software is just another tool for completing our work, and whatever gives us the best ability to do that, at the best price, is what we want.  From a purely business perspective, owning software is irrelevant.  The desire to own things is a human reaction that is not conducive to good business thinking.  It is also worth pointing out that IT should never have this reaction in the first place; it is the business, not the IT department or the IT professionals, that owns software in the business.  IT simply selects, deploys, configures and manages that software on behalf of the business it supports.

Overall I truly believe that subscription licensing models are, in general, good for nearly everyone involved.  They make vendors more viable and profitable while making it easier for IT departments to deliver better value to their users, often while enforcing best practices that businesses would otherwise be tempted to avoid.  The improved profitability may also encourage vendors to pursue niche software titles that would previously have been unaffordable to create and support.  Vendors, IT and end users are nearly universal winners, while businesses face the only real grey area, where pricing may or may not be beneficial to them in this model.

Originally posted on the StorageCraft Blog.

The Weakest Link: How Chained Dependencies Impact System Risk

When assessing system risk scenarios it is very easy to overlook “chained” dependencies.  We are trained to look at risk at a “node” level, asking “how likely is this one thing to fail?”  But system risk is far more complicated than that.

In most systems there are some components that rely on other components. The most common place that we look at this is in the design of storage for servers, but it occurs in any system design.  Another good example is how web applications need both application hosts and database hosts in order to function.

It is easiest to explain chained dependencies with an example.  We will look at a standard virtualization design with SAN storage to understand where failure domain boundaries exist, where chained dependencies exist and what role redundancy plays in system level risk mitigation.

In a standard SAN (storage area network) design for virtualization you have virtualization hosts (which we will call the “servers” for simplicity), SAN switches (switches dedicated to the storage network) and the disk arrays themselves.  Each of these three “layers” is dependent on the others for the system, as a whole, to function.  In the simplest possible setup, with one server, one switch and one disk array, we very clearly have three devices representing three distinct points of failure.  Any one of the three failing causes the entire system to fail.  No one piece is useful on its own.  This is a chained dependency, and the chain is only as strong as its weakest link.

In our simplistic example, each device represents a failure domain.  We can mitigate risk by improving the reliability of each domain.  We can add a second server and implement a virtualization layer high availability or fault tolerance strategy to reduce the risk of server failure.  This improves the reliability of one failure domain but leaves two untouched and just as risky as they were before.  We can then address the switching layer by adding a redundant switch and configuring a multipathing strategy to handle the loss of a single switching path, reducing the risk at that layer.  Now two failure domains have been addressed.  Finally we must address the storage failure domain, which is done similarly by adding redundancy through a second disk array that is mirrored to the first and able to fail over transparently in the event of a failure.

Now that we have beefed up our system, we still have three failure domains in a dependency chain.  What we have done is make each “link” in the chain, each failure domain, more resilient on its own.  But the chain still exists.  This means that the system, as a whole, is less reliable than any single failure domain within the chain is alone.  We have made something far better than where we started, but we still have multiple failure domains, and these risks add up.
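
To see how those risks add up, here is a minimal sketch, assuming independent failures and purely illustrative availability figures, comparing the three-link chain before and after redundancy is added to each failure domain.

```python
# Illustrative only: 99% per-device availability is an assumption, not a
# measured figure for any real server, switch or disk array.

def redundant_domain(device_availability: float, copies: int = 2) -> float:
    """A failure domain with redundant devices fails only if every copy fails."""
    return 1 - (1 - device_availability) ** copies

def chained(*domains: float) -> float:
    """A dependency chain is up only when every domain is up, so availabilities multiply."""
    total = 1.0
    for availability in domains:
        total *= availability
    return total

device = 0.99  # hypothetical availability of any single server, switch or array

simple = chained(device, device, device)
beefed = chained(*(redundant_domain(device) for _ in range(3)))

print(f"one of each device        : {simple:.4%}")  # about 97.03%: three weak links
print(f"redundancy in every domain: {beefed:.4%}")  # about 99.97%: stronger links, same chain
```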

What is difficult in determining overall risk is that we must assess the risk of each item, then determine the new risk after mitigation (through the addition of redundancy) and then find the cumulative risk of all of the failure domains together in a chain to arrive at the total risk of the entire system.  It is extremely difficult to determine the risk within each failure domain, as the manner of risk mitigation plays a significant role.  For example, a cluster of storage disk arrays that fails over too slowly may result in an overall system failure even when the storage cluster itself appears to have worked properly.  Even defining a clear failure can therefore be challenging.

It is often tempting to take a “from the top” assessment of risk, which is very dangerous but very common among people who are not regular risk assessment practitioners.  The tendency is to look only at the “top most” failure domain, generally the servers in a case like this, and to ignore any risks that sit beneath that point, treating them as “under the hood” rather than part of the risk assessment.  It is easy to ignore the more technical, less exposed and more poorly understood components like networking and storage and to focus on the relatively easy to understand and heavily marketed reliability aspects of the top layer.  This “top view” means that the risks under the top level are obscured and generally ignored, leading to high risk without a good understanding of why.

Understanding the concept of chained dependencies explains why complex systems, even with complex risk mitigation strategies, often end up far more fragile than simpler systems.  In our example above, we could do several things to “collapse” the chain, resulting in a more reliable system as a whole.

The most obvious component which can be collapsed is the networking failure domain.  If we were to remove the switches entirely and connect the storage directly to the servers (not always possible, of course) we would effectively eliminate one entire failure domain and remove a link from our chain.  Now instead of three links, each of which has some potential to fail, we have only two.  Simpler is better, all other things being equal.

We could, in theory, also collapse the storage failure domain by moving from external storage to storage local to the servers themselves, taking us from two failure domains down to a single one.  The remaining domain, of course, carries more complexity than it did before the collapse, but the overall system complexity is greatly reduced.  Again, this is with all other factors remaining equal.
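
Using the same kind of assumed, illustrative figures, a quick calculation shows what removing links from the chain does on its own, with everything else held equal:

```python
# Assumed figure: each remaining failure domain, with its redundancy, at 99.99%.
domain = 0.9999

three_links = domain ** 3   # servers + SAN switches + disk arrays
two_links   = domain ** 2   # switching layer collapsed away
one_link    = domain        # storage local to the servers: a single domain

print(f"three links: {three_links:.4%}")   # about 99.9700%
print(f"two links  : {two_links:.4%}")     # about 99.9800%
print(f"one link   : {one_link:.4%}")      # 99.9900%: fewer links, less to fail
```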

Another approach to consider is making single nodes more reliable on their own.  It is trendy today to look at larger systems and approach risk mitigation that way, by adding redundant, low cost nodes to add reliability to failure domains.  But traditionally this was not the default path to reliability.  It was far more common in the past, as shown by the former prevalence of mainframes and similarly classed systems, to build high degrees of reliability into a single node.  Mainframes and high end storage systems, for example, still do this today.  This can actually be an extremely effective approach, but it fails to address many scenarios and is generally extremely costly, often magnified by a need to have systems partially or even completely maintained by the vendor.  It tends to work out only in special niche circumstances and is not practical on a more general scope.

So in any system of this nature we have three key risk mitigation strategies to consider: improve the reliability of a single node, improve the reliability of a single domain or reduce the number of failure domains (links) in the dependency chain.  Putting these together prudently can help us achieve the risk mitigation level appropriate for our business scenario.

Where the true difficulty exists, and will remain, is in the comparison of different risk mitigation strategies.  The risk of a single node can generally be estimated with some level of confidence.  A redundancy strategy within a single domain is far harder to estimate: some redundancy strategies are highly effective, creating extremely reliable failure domains, while others can actually backfire and reduce the reliability of a domain!  The complexity that often comes with redundancy strategies is never without caveat, and while it will typically pay off, it rarely carries the degree of reliability benefit that is initially expected.  Estimating the risk of a dependency chain is that much more difficult again, as it requires a clear understanding of the risks associated with each of the failure domains individually as well as of the failure opportunities at the domain boundaries (like the storage failover delay noted earlier.)

Let’s explore the issues around determining risk in two very common approaches to the same scenario building on what we have discussed above.

Two extreme examples of the situation we have been discussing are a single server with internal storage used to host virtual machines versus a six device “chain”: two servers using a high availability solution at the server layer, two switches with redundancy at the switching layer and two disk arrays providing high availability at the storage layer.  If any large factor is lopsided we can generally provide a pretty clear estimate of relative risk.  If any of the chained failure domains lacks reliable redundancy, for example, we can pretty clearly determine that the single server is the more reliable overall system, except where an extreme amount of reliability has been built into that one non-redundant node, which is generally an impractical strategy financially.  But with each failure domain maintaining redundancy we are forced to compare the relative risks of intra-domain reliability (the redundant chain) vs. inter-domain reliability (the collapsed chain, the single server.)

With the two entirely different approaches there is no reasonable way to assess the comparative risks of the two means of risk mitigation.  It is generally accepted that the six (or more) node approach with extensive intra-domain risk mitigation is the more reliable of the two, and this is generally true.  But it is not always true, and rarely does this approach outperform the single node strategy by a truly significant margin, while commonly costing four to ten times as much as the single server strategy.  That is potentially a very high cost for what is likely a small gain in reliability, together with a small risk of actually losing reliability.  Each additional piece of redundancy adds complexity that a human must implement, monitor and maintain, and with complexity and human interaction comes more and more risk.  Avoiding human error can often be more important than avoiding mechanical failure.
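
As a rough, back of the envelope comparison only, the sketch below uses assumed figures: a deliberately well-built single host on one side and three redundant domains of ordinary 99% devices on the other.  It models only independent hardware failure and none of the failover delays, configuration mistakes or human error discussed above, which is exactly why the on-paper gap should be read skeptically.

```python
# All figures are assumptions for illustration; no real hardware is being modeled.

single_server = 0.999                        # one higher-end host with internal storage
device        = 0.99                         # each of the six devices in the HA design
redundant_domain = 1 - (1 - device) ** 2     # a pair of devices in one failure domain
six_device_chain = redundant_domain ** 3     # servers, switches and arrays in a chain

print(f"single server   : {single_server:.4%}")     # 99.9000%
print(f"six-device chain: {six_device_chain:.4%}")  # about 99.9700%

# On paper the chain wins by well under a tenth of a percentage point while using
# six devices instead of one, and this naive model ignores the extra complexity
# and human factors that the chain introduces.
```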

We must also consider the cost of recovery.  When failure does occur, it is generally trivial to recover a simple system.  An extremely complex system, having failed, may take a great deal of effort to restore to working condition.  Complex systems also require much broader and deeper experience and confidence to maintain.

There is no easy answer to determining the reliability of systems.  Modern information delivery systems are simply too large and too complex, with too many indeterminable factors, to be evaluated fully in all cases.  With a good understanding of chained dependencies, however, and of risk mitigation strategies, we can take practical steps to determine roughly relative risk levels, see how similar risk scenarios compare in cost, identify points of fragility, recognize failure domains and dependency chains, and appreciate how changes in system design move us clearly toward or away from reliability.

One Big Flat Network

There is a natural tendency for networks to become unnecessarily complicated, but there is great value in keeping networks clean and simple.  Simple networks are easier to manage, more performant and more reliable while generally being less expensive.  Every network needs a different level of complexity, and large networks will certainly need an extensive amount of it, but small businesses can often keep their networks extremely simple, which is part of what makes smaller businesses more agile and less expensive, giving them an edge over their larger counterparts.  This is an edge that they must leverage because they lack the enterprise advantage of scale.

There are two ways to look at network complexity.  The first is the physical network – the actual setup of the switches and routers that make up the network.  The second is the logical network – how IP address ranges are segmented, where routing barriers exist, etc.  Both are important to consider when looking at the complexity of your network.

Any network should be as simple as possible while still meeting all of its goals and requirements.

The first aspect we will address is the physically flat network.  Flattening a physical network can have a truly astounding effect on its performance and reliability.  In a very small network this could mean running all connections from a single switch.  Typically this is only possible for the very smallest networks, as switches rarely come in sizes above forty-eight or possibly fifty-two ports, but for many small businesses it is completely achievable.  It may require additional cabling in a building in order to bring all connections back to a central location, but it can often be attained, at least on a site by site basis.  Many businesses today have multiple locations or staff working from home, which can make the network challenges much greater, although in those cases each location can strive for its own simplicity.

As a network grows, the single switch concept can grow with it through switch stacking.  Stacked switches share a single switching fabric, or backplane.  When stacked they behave as a single switch but with more ports.  (Some switches do true backplane sharing and some mimic this with very high speed uplink ports and shared management over that link.)  A switch stack is managed as a single switch, making network management no more difficult, complex or time consuming for a stack than for a single switch.  It is common for a switch stack to grow to at least three hundred ports, if not more.  This allows for much larger physical site growth before the single switch approach must be abandoned.

In some cases, large modular switch chassis grow even larger than this, allowing four hundred or more ports in a single switch within a “blade like” enterprise switching chassis.

By being creative and looking at simple, elegant solutions it is entirely possible to keep even a moderately large network contained to a single switching fabric, allowing all network connections to share a single backplane.

The second area that we have to investigate is the logical complexity of the network.  Even in physically simple networks it is common to find small businesses investing a significant amount of time and energy in implementing unnecessary subnets or VLANs and all of the overhead that comes with them.

Subnetting is rarely necessary in a small or even a smaller medium-sized business.  Traditionally, going back to the 1990s, it was very common to keep subnets to a maximum of 256 addresses (a /24 subnet) because of packet collisions, broadcasts and other practical issues.  This made a lot of sense in an era when hubs were used instead of switches, broadcasts were common and network bandwidth was lucky to be 10Mb/s on a shared bus.  Today’s broadcast-light, collision-free, 1Gb/s dedicated channel networks experience network load in a completely different manner.  Where a subnet of that size was an extremely large network then, having more than 1,000 devices on a single subnet is a non-issue today.
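
For a concrete sense of the numbers, here is a tiny sketch using Python’s standard ipaddress module and two example RFC 1918 ranges: the classic /24 versus a single larger /22 that comfortably holds over a thousand devices.

```python
import ipaddress

# Example private (RFC 1918) ranges; any prefixes of the same sizes behave identically.
classic = ipaddress.ip_network("192.168.0.0/24")
larger  = ipaddress.ip_network("10.0.0.0/22")

# Subtract the network and broadcast addresses to get usable host counts.
print(f"/24 usable hosts: {classic.num_addresses - 2}")   # 254
print(f"/22 usable hosts: {larger.num_addresses - 2}")    # 1022
```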

These changes in how networks behave mean that small and medium businesses almost never need to subnet for reasons of scale and can comfortably use a single subnet for the entire business, reducing complexity and easing network management.  More than a single subnet may be necessary to support specific network segmentation, like separating production and guest networks, but scale, the reason traditionally given for subnetting, becomes an issue solely for larger businesses.

It is also tempting to implement VLANs in every small business environment.  Subnetting and VLANs are related and often confused, but subnets frequently exist without VLANs, while VLANs do not exist without subnets.

In large environments VLANs are a foregone conclusion and it is simply assumed that they will exist.  This mentality often filters down to smaller organizations, which are tempted to apply it to businesses that lack the scale that makes VLAN management make sense.  VLANs should be relatively uncommon in a small business network.

The most common place where I see VLANs used when they are not needed is in Voice over IP or VoIP networks.  It is a common assumption that VoIP has special needs that require VLAN support.  This is not true.  VoIP and the QoS that it sometimes needs are available without VLANs and often will work better without them.

VLANs really only become important either when management is needed at a scale larger than a single subnet can provision and the traffic cannot be physically segregated, or when specific network-layer security is needed, which is relatively rare in the SMB market.  VLANs are very useful and do have their place.  They are often used when a dedicated guest network is needed, but in a small business guest access is generally provided via a direct guest connection to the Internet rather than a quarantined network for guests.

The most common practical use of a VLAN in an SMB is likely to be a walled garden DMZ designed for quarantined BYOD remote access, where BYOD devices connect much like guests but can reach remote access resources over protocols like RDP, ICA or PCoIP.  VLANs would also be popular for building traditional DMZs for externally facing public services such as web and email servers, except that these services are not commonly hosted on the local network in today’s SMBs, so this classic use of VLANs in the SMB is rapidly fading.

Another case where VLANs are often used inappropriately is for a Storage Area Network or SAN.  It is best practice for a SAN to be a completely independent (air gapped), physically separate network unrelated to the regular switching infrastructure.  It is generally advised that a SAN be built on dedicated switches rather than created with VLANs or subnets.

It is tempting to add complex switching setups, additional subnets and VLANs because we hear about these things from larger environments, they are fun and exciting, and they appear to add job security by making the network more difficult to maintain.  Complex networks require higher end skills and can seem like a great way to use that networking certificate.  But in the long run this is bad both as a career strategy and as an IT strategy.  Network complexity should be added in a lab for learning purposes, not in production networks.  Production networks should be run as simply, elegantly and cost effectively as possible.

With relatively little effort, a small business network can likely be designed to be both physically and logically very simple.  The goal, of course, is to come as close as possible to creating a single, flat network structure where all devices are physical and logical peers with no unnecessary bottlenecks or protocol escalations.  This improves performance and reliability, reduces costs and frees IT resources to focus on more important tasks.

Originally posted on the StorageCraft Blog.

Starting the IT Clock Ticking

Everyone is always happy to tell you how important experience is over certifications and degrees when working in IT. Few things are so readily agreed upon within the industry. What is shocking, however, is how often that advice does not get translated into a practical reality.

New IT hopefuls, when asking for guidance, will be told the value of experience but then, by the very advice they receive, sent everywhere except towards experience. This makes no sense. When applying for IT jobs, hiring managers and human resources departments are interested in knowing when you started in IT and how many years you have been in the field. That’s a hard number and one that you can never change once it has been set. Your start date is a fact of your career with which you are stuck for the rest of your life. You can get a degree anytime. You can get certified anytime. But your entry date into the field is permanent; it is the most important thing for an IT professional hopeful to focus on.

Many things can qualify as the first “start date” in a career. What is important is getting into a real IT position, or a software development position, to affix that date as early as possible.  (Nearly everyone in the field accepts software engineering as experience directly relevant to IT even though it is technically not IT.)  This counts towards experience which can, in turn, count towards other things including eligibility for positions, pay increases or even vacation accrual and similar benefits. Often IT hopefuls do not think about the range of possibilities for establishing that entry date into the field and overlook opportunities, or they downplay the value of the entry date and opt out of opportunities that would have greatly benefited them, choosing to focus instead on more “socially accepted” activities that ultimately play a far smaller role in their overall career.

The most obvious example of an IT entry date is obtaining an entry level position in the field.  Because this is so obvious, many people forget that there are other options and can easily become overly focused on finding their first “normal” job, typically on a helpdesk, and may lose sight of everything else.

Even worse, it is common for assumptions to be made about how a first job is typically acquired and then, because of the assumed steps to get from A to B, for the focus to shift to those steps while the real goal is missed completely.  For example, it is often assumed that a college degree and industry certifications are requirements for getting an entry level position.  Certainly an education and certifications can make breaking into the industry much easier, but they are not themselves the goal; they are tools for achieving it.  Getting work to start a career is the goal, yet those extra steps often get in the way of career opportunities, and a loss of focus leads would-be IT pros to misstep and skip career opportunities because they have become fixated on proximate achievements like certifications rather than looking at their life from a goal level.

I have heard IT students ask many times whether they should take a job offer in their chosen career or continue with a degree path instead.  Even if the job is very good, it seems that almost universally the choice is made to turn down the critical professional position because the student has lost focus and is thinking of the proximate goal, the education, while forgetting the true goal, the career.  This reaction is far more common than most would realize and very damaging to students’ prospects.  Perhaps they feel that, since an opportunity came along before they had completed their studies, good entry level positions must be common and easy to acquire; perhaps they have simply forgotten why they were going to school in the first place; or perhaps they are not concerned with their careers and wish to spend their time relaxing in college before taking that next step.  Many students probably fear that they will not be able to complete their education if they take a position in IT first, but there are very good options that allow for both the critical needs of their career and the completion of their education.  Taking a career position need not have a negative impact on the ability to complete an education if that education is still deemed important.

There are several avenues that allow for starting the “career clock”, as I like to think of it.  The easiest for most people, especially those relatively young, is to find an internship.  Internships can be found from a very young age, middle school or early high school, and generally into the mid or even late twenties.  Internships can be amazingly valuable because they often allow the earliest entry into the field (particularly unpaid internships), generally many years earlier than other options, and come with the fewest up front expectations.  Students pursuing internships from a young age can often get a career jump of two to ten years on their non-interning counterparts!  The ability to leap forward in your career can be dramatic.  Internships abound and few students take the time and effort to invest in them.  Those students honestly interested in an internship will likely have no problem securing one.

Internships can be much more valuable than regular jobs because they, by definition, should include some amount of mentorship and projects designed to educate.  An entry level job typically focuses on simple, highly repeatable tasks that teach relatively little, while a real internship should focus on growing and developing skills and an understanding of the IT discipline.  Because of this, a good internship will generally build a resume and establish experience much faster than most other methods, often allowing a wider range of exposure to different areas of IT.

Another good path for getting into IT as early as possible is volunteer work.  This is a little like interning, except that it requires more effort and determination on the part of the hopeful IT professional and lacks the expectation of mentoring and oversight.  A volunteer role is always unpaid but, because of this, often offers a lot of flexibility and opportunity.  There are many places that need or welcome IT volunteers, such as churches, private schools and other non-profits running on tight budgets.  With volunteer work you will often get greater decision-making opportunities and will likely be exposed to the need to think of IT within financial constraints which, while typically tighter at a not for profit, exist in every instance of IT.  This business exposure is even better for resume building.

Volunteering is generally more difficult to do at a young age, and a level of maturity and knowledge is often, though not always, needed.  Volunteering at a larger non-profit that already has paid IT staff or more senior IT volunteers can combine volunteering with a nearly intern-like situation, whereas a smaller non-profit, such as a church, might mean dealing with IT alone, which can be very educational but potentially daunting and even overwhelming to a younger or nascent IT professional in the making.  A volunteer at a small non-profit may be in a position to run an IT shop, from top to bottom, before even being employed in a first traditional position.

Of course no single approach need be taken alone.  Interning with a for profit firm and volunteering as well can be even better, making for an even stronger and more valuable IT entry point.  Sometimes intern or volunteer work may continue even after traditional, paying employment is found because one pays the bills while the other builds the resume.

Even less traditional options exist, such as starting a business of your own, which is generally extremely difficult and often not possible at a young age, or finding traditional work while very young.  Starting a business will often teach a large volume of business skills and a small amount of IT ones, and it can be extremely valuable, but at a potentially devastating cost.  Compared to other approaches this is very risky under normal circumstances.  It certainly can be done but would rarely be considered the best choice.

What matters most is finding a position that establishes a starting point in IT.  Once that stake is driven into the proverbial ground it is set, and the focus can shift to skill acquisition, broader experience, education, certifications or whatever is needed to take the career to the next level.  All of those subsequent skills are soft; they can be enhanced as needed.  But that starting date can never be moved and is absolutely crucial.

It is often not well communicated to high school and college age IT hopefuls that these opportunities are readily available, or just how important they are.  All too often society, or the established education machine, encourages students and those of collegiate age to discount professional opportunities and focus on education to the detriment of their experience and long term careers.  IT and software development are not careers that are well supported by traditional career planning, and they are especially not well suited to people who wait to jump in until they feel “ready,” because there will always be those with ambition and drive doing so at a far younger age who will have built a career foundation long before most of their peers even consider their futures.  IT is a career path that rewards the bold.

There is no need to follow the straight and narrow traditional path in IT.  That path exists and many will follow it, but it is not the only path, and those who stray from it will often find themselves at a great advantage.

No matter what path you choose to take in your pursuit of a career in IT, be sure to be extremely conscious of the need to not just acquire skills but to establish experience and start the clock ticking.
