Standard Areas of Discipline Within IT

 

Information Technology and Business Infrastructure form an enormous field filled with numerous and extremely varied career opportunities, not just in the industries in which the work is done, but also in the type of work itself. Only rarely are any two IT jobs truly alike; the variety is incredible. However, certain standard career foci do exist and should be understood by everyone in the field, as they provide important terminology for mutual understanding.

It is important to note that, as in any field, a single person will commonly fill more than one role over the course of a career, and often at the same time. Just as someone may be a half-time burger cook and half-time cashier, someone may have their time split between different IT roles. But we need to know what those roles are and what they mean in order to convey value, experience and expectations to others.

These are what we refer to as "IT specializations": areas of specific focus and opportunity for deep skill within IT. They often represent not just job roles but, in large businesses, entire departments of career peers who work together. None of these areas of focus is more or less senior than any other; these are different areas, not levels. There is no natural or organic progression from one IT discipline to another; however, all IT experience is valuable, and experience in one discipline would be expected to prepare someone to learn and adapt to another area more quickly.

The terms "administration" and "engineering" are often applied today; these, again, are not levels, nor are they discipline areas. They refer to whether a role is focused on operations (the running of production systems) or on designing systems for deployment. The two cut across the discipline areas: the Systems discipline, for example, would need both administration and engineering workloads within it.

Systems. Shortened from "operating systems." Systems roles are focused on operating systems, normally those of servers (though not in all cases). This is the most broadly needed specialized IT role. Within systems, specializations tend to follow the platform, such as Windows, RHEL, SUSE, Ubuntu, AIX, HP-UX, Solaris, FreeBSD, Mac OS X and so forth. High-level specializations such as UNIX are common, with a single person or department servicing any system that falls under that umbrella, while larger organizations might split AIX, Solaris, RHEL and FreeBSD into four discrete teams to allow a tight focus on skills, tools and knowledge. Systems specialists provide the application platform on which computer programs (including databases) will run. Desktop support is generally seen as a sub-discipline of systems, and one that often intersects pragmatically with end user and helpdesk roles.

Platforms. Also known as virtualization or cloud teams (depending on the exact role), the platform discipline focuses on the abstraction and management (hypervisor) layer that sits, or can sit, between physical hardware and the operating system(s). This team tends to focus on capacity planning, resource management and reliability. Foci within platform specialization commonly include VMware ESXi, vCloud, Xen, XenServer, KVM, OpenStack, CloudStack, Eucalyptus, Hyper-V and so forth. With the advent of massively hosted platforms, there has also arisen a need for foci on specific hosted implementations of platforms such as Amazon AWS, Microsoft Azure, Rackspace, SoftLayer and so on.

Storage. Storage of data is so critical to IT that it has filtered out as its own, highly focused discipline. Storage specialists generally focus on SAN, NAS and object store systems. Focus areas might include block storage in general, or might drill down to specific products or product lines, such as EMC VMAX or HPE 3PAR. With recent growth in scale-out storage technologies, the storage arena is growing both in total size and in depth of skill expectation.

Databases. Similar to storage, databases provide the critical "backing" of information to be consumed by other departments. While databases and storage overlap conceptually, in practice the two are treated very differently: we think of storage as "dumb", "unstructured" or "bulk" storage and databases as "smart", "focused" or "highly structured" storage. At a fundamental level the two are actually quite hard to distinguish; in practice, they are extremely different. Database specialists work specifically on database services, but rarely create databases and certainly do not code database-connected applications. Like their systems counterparts, database specialists (often called DBAs) manage the database platform for other teams to consume. Database foci can be high level, such as relational or non-relational (NoSQL) databases, or, more commonly, a DBA will focus on one or more specific database products such as Informix, MS SQL Server, dBase, Firebird, PostgreSQL, MariaDB, MySQL, MongoDB, Redis, CouchDB and many more.

Applications. Applications are the final product that consumes all other platform components, from physical systems, platforms, systems, storage, databases and more. Applications are the ultimate component of the computational stack and can take a massive variety of forms. Application specialists would never use that term, but would be referred to as specialists on a specific application or set of applications. Some application families, such as CRM and ERP, are so large that an entire career might be spent learning and supporting a single one (such as an SAP ERP system), while in many other cases one might manage and oversee hundreds of small applications over a career span. Common application areas include CRM, ERP, email, web portals, billing systems, inventory tracking, time tracking, productivity and much more. Applications could include just about anything, and while some are high profile, such as an Exchange email system, others might be very trivial, such as a small desktop utility for quickly calculating mortgage rates.

Networking. Networks tie computers together and require much design and management of their own, often making them the second largest discipline within IT. Network specialists work on buses, hubs, switches, routers, gateways, firewalls, unified threat management devices, VPNs, network proxies, load balancers and other aspects of allowing computers to speak to each other. Networking specialists typically focus on a vendor, such as Cisco or Juniper, rather than on product types such as switches or routers. Networking is, along with systems, the best known and most commonly mentioned role in IT, even if the two are often confused. This role also supports the SAN (the actual network itself) for storage teams.

Security. Not truly an IT discipline in itself, but rather an aspect that applies to every other role. IT security specialists tend either to specialize by discipline (network security, application security) or to act in a cross-discipline role focused on security aspects as they cut across those domains. Security specialists and teams might focus on proactive security, security testing and even social engineering.

Call Center, NOC or Helpdesk. The front-line role for monitoring systems across other domains, taking incoming calls and emails, and assisting with triage and sometimes direct support for an organization, which may or may not include end users. This role varies heavily depending on who the direct "customer" of the service is and whether tasks are interrupt (monitoring) based or queue (ticket) based. Often the focus of this role is high-level triage, but it can cross dramatically into end user support. This discipline is often seen as a "helper" group to other teams.

End User Support. Whether sitting beside an end user in person (aka "deskside support") or working remotely (aka helpdesk), end user support roles work directly with individual end users to resolve issues, communicate with other support teams, train and educate, and so forth. This is the only IT role that would commonly have any interaction with non-IT teams (other than reporting "up" in the organization to management).

Hardware Technical Support. This role has no well-known name and is often identified only by the fact that it works with hardware. This role, or family of roles, includes the physical support and management of desktop or laptop devices; the support and management of physical servers, storage systems or networking devices; or the physical management of a datacenter or similar. This is the portion of IT that rubs shoulders with the "bench" field (considered to be outside of IT) and shares much grey area with it. Hardware support will often plug in and organize cables and generally works in support of other teams, predominantly platforms or systems. Separating IT hardware support from bench work is often nothing more than an "operational mindset" and most roles could potentially go in either direction. Placing desktops on desks is often seen as falling to bench, whereas racking, stacking and monitoring server hardware is generally seen as IT hardware.


 

It is often practical to define what IT is not, rather than what it is. Many roles are commonly assumed to be part of IT, but are not, and are so closely connected with the field that it is worth expressly pointing out that they are something else.

Project Management. PM is its own discipline, far more a part of management than of any other field, and has no direct ties to IT whatsoever. IT often utilizes PM roles, PMs often oversee IT projects, and IT companies or departments generally have PMs tasked to them; but the PM career itself is quite separate from IT, the same as any other management role.

No Cabling. IT is most certainly not an electrician's trade, and the running, termination and certification of building cabling is not even remotely within the scope of IT. Most IT departments will plug computers into their network ports, but this no more makes IT the electrical maintenance department than plugging in a lamp at home makes you an electrician. The physical cabling plant of a company remains part of the electrical and maintenance roles, clearly outside of IT.

No Programming. There is no programming role within IT. Software Engineering is a closely associated industry, but is not itself part of IT proper. The head of IT is seen as the CIO; the head of SE is seen as the CTO. The CIO is about business infrastructure – the "plumbing" of the organization. The CTO is about the engineering and creation of new tooling – often tooling that would then be used by the IT organization. The expectation is that IT would request tools from SE. That is not to say that IT roles never write code; they often do, but coding is not the product of IT, it is a tool in the toolset. An SE's job is to deliver code as the end product.

No DevOps. DevOps is not a role. DevOps is a modern term for a specific style of working within other roles. One can be a DevOps system admin, a DevOps network admin or a DevOps DBA, for example, but one cannot be simply "DevOps", as the term means nothing on its own. DevOps is a way of working, not a specific task, so we do not see it on the list, even though DevOps is an important concept in IT in general.

The Software RAID Inflection Point

In June 2001 something amazing happened in the IT world: Intel released the Tualatin-based Pentium IIIS 1.0 GHz processor. This was one of the first few Intel processors (IA32 architecture) to cross the 1 GHz clock barrier and the first of any significance. It was also special in that it had dual processor support and a double-sized cache compared to its Coppermine-based forerunners and its non-"S" Tualatin successor (which followed just one month behind). The PIIIS system boards were insanely popular in their era and formed the backbone of high performance commodity servers, such as the ProLiant and PowerEdge lines, in 2001 and for the next few years, culminating in the Pentium IIIS 1.4 GHz dual processor systems that were so important they kicked off the now famous HP ProLiant "G" naming convention. The Pentium III boxes were "G1".

What does any of this have to do with RAID? We need to step back and look at where RAID stood up until May 2001. From the 1990s through May 2001, hardware RAID was the standard for the IA32 server world, which mainly included systems like Novell NetWare, Windows NT 4, Windows 2000 and some Linux. Software RAID did exist for some of these systems (not NetWare), but servers were always struggling for CPU and memory resources; expending these precious resources on RAID functions was costly, applications had to compete with RAID for access, and systems would often choke on the conflict. Hardware RAID solved this by adding dedicated CPU and RAM just for these functions.

RAID in the late 1990s and early 2000s was also heavily based around parity striping, namely RAID 5 and, to a lesser degree, RAID 6, because disks were tiny and extremely expensive per unit of capacity, squeezing maximum capacity out of the available disks was of utmost priority, and risks like unrecoverable read errors (UREs) were so trivial at such small capacities that parity RAID was very reliable, all things considered. The factors were completely different from what they would be by 2009. In 2001, it was still common to see 2.1 GB, 4.3 GB and 9 GB hard drives in enterprise servers!
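
To make the capacity point concrete, here is a back-of-the-envelope sketch (not from the original article) of why URE risk was negligible on 2001-era disks but looms large on modern ones. It assumes the commonly cited consumer-drive URE rate of roughly one error per 10^14 bits read; the exact rate varies by drive class.

```python
import math

# Assumed, commonly cited consumer-drive URE rate: ~1 error per 1e14 bits read.
URE_RATE = 1e-14  # probability of an unrecoverable read error per bit read

def p_at_least_one_ure(bytes_read: float) -> float:
    """Probability of hitting at least one URE while reading this many bytes."""
    bits = bytes_read * 8
    return 1 - math.exp(bits * math.log1p(-URE_RATE))

# A 9 GB drive typical of 2001 versus a 4 TB drive typical of a decade later.
for label, size_bytes in [("9 GB drive", 9e9), ("4 TB drive", 4e12)]:
    print(f"{label}: {p_at_least_one_ure(size_bytes):.2%} chance of a URE on a full read")
```

Under these assumptions a full read of a 2001-era drive had well under a tenth of a percent chance of hitting a URE, while a single modern multi-terabyte drive pushes that chance into double digits, which is a large part of why parity RAID fell out of favor as capacities grew.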

Because parity RAID was the order of the day, and many drives were typically used in each server, RAID had more CPU overhead on average in 2000 than it did in 2010! The impact of RAID on system resources was very significant.
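
As a rough illustration of where that overhead comes from, the sketch below (illustrative only, with made-up block contents) shows the XOR arithmetic behind RAID 5 style parity: every stripe written requires computing a parity block, and a rebuild requires XOR-ing the surviving blocks, which is exactly the work that hardware controllers offloaded from the host CPU.

```python
def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR a set of equal-sized blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# One stripe of three data blocks (contents are made up for illustration).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)  # the parity block written to the remaining disk

# Lose the middle disk and rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```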

And that is the background. But in June 2001, the people who had been buying very low-powered IA32 systems suddenly had access to the Tualatin Pentium IIIS processors, with greatly improved clock speeds, efficient dual processor support and double-sized on-chip caches that presented an astounding leap in system performance literally overnight. With all this new power and no corresponding change in software demands, systems that had traditionally been starved for CPU and RAM suddenly had more than they knew how to use, especially as additional threads were available and most applications of the time were single threaded.

The system CPUs, even in the Pentium III era, were dramatically more powerful than the small CPUs on the hardware RAID controllers, which were often entry-level PowerPC or MIPS chips. The available system memory was often much larger than the hardware RAID caches, and investing in extra system memory was often far more effective and generally advantageous. With free capacity available on the main system, RAID functions could, on average, be moved from the hardware RAID cards to the central system and gain performance, even while giving up the additional CPU and RAM of the hardware RAID cards. This was not true on overloaded systems starved for resources, and it was most relevant for parity RAID, with RAID 6 benefiting the most and non-parity levels like RAID 1 and RAID 0 benefiting the least.

But June 2001 was the famous inflection point: before that date the average IA32 system was faster using hardware RAID, and after it, newly purchased systems would, on average, be faster with software RAID. With each passing year the advantage has leaned further towards software RAID as underutilized CPU cores, idle threads and spare RAM have exploded in abundance; the only factor leaning back towards hardware RAID has been the drop in parity RAID usage, as mirrored RAID took over as the standard when disk sizes increased dramatically and capacity costs dropped.

Today it has been more than fifteen years since the notion that hardware RAID would be faster was retired. The belief lingers on, due primarily to the odd "Class of 1998" effect, but it has long been a myth repeated by those who did not take the time to understand the original source material. Hardware RAID continues to have benefits, but performance has not been one of them for the majority of the time that we have had RAID, and it is not expected to ever return.

Legitimate University Programs Are Not Certification Training

The university educational process is meant to broaden the mind, increase exposure to different areas, teach students to think outside the box, encourage exploration, develop soft skills, and make students better prepared to tackle further learning, such as moving on to the trade skills needed for specific fields. The university program, however, is not meant to provide trade skills themselves (the skills used in specific trades); that is the role of a trade school. Students leaving universities with degrees are intended to be employable not because of specific skill sets learned at college, but because they are well prepared to learn on the job or move on to additional education for a specific job.

In the last two decades, led primarily by for-profit schools looking to make money quickly without regard for the integrity of the university system, there has been a movement, especially in the United States, for trade schools to become accredited (an extremely low-bar requirement that has no useful standing beyond legal qualification for educational minimums and should never be seen as a mark of quality) and sell trade degrees as if they were traditional university degrees. This has been especially prevalent in IT fields, where certifications are broadly known and desired; where acquiring properly skilled educational staff is expensive and essentially impossible at the scale necessary to run a full program; where degree areas are easily misunderstood by those entering their college years; and where the personality traits most common to people entering the field sadly make them easy prey for collegiate marketing drives. In addition, the promise of easy classes, of double dipping (getting the certs you need anyway and then getting a bonus degree for the effort), and the suggestion that having a degree and certs all at once will open doors and magically provide career options that pay loads of money triggers an emotional response that makes potential students less able to make rational financial and educational decisions. It is a predatory market, not an altruistic one.

Certifications play a fundamentally different role than a university education does. Unlike universities, certification is about testing very specific skills, often isolated by product or vendor, things that should never appear in any university program. A certification may be broad (and closer to collegiate work), as with the CompTIA Network+, which tests a wide range of basic networking knowledge and nothing specific to a vendor or product, yet it is still too specific to a single networking technology or group of technologies to be truly appropriate for a university; at best it leans in that direction. More common certifications, such as Microsoft's MCSE, Cisco's CCNA, or CompTIA's Linux+ or A+, are all far too product and vendor specific, far too "which button do I press" and far too little "what do the underlying concepts mean", for collegiate work.

Certifications are trade-related and a great addition to university studies. University work should prepare the student with broad thinking, critical thinking, problem solving and core skills like language, maths and learning. Applying that core knowledge should then make achieving certifications easier and more meaningful. A university degree should show a background in soft skills and breadth, while certifications should show trade skills and specific task capabilities.

Warning signs that a university is behaving improperly would include, in regard to this area of concern: overly specific programs that sound as if they are aimed at particular technologies, such as a degree in "Cisco Networking" or "Microsoft Systems"; certifications achieved during the university experience (double dipping, giving out a degree simply for having earned certs); or a program that advertises itself as preparing someone "for the job", expected to "get the student a great job upon completion", or expected to "increase salary". These are not the goals of proper university programs.

Critically evaluating any educational program is very important, as educational investments are some of the largest that we make in our lives, both monetarily and in terms of time commitment. It is essential to ensure that a program is legitimate and valuable, that it meets both our own goals and proper educational goals, and that it will be seen as appropriate by those who will evaluate it in the future (such as hiring managers). There are many aspects on which we must evaluate the university experience; this is only one, but it is a newer problem, suddenly very prevalent, and one that specifically targets IT and technical hopefuls, so it requires extra diligence in our industry.

 

New Hyperconvergence, Old Storage

We all dream of the day that we get to build a new infrastructure from the ground up, without any existing technical debt to hold us back: a greenfield deployment where we pick what is best, roll it out fresh and enjoy. But most of us live in the real world, where that is not very realistic and where we have to plan for the future while also working with what we already have.

Making do with what we have is a nearly inevitable fact of life in IT, and storage, when moving from an existing architecture to hyperconvergence, is no different. In a great many cases we will face a situation where an existing investment in storage is in place that we do not want to simply discard, but that does not fit neatly into our vision of a hyperconverged future.

There are obvious options to consider, of course, such as returning leased gear, retiring older equipment or selling still-useful equipment outright. These are viable options and should be considered. Eliminating old gear or equipment that does not fit well into the current plans can be beneficial, as we can simplify our networks, reduce power consumption and possibly even recoup some of our investment.

In reality, however, these options are rarely financially viable and we need to make more productive use of our existing technology investments. What options are available to us depends, of course, on a range of factors, but we will look at some examples of how common storage devices can be re-purposed in a new hyperconverged system to maintain their utility, either until they are ready to retire or, in some cases, indefinitely.

The easiest re-purposing of existing storage, and this applies equally to both NAS and SAN in most cases, is to designate them as backup or archival targets. Traditional NAS and SAN devices are excellent backup hardware and are generally usable by nearly any backup mechanism, regardless of approach or vendor. And because they are generic targets, if a mixture of backup mechanisms is used, such as agent-based, agentless and custom scripts, they can all work against the same target. Backups so rarely get the attention and investment that they deserve that this is not just the easiest, but often the most valuable, use of pre-existing storage infrastructure.
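
As a minimal sketch of how simple that reuse can be, the example below archives a directory onto an old NAS that has already been mounted on a backup host. The paths and names are assumptions for illustration, not anything prescribed here, and any commercial or scripted backup tool could target the same mount just as easily.

```python
import tarfile
from datetime import date
from pathlib import Path

# Assumed locations: the data to protect, and the re-purposed NAS already
# mounted (via NFS or SMB) at a local mount point on the backup host.
SOURCE = Path("/srv/app-data")
TARGET = Path("/mnt/nas-backup")

def backup_to_nas(source: Path, target_dir: Path) -> Path:
    """Write a dated compressed archive of `source` onto the NAS mount."""
    archive = target_dir / f"{source.name}-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(source), arcname=source.name)
    return archive

if __name__ == "__main__":
    print("wrote", backup_to_nas(SOURCE, TARGET))
```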

Of course anything that is appropriate for backups can also be used for archival storage. Archival needs are generally less common (only a percentage of firms need archival storage, while all need backups) and of lower priority, so this is more of an edge reuse case, but still one to consider, especially for organizations that may be working to re-purpose a large number of possibly disparate storage devices. It is worth noting, however, that moving to hyperconvergence tends to "flatten" the compute and storage space in a way that can create a value for lower performance, lower priority archival storage that did not exist, or was not so obvious, before the rearchitecting of the environment.

NAS has the unique advantage of being usable as general purpose network storage, especially for end users' home directories. NAS storage can be used in so many places on the network that it is very easy to keep using after moving core architectures. The most popular case is serving users' own storage needs, with the NAS connected directly to end user devices, which allows storage capacity, performance and network traffic to be offloaded from the converged infrastructure to the NAS. It would actually be very rare to remove a NAS from a hyperconverged network, as its potential utility is so high and apparent.

Both SAN and NAS can also be attached directly to the virtual machines running on top of a hyperconverged infrastructure. In this way they can continue to be utilized in a traditional manner until they are no longer needed or appropriate. While attaching network storage directly to a VM is not often the recommended approach, there are use cases for it, and it allows systems to keep behaving as they always have in the physical realm. This is especially useful for mapped drives and user directories via a NAS, much as we mentioned for end user devices, but the cases are certainly not limited to this.
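
For example, a guest VM can mount an existing NAS export exactly as a physical server would have. The sketch below does this for an assumed NFS export; the server name, export path and mount point are hypothetical, and the script must run with sufficient privileges inside the guest.

```python
import os
import subprocess

# Assumed names for illustration only.
NAS_EXPORT = "nas01.example.local:/export/home"
MOUNT_POINT = "/mnt/home"

def mount_nas(export: str, mount_point: str) -> None:
    """Mount an NFS export from the old NAS inside the guest VM."""
    os.makedirs(mount_point, exist_ok=True)
    subprocess.run(["mount", "-t", "nfs", export, mount_point], check=True)

if __name__ == "__main__":
    mount_nas(NAS_EXPORT, MOUNT_POINT)
```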

A SAN can provide much needed functionality for certain workloads that require shared block storage that is otherwise not available or exposed on the platform. Workloads in a VM will use the SAN as they always have, without even being aware that they are virtualized or converged. Of course we can also attach a SAN to a virtualized file server or NAS head running on our hyperconverged infrastructure, if the tiering for that kind of workload is deemed appropriate.
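
A minimal sketch of that direct attachment, assuming a Linux guest with open-iscsi installed and an imaginary portal address and target IQN, might look like the following; once logged in, the LUN appears to the workload as an ordinary local block device.

```python
import subprocess

# Assumed SAN details for illustration only.
PORTAL = "10.0.0.50"                        # iSCSI portal IP of the existing SAN
TARGET = "iqn.2001-05.com.example:vol1"     # target IQN exposed by the SAN

def attach_san_lun(portal: str, target: str) -> None:
    """Discover the SAN's targets and log in to one with open-iscsi."""
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True,
    )
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    attach_san_lun(PORTAL, TARGET)
```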

Working with existing infrastructure while implementing a new one does present a challenge, but one that we can tackle with creativity and a logical approach. Storage is a nearly endless challenge, and having existing storage to re-purpose may easily end up being exceptionally advantageous.
